Currently there is an option to set the proxy's memory and CPU requests via --proxy-memory and --proxy-cpu, but there is no way to set the limits without hand-editing the manifests.
Setting resource limits would prevent runaway usage in case of a memory or CPU leak.
Add options --proxy-memory-limit and --proxy-cpu-limit to set the limits.
What do you want to happen? Add any considered drawbacks.
Probably best not to overload the existing CLI options for requests.
No
Is there another way to solve this problem that isn't as good a solution?
Manually adjusting the manifests.
CLI
If you can, explain how users will be able to use this. Maybe some sample CLI
output?
helm get manifest mychart | linkerd inject --proxy-cpu-limit=100m --proxy-cpu=100m --proxy-memory-limit=128Mi --proxy-memory=100Mi - | kubectl apply -f -
Would produce output:
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 100Mi
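For context, the resources block above would land on the proxy sidecar container that linkerd inject adds to each patched pod spec. A rough sketch of where it would sit (the image tag here is a placeholder, not the real value):

```yaml
spec:
  containers:
  - name: linkerd-proxy              # sidecar added by linkerd inject
    image: gcr.io/linkerd-io/proxy:stable   # placeholder tag
    resources:
      limits:                        # set by --proxy-cpu-limit / --proxy-memory-limit
        cpu: 100m
        memory: 128Mi
      requests:                      # set by --proxy-cpu / --proxy-memory
        cpu: 100m
        memory: 100Mi
```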
Also related to: #2001 #1999
It's a really great idea!
I'd just like to say that it might be worth having the parameters that set the requests and limits use the same nomenclature that Kubernetes uses.
Something like: --proxy-memory-requests and --proxy-memory-limits.
I believe this could avoid some misunderstanding.
This is also important if a pod needs a specific qosClass. Because linkerd inject currently can't set the resource limits, it prevents a pod from having a qosClass of Guaranteed, even if all other containers in the pod meet the requirements.
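To illustrate the qosClass point: Kubernetes assigns the Guaranteed class only when every container in the pod has both CPU and memory limits set, and each limit equals the corresponding request. A minimal sketch (pod and container names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example          # hypothetical pod
spec:
  containers:
  - name: app            # hypothetical container
    image: nginx
    resources:
      limits:            # limits == requests on every container
        cpu: 100m        #   → qosClass: Guaranteed
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
```

If even one container (such as the injected proxy) is missing limits, the whole pod falls back to the Burstable class.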
TIL, I've not played around with the qosClass stuff. Another great reason to set limits as well.