Describe the bug
The selectors of the default-backend and controller deployments are identical.
With chart version 1.23.7:
```
$ helm template -n dev-nginx-ingress -x templates/controller-deployment.yaml . | yq .spec.selector
{
  "matchLabels": {
    "app": "nginx-ingress",
    "release": "dev-nginx-ingress"
  }
}
$ helm template -n dev-nginx-ingress -x templates/default-backend-deployment.yaml . | yq .spec.selector
{
  "matchLabels": {
    "app": "nginx-ingress",
    "release": "dev-nginx-ingress"
  }
}
```
This causes the HPA to fail with the error 'failed to get cpu utilization: missing request for cpu' (though this is really an HPA bug: https://github.com/kubernetes/kubernetes/issues/79365).
Adding the 'component' label back to the deployments' selectors fixes the HPA (sketched below).
Not sure whether this is something we should 'fix' in the chart or whether we should just wait for the HPA bug to be fixed.
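For illustration, here is a minimal sketch of what the controller deployment's selector could look like with the component label included. The helper template and values key used below are assumptions based on common chart conventions, not the chart's actual source:

```yaml
# Hypothetical controller-deployment.yaml selector: with the extra component
# label, controller pods no longer overlap with the default-backend pods.
selector:
  matchLabels:
    app: {{ template "nginx-ingress.name" . }}
    release: {{ .Release.Name }}
    component: "{{ .Values.controller.name }}"
```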
Version of Helm and Kubernetes:
kubernetes version: v1.14.3
helm version: v2.14.2
Which chart:
nginx-ingress
What happened:
The HPA fails to get metrics because it relies only on the deployment's selector.
What you expected to happen:
hpa works correctly
How to reproduce it (as minimally and precisely as possible):
a normal deployment of the chart
Anything else we need to know:
related to: https://github.com/kubernetes/kubernetes/issues/79365
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
This is still an issue.
If no resource requests are set on the default-backend pods, the HPA doesn't work at all (missing memory/cpu request), and if requests are set, I don't think the replica count is computed correctly, since the HPA also collects metrics from the backend.
There is a Kubernetes issue on that subject and a PR to make the HPA check ownership, but an easy, backward-compatible fix here would be to just add the component label to the selector.
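To make the failure mode concrete, here is a hypothetical HPA targeting the controller deployment (names and thresholds are invented for illustration). The HPA resolves pods through the target deployment's label selector, and since that selector also matches the default-backend pod, the backend's metrics (or its missing resource requests) end up in the calculation:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: dev-nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dev-nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 10
  # Utilization is averaged over every pod matched by the deployment's
  # selector, which here includes the default-backend pod as well.
  targetCPUUtilizationPercentage: 50
```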
Hit this issue recently.
While we wait for that to be fixed, here is an example kubectl patch command that can be put in a shell script to work around the HPA problem:
# change ${nginx_ingress} to the installed helm chart release name
$ kubectl patch deployment "${nginx_ingress}-controller" -p "{\"spec\":{\"selector\":{\"matchLabels\":{\"component\":\"controller\"}}}}"
edit: fixed command thanks to @soumeng09
@tkozma I might be missing something obvious here, but how would that patch help?
It will apply an extra label (component: controller) to the controller pods.
However, spec.selector.matchLabels on both the controller and backend deployments is unchanged; it still specifies only app and release, with no component.
(And that is what determines which pods the deployment points to.)
There's no way via the Helm values.yaml to add selector.matchLabels entries to either the controller or the backend.
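For anyone who wants to confirm this on a live cluster, one way to inspect what the deployments actually select on (the deployment names below follow the chart's usual <release>-controller / <release>-default-backend pattern with the release name from this thread; adjust namespace and names for your install):

```shell
# Print the matchLabels of both deployments, then the labels on the pods.
kubectl get deploy dev-nginx-ingress-controller \
  -o jsonpath='{.spec.selector.matchLabels}{"\n"}'
kubectl get deploy dev-nginx-ingress-default-backend \
  -o jsonpath='{.spec.selector.matchLabels}{"\n"}'
kubectl get pods -l app=nginx-ingress --show-labels
```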
@soumeng09 you are absolutely right, I mixed up the patch command.
Hi guys, is there no option in the chart to fix this? Going through all clusters and manually patching is an option, but it would be nice if this were fixed in the chart itself. I have upgraded my chart to the latest stable version and still see that the HPA cannot get the metrics because of this.
You can also set resources on the default backend, but I do not think this is a proper fix, as it will mix up the HPA calculations.
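For reference, the workaround being described is just giving the default backend CPU/memory requests via values (key names assumed from the stable chart's conventions; check the values.yaml of your chart version):

```yaml
# Workaround only: this silences 'missing request for cpu', but the backend
# pod is still averaged into the controller HPA's utilization.
defaultBackend:
  resources:
    requests:
      cpu: 10m
      memory: 20Mi
```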
Adding my voice that this is still an issue many months after it was first reported.
Edit: Looking at the charts in stable, there is a new values flag called useComponentLabel, which is false by default but, when set to true, adds the required label to split the deployments.
https://github.com/helm/charts/pull/21361
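A sketch of how enabling that flag might look in values; only the flag name useComponentLabel comes from the comment above, so the nesting shown here is an assumption to verify against the chart version you are running:

```yaml
# Assumed placement under both components; confirm against the chart's
# values.yaml before relying on it.
controller:
  useComponentLabel: true
defaultBackend:
  useComponentLabel: true
```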