Charts: [stable/nginx-ingress] Could not collect memory metrics

Created on 23 Jan 2020 · 10 comments · Source: helm/charts

Version of Helm and Kubernetes:
Helm v3, Kubernetes v1.15.7, metrics-server v0.3.3

Which chart:
stable/nginx-ingress

What happened:
When autoscaling is enabled, the HPA can't collect memory metrics, while CPU metrics are collected.

What you expected to happen:
The memory metric is collected, just like the CPU metric.

How to reproduce it (as minimally and precisely as possible):
Install nginx-ingress with the following values.yaml and describe the resulting HPA.
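
In practice the reproduction boils down to something like this (release name and HPA name are examples, not taken from the report):

# install the chart with the values.yaml below, then inspect the HPA it creates
helm install nginx-ingress stable/nginx-ingress -f values.yaml
kubectl get hpa
kubectl describe hpa <hpa-name>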

Anything else we need to know:
kubectl top pod shows both CPU and memory metrics:

[Screenshot: kubectl top pod output showing CPU and memory usage for the nginx-ingress pods]

But when describing the HPA, it shows "missing request for memory" and "unknown":
[Screenshot: kubectl describe hpa output showing "missing request for memory" and unknown targets]

values.yaml sets resource requests & limits:
[Screenshot: values.yaml with resource requests and limits set for the controller]
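
For context, the relevant part of that values.yaml was along these lines (a sketch only; the numbers are illustrative, not the exact values from the screenshot):

# values.yaml (illustrative sketch, not the reporter's exact values)
controller:
  autoscaling:
    enabled: true
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi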

Has anyone experienced this issue?



All 10 comments

+1
I also hit a similar issue and still have no idea how to fix it. In my case, though, neither the CPU nor the memory metric is collected, but kubectl top pod works fine.

P.S. I successfully set up HPA following this guide, which means HPA should be working fine within my cluster.

Same as @SpeedEX. We do have other services scaling properly, but nginx-ingress doesn't. While the error only mentions that memory metrics can't be collected, neither CPU nor memory metrics are, and as a consequence there is no way to scale.

There are a few bug reports floating around for this and I haven't had time to dig too deep. Personally, I've reverted my use of the HPA for nginx-ingress.

I know roughly what the issue is and why it's pretty confusing. I'm just a little unclear on where responsibility for fixing it lies (whether with Kubernetes or with the helm chart).

Ultimately the issue is that the HPA uses the label app=nginx-ingress to discover the pod metrics, but unfortunately the helm chart uses the same label for both the ingress controller and the default backend. In my case I had no resource requests/limits set up for the default backend, so I got the same error. If you configure the helm chart to set memory requests/limits for the default backend it should work (although incorrectly, since it aggregates metrics across different pods to drive the auto-scaling logic!).
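
To make the overlap concrete, the pod labels in the chart look roughly like this (simplified; the release label the chart also applies is omitted):

# controller pods
app: nginx-ingress
component: controller

# default-backend pods
app: nginx-ingress
component: default-backend

# the metrics lookup only matches on app: nginx-ingress,
# so usage from both sets of pods is aggregated together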

This is the most appropriate bug ticket in the Kubernetes project, although it conflates other issues too: https://github.com/kubernetes/kubernetes/issues/79365

Hope this helps others avoid wasting a few hours troubleshooting like I did! I wish I could address the root cause, but I haven't got the time, so I've simply opted out of using the HPA for this, knowing it's not currently in a working state.

Hi everyone, I've been digging all over the place and came across #20724. It seems the default-backend also requires its resource limits/requests for CPU/memory to be set, as @monobaila mentioned.

# values.yaml

controller:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 75
  resources:
    limits:
      cpu: 150m
      memory: 250Mi
    requests:
      cpu: 50m
      memory: 64Mi
defaultBackend:
  resources:
    limits:
      cpu: 40m
      memory: 40Mi
    requests:
      cpu: 20m
      memory: 20Mi

Although autoscaling now works, the memory utilization is reported as 114% / 75%, which means I have 11 replicas right now, even though kubectl -n ingress top pods shows memory sitting at around 80Mi. This is probably related to the label issue @monobaila mentioned above.
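
For a rough feel of how the overlapping selector can skew that number (the usage figures below are hypothetical, chosen only to show the arithmetic): the HPA essentially computes utilization as total usage divided by total requests across every pod its selector matches, so with the requests from the values above:

# hypothetical usage for one controller pod and one default-backend pod
controller pod:        usage ~80Mi / request 64Mi   (125% on its own)
default-backend pod:   usage ~16Mi / request 20Mi
combined utilization:  (80 + 16) / (64 + 20) ≈ 114%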

I can confirm the above. I added the resources to the backend and it worked.
Thanks @jackieluc

I hit this issue myself; the solution is based on @monobaila's comment and on how the selectors overlap between the controller and the default backend.

The deployments are scoped by app: nginx-ingress and then by a second label, "component", which is either "controller" or "default-backend".

By adding component: "default-backend" to spec.selector.matchLabels of the backend deployment and component: "controller" to spec.selector.matchLabels of the controller deployment, and applying the change, the two selectors no longer overlap and the HPA is able to pick up the metrics from the controller.
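
In manifest terms the change is roughly this on each Deployment (a sketch; the release label the chart also uses is omitted):

# controller Deployment
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: "controller"

# default-backend Deployment
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: "default-backend"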

Update: This can be done by enabling the "useComponentLabel" value for the controller and the default backend in values.yaml.
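
In values.yaml that looks something like the following (controller.useComponentLabel is mentioned in a later comment; the matching defaultBackend key is assumed by analogy):

# values.yaml (sketch)
controller:
  useComponentLabel: true
defaultBackend:
  useComponentLabel: true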

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

useComponentLabel

This works for me too, but only for new deployments. Existing ones fail when adding controller.useComponentLabel with:
nginx ingress helm cannot patch "nginx-ingress-controller" with kind Deployment MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
