Minikube: Horizontal pod autoscaler not able to get metrics in minikube deployment

Created on 1 Oct 2020  ·  6 comments  ·  Source: kubernetes/minikube


Steps to reproduce the issue:

  1. $ minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m
  2. $ minikube addons enable metrics-server
  3. Create a deployment .yaml with resource requests and limits:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: orion
  name: orion
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orion
  template:
    metadata:
      labels:
        app: orion
    spec:
      containers:
      - args:
        - -dbhost
        - mongo-db
        - -logLevel
        - DEBUG
        - -noCache
        name: fiware-orion
        image: fiware/orion:2.3.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1026
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 0.5Gi
      restartPolicy: Always
  4. $ kubectl -n test-1 autoscale deployment orion --min=1 --max=5 --cpu-percent=50
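The imperative kubectl autoscale command above can also be expressed declaratively, which makes the target utilization explicit and easy to version-control. A sketch, assuming the same namespace and deployment names as above (the file name hpa-orion.yaml is hypothetical):

```shell
# Declarative equivalent (sketch) of the `kubectl autoscale` command above.
cat > hpa-orion.yaml <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: orion
  namespace: test-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orion
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
EOF

# Sanity-check the generated manifest; on a live cluster it would be
# created with `kubectl apply -f hpa-orion.yaml`.
grep -q 'targetCPUUtilizationPercentage: 50' hpa-orion.yaml && echo OK
```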


Full output of failed command:

Command $ kubectl -n test-1 describe hpa orion returns:

Name:                                                  orion
Namespace:                                             udp-test-1
Labels:                                                <none>
Annotations:                                           CreationTimestamp:  Thu, 01 Oct 2020 14:00:46 +0000
Reference:                                             Deployment/orion
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (0) / 20%
Min replicas:                                          1
Max replicas:                                          5
Deployment pods:                                       1 current / 1 desired
Conditions:
  Type            Status  Reason                   Message
  ----            ------  ------                   -------
  AbleToScale     True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  ScalingLimited  False   DesiredWithinRange       the desired count is within the acceptable range
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedComputeMetricsReplicas  39s (x12 over 4m27s)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       24s (x13 over 4m27s)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API

Command $ minikube addons list returns:

|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| ambassador                  | minikube | disabled     |
| dashboard                   | minikube | enabled ✅   |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | enabled ✅   |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| kubevirt                    | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metallb                     | minikube | disabled     |
| metrics-server              | minikube | enabled ✅   |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| olm                         | minikube | disabled     |
| pod-security-policy         | minikube | disabled     |
| registry                    | minikube | disabled     |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
|-----------------------------|----------|--------------|

As the command output above shows, the metrics server appears to be working (the hpa metrics line reads "resource cpu on pods (as a percentage of request): 0%"), yet the events produced by the orion hpa report an error while computing the metrics:

horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API

What is the reason for this horizontal pod autoscaler not working properly?
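For context, when metrics are available the HPA controller scales using the ratio of current to target utilization, per the Kubernetes HPA documentation: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). When the resource metrics API returns nothing, this calculation cannot run at all, which is what the FailedGetResourceMetric condition above reflects. A minimal sketch of the rule (the function name is hypothetical):

```shell
# Sketch of the HPA scaling rule from the Kubernetes docs:
#   desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
desired_replicas() {
  local current=$1 current_util=$2 target_util=$3
  # integer ceiling division
  echo $(( (current * current_util + target_util - 1) / target_util ))
}

desired_replicas 1 100 50   # CPU at 100% of request, target 50% -> 2
desired_replicas 2 30 50    # CPU at 30% of request, target 50%  -> 2 (ceil of 1.2)
```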

Labels: addon/metrics-server, help wanted, kind/support

All 6 comments

Isn't there anyone who can help me with this?

It seems quite likely that the answer here will be general to Kubernetes rather than specific to minikube.

That said, it seems unlikely, but is it possible that the metrics-server needs to be started first? I'm not quite sure how this is supposed to work. More likely, this is related to one of these issues and may be indicative of a missing flag in either the controller or the metrics server. There are some hints in these issues:

Please let me know what you discover!

I have the same issue. @adr-arroyo Did you find a solution?

Unfortunately I haven't, @marcphilipp

If you happen to find a solution, please post it here :)

You have to add the following clusterrolebinding

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system

This is clearly a minikube bug

The existing clusterrole system:heapster is also outdated, so no stats for statefulsets or nodes are available.
So execute

kubectl delete clusterrole system:heapster 

and instead add the following clusterrole

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:heapster
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - nodes
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
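The two manifests above can be bundled into a single file and applied in one go. A sketch, using the stable rbac.authorization.k8s.io/v1 API for both objects (v1beta1 is deprecated on newer clusters) and a hypothetical file name heapster-rbac.yaml:

```shell
# Sketch: bundle the RBAC fix from the comment above into one manifest.
# On a live cluster this would be followed by:
#   kubectl delete clusterrole system:heapster
#   kubectl apply -f heapster-rbac.yaml
cat > heapster-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:heapster
rules:
- apiGroups: [""]
  resources: [events, namespaces, nodes, pods]
  verbs: [get, list, watch]
- apiGroups: [apps]
  resources: [deployments, statefulsets]
  verbs: [get, list, watch]
- apiGroups: [""]
  resources: [nodes/stats]
  verbs: [get]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

grep -c '^kind:' heapster-rbac.yaml   # prints 2 (one per document)
```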

Thank you for your contribution, @eddytruyen

I will try to test it in my environment
