Charts: stable/prometheus-adapter custom metrics failure

Created on 5 Nov 2019 · 12 comments · Source: helm/charts

Describe the bug
No custom metrics are being collected.

Version of Helm and Kubernetes:
Helm v2.15.2
K8S 1.15.5

Which chart:
stable/prometheus-adapter

What happened:
Seeing errors in prometheus-adapter pod:

I1105 05:39:29.600899       1 wrap.go:42] GET /healthz: (1.898558ms) 200 [[kube-probe/1.15] 10.195.5.143:45706]
I1105 05:39:29.657621       1 wrap.go:42] GET /openapi/v2: (1.841216ms) 404 [[] 100.113.195.128:40587]
I1105 05:39:35.670550       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (5.095939ms) 200 [[kube-controller-manager/v1.15.5 (linux/amd64) kubernetes/20c265f/controller-discovery] 100.113.195.128:40587]
I1105 05:39:38.181353       1 wrap.go:42] GET /healthz: (93.575µs) 200 [[kube-probe/1.15] 10.195.5.143:45770]
I1105 05:39:39.599022       1 wrap.go:42] GET /healthz: (110.15µs) 200 [[kube-probe/1.15] 10.195.5.143:45782]
I1105 05:39:48.185177       1 wrap.go:42] GET /healthz: (2.093117ms) 200 [[kube-probe/1.15] 10.195.5.143:45834]
E1105 05:39:48.337447       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:48.337537       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1105 05:39:48.338740       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (5.94554ms) 200 [[Go-http-client/2.0] 100.124.49.128:5087]
E1105 05:39:48.338893       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:48.339658       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:48.339722       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:48.339661       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:48.339848       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1105 05:39:48.341039       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (8.342336ms) 200 [[Go-http-client/2.0] 100.124.49.128:5087]
E1105 05:39:48.342180       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1105 05:39:48.343352       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (10.620066ms) 200 [[Go-http-client/2.0] 100.124.49.128:5087]
E1105 05:39:48.347286       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1105 05:39:48.348438       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1105 05:39:48.349610       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (17.005981ms) 200 [[Go-http-client/2.0] 100.124.49.128:5087]
I1105 05:39:48.350770       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (18.022796ms) 200 [[Go-http-client/2.0] 100.124.49.128:5087]
I1105 05:39:49.599244       1 wrap.go:42] GET /healthz: (112.809µs) 200 [[kube-probe/1.15] 10.195.5.143:45848]
I1105 05:39:58.181748       1 wrap.go:42] GET /healthz: (130.847µs) 200 [[kube-probe/1.15] 10.195.5.143:45904]
I1105 05:39:58.660409       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (4.355808ms) 200 [[Go-http-client/2.0] 100.113.195.128:63746]
E1105 05:39:58.669056       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:58.669087       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1105 05:39:58.669593       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (8.32555ms) 200 [[Go-http-client/2.0] 100.113.195.128:63746]
I1105 05:39:58.670262       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (9.020669ms) 200 [[Go-http-client/2.0] 100.113.195.128:63746]
I1105 05:39:58.671156       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (10.01689ms) 200 [[Go-http-client/2.0] 100.113.195.128:63746]
I1105 05:39:58.672050       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (11.132155ms) 200 [[Go-http-client/2.0] 100.113.195.128:63746]
E1105 05:39:58.711513       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:58.711541       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1105 05:39:58.712711       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1105 05:39:58.712711       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed

What you expected to happen:
Custom metrics to work.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
Install command:

helm install --name prometheus-adapter --set image.tag=v0.5.0,rbac.create=true,prometheus.url=http://prometheus-k8s.monitoring.svc.cluster.local,prometheus.port=9090 stable/prometheus-adapter
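
If the adapter's default rules don't pick up the series you expect, a specific metric can be mapped explicitly. A minimal sketch of such a rule supplied through the chart's values, assuming the chart's rules.custom key and an illustrative http_requests_total series (not something from this issue; adjust the seriesQuery to a metric that actually exists in your Prometheus):

# values-custom-metrics.yaml (hypothetical file name)
rules:
  custom:
    # expose http_requests_total as a per-second rate named http_requests_per_second
    - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^(.*)_total$"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'

Passing this with -f values-custom-metrics.yaml on the install or a later helm upgrade should make the metric appear under /apis/custom.metrics.k8s.io/v1beta1.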

All 12 comments

Also getting the errors mentioned by @igoratencompass, however it seems the adapter is actually collecting and exposing the custom metrics, since I can see the custom metrics API return proper metrics when I query it with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/ [.....]
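
(A quick way to verify the same thing is to list everything the adapter currently exposes via the API discovery document; the jq filter below is just for readability and is my own assumption, not part of the chart:)

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[].name'

If that list is empty, the adapter is serving the API but its rules are not matching any series in Prometheus.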

Any clue on what might be the problem?

I have the same situation mentioned by @ivnilv. I am using K8S 1.16.2 and Helm 3.0.0

Yeah, the HorizontalPodAutoscaler is also able to scale based on the custom metrics. Not sure where the log spam comes from, though, or whether it indicates an actual problem or just needs to be silenced.
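
(For completeness, a minimal HPA sketch that consumes a custom pod metric through this API; the deployment and metric names are illustrative, not taken from this thread:)

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app                     # hypothetical target deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second    # must match a metric served by the adapter
      target:
        type: AverageValue
        averageValue: "100"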

Hi, I get the same error when following this tutorial:
https://medium.com/cloudzone/autoscaling-kubernetes-workloads-with-istio-metrics-92f86baabba9

I1221 12:32:05.795952       1 wrap.go:42] GET /healthz: (2.411788ms) 200 [[kube-probe/1.15] 100.97.56.1:18774]
I1221 12:32:07.038975       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (3.720814ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/system:serviceaccount:kube-system:resourcequota-controller] 172.20.32.111:59326]
I1221 12:32:08.686203       1 wrap.go:42] GET /healthz: (98.74µs) 200 [[kube-probe/1.15] 100.97.56.1:18776]
I1221 12:32:15.793700       1 wrap.go:42] GET /healthz: (105.158µs) 200 [[kube-probe/1.15] 100.97.56.1:18792]
I1221 12:32:17.843441       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (3.520532ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
E1221 12:32:17.843802       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1221 12:32:17.843821       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1221 12:32:17.843850       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
I1221 12:32:17.844993       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (5.105521ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
E1221 12:32:17.845914       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
I1221 12:32:17.845962       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (5.874927ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
E1221 12:32:17.846000       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1221 12:32:17.847100       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (7.291752ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
E1221 12:32:17.848157       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1221 12:32:17.849273       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (9.374677ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
I1221 12:32:18.688467       1 wrap.go:42] GET /healthz: (2.382731ms) 200 [[kube-probe/1.15] 100.97.56.1:18794]
I1221 12:32:25.793586       1 wrap.go:42] GET /healthz: (98.499µs) 200 [[kube-probe/1.15] 100.97.56.1:18810]
I1221 12:32:28.686212       1 wrap.go:42] GET /healthz: (102.644µs) 200 [[kube-probe/1.15] 100.97.56.1:18814]
I1221 12:32:29.186659       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (4.019335ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/system:serviceaccount:kube-system:generic-garbage-collector] 172.20.32.111:59326]
I1221 12:32:31.914026       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (3.634452ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/controller-discovery] 172.20.32.111:59326]
I1221 12:32:35.795629       1 wrap.go:42] GET /healthz: (2.107537ms) 200 [[kube-probe/1.15] 100.97.56.1:18830]
I1221 12:32:37.648940       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (3.888425ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/system:serviceaccount:kube-system:resourcequota-controller] 172.20.32.111:59326]
I1221 12:32:38.686260       1 wrap.go:42] GET /healthz: (86.262µs) 200 [[kube-probe/1.15] 100.97.56.1:18832]
I1221 12:32:45.793698       1 wrap.go:42] GET /healthz: (100.624µs) 200 [[kube-probe/1.15] 100.97.56.1:18848]
I1221 12:32:47.845754       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (3.484248ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
I1221 12:32:47.846079       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (4.014033ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
E1221 12:32:47.847051       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E1221 12:32:47.847077       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1221 12:32:47.847674       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (5.240547ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
I1221 12:32:47.848122       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (6.175013ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
I1221 12:32:47.848161       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (6.417656ms) 200 [[Go-http-client/2.0] 172.20.32.111:59316]
I1221 12:32:48.688424       1 wrap.go:42] GET /healthz: (2.318026ms) 200 [[kube-probe/1.15] 100.97.56.1:18850]
I1221 12:32:55.793730       1 wrap.go:42] GET /healthz: (116.224µs) 200 [[kube-probe/1.15] 100.97.56.1:18866]
I1221 12:32:56.794202       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (3.83693ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/controller-discovery] 172.20.32.111:59326]
I1221 12:32:58.686248       1 wrap.go:42] GET /healthz: (96.063µs) 200 [[kube-probe/1.15] 100.97.56.1:18870]
I1221 12:32:59.838940       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (3.979875ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/system:serviceaccount:kube-system:generic-garbage-collector] 172.20.32.111:59326]
I1221 12:33:05.795787       1 wrap.go:42] GET /healthz: (2.254013ms) 200 [[kube-probe/1.15] 100.97.56.1:18886]
I1221 12:33:08.259278       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (4.126578ms) 200 [[kube-controller-manager/v1.15.6 (linux/amd64) kubernetes/7015f71/system:serviceaccount:kube-system:resourcequota-controller] 172.20.32.111:59326]
I1221 12:33:08.687881       1 wrap.go:42] GET /healthz: (112.353µs) 200 [[kube-probe/1.15] 100.97.56.1:18888]

I can see the apiservice object:

$ kubectl get apiservice v1beta1.custom.metrics.k8s.io -oyaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2019-12-21T12:02:18Z"
  labels:
    app: prometheus-adapter
    chart: prometheus-adapter-1.4.0
    heritage: Tiller
    release: custom-metrics
  name: v1beta1.custom.metrics.k8s.io
  resourceVersion: "1305481"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.custom.metrics.k8s.io
  uid: 6078023e-79c6-43d8-9536-b2073492335f
spec:
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: custom-metrics-prometheus-adapter
    namespace: istio-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2019-12-21T12:28:55Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available

I cannot see envoy_http_rq_total metrics:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/istio-system/pods/*/envoy_http_rq_total" | jq -r 'last(.items[])'
Error from server (NotFound): the server could not find the metric envoy_http_rq_total for pods
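
(For the NotFound above: the adapter only serves a metric if it appears in its discovery document, which in turn requires the underlying series to exist in Prometheus and to match one of the adapter's rules. Two checks worth running, assuming jq is available and using a placeholder Prometheus address:)

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[].name' | grep envoy
$ curl -sg 'http://<your-prometheus-host>:9090/api/v1/series?match[]=envoy_http_rq_total' | jq .

If neither returns anything, the metric is missing upstream of the adapter rather than being related to the http2 log messages.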

I think this log is related to etcd not being able to respond to the prometheus-adapter's requests. See https://github.com/kubernetes/kubernetes/issues/82633

I have a question: what is the value of the config param prometheus.url? Is it http://service-name?

@dogra-gopal - I have noticed that it is "http://..svc"

For me, http:// worked eventually.
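
(For comparison, the fully qualified in-cluster form from the install command at the top of this issue; the service name and namespace are the reporter's, so adjust them to wherever your Prometheus actually runs. A sketch, assuming the release already exists:)

helm upgrade prometheus-adapter stable/prometheus-adapter \
  --set prometheus.url=http://prometheus-k8s.monitoring.svc.cluster.local \
  --set prometheus.port=9090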

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

I got the same issue on k8s v1.17 with adapter v0.5.0; changing to http:// doesn't work. I deployed the adapter using kube-prometheus.

Any update on issues like this?

I0506 11:28:29.190304       1 adapter.go:93] successfully using in-cluster auth
I0506 11:28:29.624391       1 serving.go:273] Generated self-signed cert (/var/run/serving-cert/apiserver.crt, /var/run/serving-cert/apiserver.key)
I0506 11:28:30.131352       1 serve.go:96] Serving securely on [::]:6443
E0506 11:28:53.628657       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E0506 11:28:53.628687       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E0506 11:28:53.628699       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E0506 11:28:53.630557       1 writers.go:149] apiserver was unable to write a JSON response: http2: stream closed
E0506 11:28:53.630851       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E0506 11:28:53.632975       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}

I have the same issue and I cannot get custom metrics. Any solution for this issue?
