Prometheus-operator: No metrics from kubelet

Created on 26 Apr 2018 · 3 comments · Source: prometheus-operator/prometheus-operator

What did you do?
Installed prometheus-operator and kube-prometheus using helm

What did you expect to see?
Pods resource usage metrics in grafana

What did you see instead? Under which circumstances?
The K8SKubeletDown alert fires constantly and there are no resource usage metrics for pods (only requests and limits).

I can confirm the issue reproduces consistently: we deploy a clean kubespray cluster, then the two Helm charts, and end up in this state every time.

Environment

  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:50:45Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:
    kubespray deployment on top of Ubuntu 16.04 in Azure

  • Manifests:
    kubectl get servicemonitor kube-prometheus-exporter-kubelets --namespace monitoring --output yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  clusterName: ""
  creationTimestamp: 2018-04-23T14:25:55Z
  labels:
    chart: exporter-kubelets-0.2.8
    component: kubelets
    heritage: Tiller
    prometheus: kube-prometheus
    release: kube-prometheus
  name: kube-prometheus-exporter-kubelets
  namespace: monitoring
  resourceVersion: "6115"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kube-prometheus-exporter-kubelets
  uid: 3ba93267-4702-11e8-b5bd-000d3a22c412
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    port: https-metrics
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    interval: 30s
    path: /metrics/cadvisor
    port: https-metrics
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  jobLabel: component
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kubelet

kubectl get svc kubelet -n kube-system --output yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-04-23T14:18:07Z
  labels:
    k8s-app: kubelet
  name: kubelet
  namespace: kube-system
  resourceVersion: "5279"
  selfLink: /api/v1/namespaces/kube-system/services/kubelet
  uid: 24ea84ca-4701-11e8-b5bd-000d3a22c412
spec:
  clusterIP: None
  ports:
  - name: https-metrics
    port: 10250
    protocol: TCP
    targetPort: 10250
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
  • Prometheus Operator Logs:
    No kubelet-related lines found when grepping the operator logs
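
As a quick sanity check (standard kubectl commands, not part of the original report), the headless kubelet Service shown above should resolve to one endpoint per node; if it does not, Prometheus has nothing to scrape regardless of authentication:

kubectl get endpoints kubelet -n kube-system
kubectl get svc -n kube-system -l k8s-app=kubelet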

All 3 comments

Could you check whether the kubelet target is present in Prometheus?

Yes, I can see kubelet (0/8); each of those targets reports "server returned HTTP status 401 Unauthorized".
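
The 401 can be reproduced outside Prometheus. A minimal sketch, assuming a pod that has curl available and mounts the same service account token path as the ServiceMonitor above; <node-ip> is a placeholder for any node address:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer $TOKEN" https://<node-ip>:10250/metrics/cadvisor
# Prints 401 while the kubelet rejects bearer tokens; 200 once webhook auth is enabled.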

I found the problem; it is related to webhook auth.

It was resolved by adding this to the kubespray inventory:

kube_read_only_port: 10255
kubelet_authentication_token_webhook: true
kubelet_authorization_mode_webhook: true
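
For reference, on clusters not managed by kubespray the same fix corresponds (roughly; field names are the upstream KubeletConfiguration ones, so verify against your kubelet version) to enabling webhook token authentication and webhook authorization on the kubelet:

# KubeletConfiguration fragment (sketch); readOnlyPort is only needed if
# something still scrapes the plain-HTTP port 10255.
authentication:
  webhook:
    enabled: true
authorization:
  mode: Webhook
readOnlyPort: 10255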