Prometheus-operator: Having a hard time with additional scrape configuration

Created on 20 Oct 2020 · 3 comments · Source: prometheus-operator/prometheus-operator

What happened?
I am trying to get some static target configurations applied to Prometheus. I have followed what is suggested in
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md and
https://github.com/prometheus-operator/prometheus-operator/issues/2840, as well as many other related issues.

I am not able to figure out what I am missing or why the config is not being applied.

Did you expect to see something different?
I expected the prometheus-config-reloader container to detect the config changes and Prometheus to hot-reload them.

How to reproduce it (as minimally and precisely as possible):

  1. Installed prometheus-operator into the cluster using the Helm chart provided by kube-prometheus-stack. No changes were made to it.
  2. Tried adding the static config below (see the Manifests section).

Environment
Ubuntu 18

  • Prometheus Operator version:
    v0.42.1
amuralid@charm:~/dev_test/prom_reload$ kubectl describe deployment amuralid-master-kube-prome-operator
Name:                   amuralid-master-kube-prome-operator
Namespace:              default
CreationTimestamp:      Tue, 20 Oct 2020 15:37:13 +0000
Labels:                 app=kube-prometheus-stack-operator
                        app.kubernetes.io/managed-by=Helm
                        chart=kube-prometheus-stack-10.1.0
                        heritage=Helm
                        release=amuralid-master
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: amuralid-master
                        meta.helm.sh/release-namespace: default
Selector:               app=kube-prometheus-stack-operator,release=amuralid-master
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=kube-prometheus-stack-operator
                    chart=kube-prometheus-stack-10.1.0
                    heritage=Helm
                    release=amuralid-master
  Service Account:  amuralid-master-kube-prome-operator
  Containers:
   kube-prometheus-stack:
    Image:      quay.io/prometheus-operator/prometheus-operator:v0.42.1
    Port:       8080/TCP
    Host Port:  0/TCP
    Args:
      --kubelet-service=kube-system/amuralid-master-kube-prome-kubelet
      --logtostderr=true
      --localhost=127.0.0.1
      --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.42.1
      --config-reloader-image=docker.io/jimmidyson/configmap-reload:v0.4.0
      --config-reloader-cpu=100m
      --config-reloader-memory=25Mi
    Environment:  <none>
    Mounts:       <none>
   tls-proxy:
    Image:      squareup/ghostunnel:v1.5.2
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      server
      --listen=:8443
      --target=127.0.0.1:8080
      --key=cert/key
      --cert=cert/cert
      --disable-authentication
    Environment:  <none>
    Mounts:
      /cert from tls-proxy-secret (ro)
  Volumes:
   tls-proxy-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  amuralid-master-kube-prome-admission
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   amuralid-master-kube-prome-operator-9768f6f8d (1/1 replicas created)
Events:          <none>
  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:39:24Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:

kubeadm

  • Manifests:
    The Secret and Prometheus objects I am applying:
apiVersion: v1
kind: Secret
metadata:
  name: prom-additional-scrape-configs
  namespace: default
stringData:
  prometheus-additional.yaml: |
    - job_name: "test-target"
      static_configs:
      - targets: ["localhost:9090"]
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  namespace: default
  name: prometheus
spec:
  replicas: 1
  serviceAccountName: amuralid-master-kube-prome-operator
  securityContext:
    runAsUser: 65534
    runAsNonRoot: true
    runAsGroup: 65534
    fsGroup: 65534
  additionalScrapeConfigs:
    key: "prometheus-additional.yaml"
    name: "prom-additional-scrape-configs"
  serviceMonitorSelector: {}

Installed secrets:

kubectl get secrets
NAME                                                          TYPE                                  DATA   AGE
alertmanager-amuralid-master-kube-prome-alertmanager          Opaque                                1      78m
amuralid-master-grafana                                       Opaque                                3      78m
amuralid-master-grafana-test-token-xtl7v                      kubernetes.io/service-account-token   3      78m
amuralid-master-grafana-token-5mzrn                           kubernetes.io/service-account-token   3      78m
amuralid-master-kube-prome-admission                          Opaque                                3      78m
amuralid-master-kube-prome-alertmanager-token-m57mp           kubernetes.io/service-account-token   3      78m
amuralid-master-kube-prome-operator-token-gv4fr               kubernetes.io/service-account-token   3      78m
amuralid-master-kube-prome-prometheus-token-7cljj             kubernetes.io/service-account-token   3      78m
amuralid-master-kube-state-metrics-token-ggh6n                kubernetes.io/service-account-token   3      78m
amuralid-master-prometheus-node-exporter-token-mlqdp          kubernetes.io/service-account-token   3      78m
default-token-85f7f                                           kubernetes.io/service-account-token   3      103d
prom-additional-scrape-configs                                Opaque                                1      69m
prometheus-amuralid-master-kube-prome-prometheus              Opaque                                1      78m
prometheus-amuralid-master-kube-prome-prometheus-tls-assets   Opaque                                0      78m
prometheus-prometheus                                         Opaque                                1      69m
prometheus-prometheus-tls-assets                              Opaque                                0      69m
sh.helm.release.v1.amuralid-master.v1                         helm.sh/release.v1                    1      78m
sh.helm.release.v1.lpnravisha3.v1                             helm.sh/release.v1                    1      15d
amuralid@charm:~/dev_test/prom_reload$ kubectl get secret prom-additional-scrape-configs -o yaml
apiVersion: v1
data:
  prometheus-additional.yaml: LSBqb2JfbmFtZTogInRlc3QtdGFyZ2V0IgogIHN0YXRpY19jb25maWdzOgogIC0gdGFyZ2V0czogWyJsb2NhbGhvc3Q6OTA5MCJdCg==
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"prom-additional-scrape-configs","namespace":"default"},"stringData":{"prometheus-additional.yaml":"- job_name: \"test-target\"\n  static_configs:\n  - targets: [\"localhost:9090\"]\n"}}
  creationTimestamp: "2020-10-20T15:46:01Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:prometheus-additional.yaml: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-10-20T15:46:01Z"
  name: prom-additional-scrape-configs
  namespace: default
  resourceVersion: "18992983"
  selfLink: /api/v1/namespaces/default/secrets/prom-additional-scrape-configs
  uid: ed65862c-5301-4d22-bb1c-f01387ab17f3
type: Opaque
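
To rule out encoding problems, the secret data can be decoded directly; a quick sanity check using plain kubectl and base64 (nothing operator-specific assumed):

# Decode the additional scrape config stored in the secret
kubectl get secret prom-additional-scrape-configs \
  -o jsonpath='{.data.prometheus-additional\.yaml}' | base64 -d

This should print exactly the YAML from stringData above.
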
  • Prometheus Operator Logs:

I see these logs when I apply the config.yaml:

level=info ts=2020-10-20T15:57:24.917992528Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T15:57:24.945624949Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=warn ts=2020-10-20T16:02:24.917954749Z caller=operator.go:482 component=alertmanageroperator msg="alertmanager key=default/amuralid-master-kube-prome-alertmanager, field spec.baseImage is deprecated, 'spec.image' field should be used instead"
level=info ts=2020-10-20T16:02:24.91812631Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T16:02:24.940617234Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=warn ts=2020-10-20T16:07:24.918076907Z caller=operator.go:482 component=alertmanageroperator msg="alertmanager key=default/amuralid-master-kube-prome-alertmanager, field spec.baseImage is deprecated, 'spec.image' field should be used instead"
level=info ts=2020-10-20T16:07:24.918218971Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T16:07:24.943370753Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=warn ts=2020-10-20T16:12:24.918204981Z caller=operator.go:482 component=alertmanageroperator msg="alertmanager key=default/amuralid-master-kube-prome-alertmanager, field spec.baseImage is deprecated, 'spec.image' field should be used instead"
level=info ts=2020-10-20T16:12:24.918359118Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T16:12:24.941112238Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=warn ts=2020-10-20T16:17:24.91846587Z caller=operator.go:482 component=alertmanageroperator msg="alertmanager key=default/amuralid-master-kube-prome-alertmanager, field spec.baseImage is deprecated, 'spec.image' field should be used instead"
level=info ts=2020-10-20T16:17:24.918799062Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T16:17:24.93783407Z caller=operator.go:424 component=alertmanageroperator msg="sync alertmanager" key=default/amuralid-master-kube-prome-alertmanager
level=info ts=2020-10-20T16:20:32.305827038Z caller=operator.go:1228 component=prometheusoperator msg="sync prometheus" key=default/prometheus
level=info ts=2020-10-20T16:20:32.431686712Z caller=operator.go:1228 component=prometheusoperator msg="sync prometheus" key=default/prometheus
level=info ts=2020-10-20T16:20:32.635079207Z caller=operator.go:1228 component=prometheusoperator msg="sync prometheus" key=default/prometheus
level=info ts=2020-10-20T16:20:32.776421694Z caller=operator.go:1228 component=prometheusoperator msg="sync prometheus" key=default/prometheus
level=info ts=2020-10-20T16:20:34.027583875Z caller=operator.go:1228 component=prometheusoperator msg="sync prometheus" key=default/prometheus

There are no logs in the Prometheus or prometheus-config-reloader containers. No error or anything else is reported, which makes this hard to debug.
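
A few checks can narrow down where this breaks (a sketch; the reloader container name and the gzipped prometheus.yaml.gz key vary between operator versions, so adjust as needed):

# Logs of the config-reloader sidecar in the operator-managed pod
kubectl logs prometheus-prometheus-0 -c prometheus-config-reloader

# Whether the operator merged the additional job into the generated config
# (stored gzipped in the config secret on recent operator versions)
kubectl get secret prometheus-prometheus -o jsonpath='{.data.prometheus\.yaml\.gz}' \
  | base64 -d | gunzip | grep -A 3 test-target

If test-target shows up in the generated config, the problem is on the reload side; if not, the operator never picked up the additionalScrapeConfigs reference.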

Service accounts:

kubectl get sa
NAME                                       SECRETS   AGE
amuralid-master-grafana                    1         81m
amuralid-master-grafana-test               1         81m
amuralid-master-kube-prome-alertmanager    1         81m
amuralid-master-kube-prome-operator        1         81m
amuralid-master-kube-prome-prometheus      1         81m
amuralid-master-kube-state-metrics         1         81m
amuralid-master-prometheus-node-exporter   1         81m
default                                    1         103d
amuralid@charm:~/dev_test/prom_reload$ kubectl get Prometheus
NAME                                    VERSION   REPLICAS   AGE
amuralid-master-kube-prome-prometheus   v2.21.0   1          82m
prometheus                                        1          73m
amuralid@charm:~/dev_test/prom_reload$ kubectl get Prometheus prometheus -o yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"name":"prometheus","namespace":"default"},"spec":{"additionalScrapeConfigs":{"key":"prometheus-additional.yaml","name":"prom-additional-scrape-configs"},"replicas":1,"securityContext":{"fsGroup":65534,"runAsGroup":65534,"runAsNonRoot":true,"runAsUser":65534},"serviceAccountName":"amuralid-master-kube-prome-operator","serviceMonitorSelector":{}}}
  creationTimestamp: "2020-10-20T15:46:01Z"
  generation: 2
  managedFields:
  - apiVersion: monitoring.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:additionalScrapeConfigs:
          .: {}
          f:key: {}
          f:name: {}
        f:replicas: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
          f:runAsGroup: {}
          f:runAsNonRoot: {}
          f:runAsUser: {}
        f:serviceAccountName: {}
        f:serviceMonitorSelector: {}
    manager: kubectl
    operation: Update
    time: "2020-10-20T16:20:32Z"
  name: prometheus
  namespace: default
  resourceVersion: "18997563"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus
  uid: 1441ef57-c749-4c6d-b23f-62c08ed81cfc
spec:
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: prom-additional-scrape-configs
  replicas: 1
  securityContext:
    fsGroup: 65534
    runAsGroup: 65534
    runAsNonRoot: true
    runAsUser: 65534
  serviceAccountName: amuralid-master-kube-prome-operator
  serviceMonitorSelector: {}
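
One more thing worth checking is whether the operator created a StatefulSet for this Prometheus object at all (a sketch; the operator names it prometheus-<name>, so prometheus-prometheus here):

kubectl get statefulset prometheus-prometheus
kubectl describe statefulset prometheus-prometheus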

Anything else we need to know?:
I took the latest version of prometheus-operator as-is.
Everything is installed in the default namespace.
I tried setting the service account of both Prometheus and the Prometheus Operator on the Prometheus object.
There are no logs in the prometheus-config-reloader container.

kind/support

All 3 comments

This does not work for me either. Both the additionalScrapeConfigsSecret and additionalScrapeConfigs values are completely ignored.

Someone answered my Stack Overflow question, and I was able to accomplish this by following the instructions provided in that answer: https://stackoverflow.com/questions/64452966/add-custom-scrape-endpoints-in-helm-chart-kube-prometheus-stack-deployment/64507135#64507135
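
An alternative to referencing a pre-created secret is the chart's inline prometheusSpec.additionalScrapeConfigs list, which the chart renders into a secret for you. A minimal sketch in values form, mirroring this issue's example (this may or may not be the exact approach from the linked answer):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: "test-target"
      static_configs:
      - targets: ["localhost:9090"]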

Hi @wesleywh,
I installed kube-prometheus-stack today and the additionalScrapeConfigsSecret setting seems to work and can scrape.

Chart Version

helm list -n prometheus
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                           APP VERSION
prometheus  prometheus  3           2020-11-11 15:49:00.576513131 +0800 CST deployed    kube-prometheus-stack-11.1.1    0.43.2     

Scrape config
prometheus-additional.yaml

- job_name: "DCS-logstash"
  scrape_interval: 60s
  static_configs:
  - targets: ["x.x.x.x:9114"]
    labels:
      env: 'production'
      app: 'logstash'

Create Secret

kubectl create secret generic additional-configs --from-file=prometheus-additional.yaml -n prometheus
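
If the scrape config changes later, the same secret can be regenerated in place with the usual dry-run/apply pattern (a sketch, not specific to this chart):

kubectl create secret generic additional-configs \
  --from-file=prometheus-additional.yaml -n prometheus \
  --dry-run=client -o yaml | kubectl apply -f -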

custom-values.yaml

...
...
...
## Deploy a Prometheus instance
##
prometheus:
  ## Settings affecting prometheusSpec
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
  ##
  prometheusSpec:
    ## How long to retain metrics
    ##
    retention: 365d

    # Additional Scrape Config Secret
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-configs
      key: prometheus-additional.yaml
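
These values then have to be applied to the existing release; a sketch, assuming the chart was installed from the prometheus-community repo under the release name shown in helm list above:

helm upgrade prometheus prometheus-community/kube-prometheus-stack \
  -n prometheus -f custom-values.yaml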

Result
kubectl get prometheus -n prometheus -o yaml

...
...
...
  spec:
    additionalScrapeConfigs:
      key: prometheus-additional.yaml
      name: additional-configs
...
...
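
To confirm the job is actually scraping, the active targets can be listed through the Prometheus API (a sketch; prometheus-operated is the governing service the operator creates, and jq is assumed to be available):

kubectl port-forward -n prometheus svc/prometheus-operated 9090 &
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[] | select(.labels.job == "DCS-logstash") | .health'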