Charts: [stable/prometheus-operator] No serviceMonitors are picked up from different namespaces

Created on 23 Apr 2019 · 6 comments · Source: helm/charts

Is this a request for help?:

Yes

Version of Helm and Kubernetes:

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Which chart:
[stable/prometheus-operator]

What happened:

Hi
I've been struggling to get metrics from another namespace with the prometheus-operator Helm chart on minikube.

What did you do?

$ minikube start  \
--memory=4096 \
--bootstrapper=kubeadm \
--extra-config=scheduler.address=0.0.0.0 \
--extra-config=controller-manager.address=0.0.0.0

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-role-binding --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

$ helm install stable/prometheus-operator --name=monitoring --namespace=monitoring --values=values.yaml

I changed values.yaml to give Prometheus the relevant RBAC permissions in other namespaces by setting the prometheus.rbac.roleNamespaces parameter.
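
For reference, the values.yaml change looked roughly like this (a sketch; customer1 is the extra namespace, and the default entries may differ by chart version):

prometheus:
  rbac:
    roleNamespaces:
      - kube-system
      - customer1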

I installed the mongodb chart in another namespace.

$ helm install stable/mongodb --set metrics.enabled=true --set metrics.serviceMonitor.enabled=true --namespace=customer1 -n customer1

However, Prometheus does not scrape metrics from the other namespace, so nothing shows up in service discovery in the Prometheus UI. I checked the generated Prometheus configuration, but there is no customer1-related scrape config.

$ kubectl get servicemonitor -o yaml -n customer1

apiVersion: v1
items:
- apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    creationTimestamp: "2019-04-23T07:22:01Z"
    generation: 3
    labels:
      app: mongodb
      chart: mongodb-5.16.1
      heritage: Tiller
      release: customer1
    name: customer1-mongodb
    namespace: customer1
    resourceVersion: "7929"
    selfLink: /apis/monitoring.coreos.com/v1/namespaces/customer1/servicemonitors/customer1-mongodb
    uid: 7cd14fc4-6598-11e9-8bf9-0800270c636b
  spec:
    endpoints:
    - interval: 30s
      port: metrics
    jobLabel: customer1-mongodb
    namespaceSelector:
      matchNames:
      - customer1
    selector:
      matchLabels:
        app: mongodb
        chart: mongodb-5.16.1
        heritage: Tiller
        release: customer1-mongodb
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

$ kubectl get prometheus -o yaml -n monitoring

apiVersion: v1
items:
- apiVersion: monitoring.coreos.com/v1
  kind: Prometheus
  metadata:
    creationTimestamp: "2019-04-23T07:13:39Z"
    generation: 3
    labels:
      app: prometheus-operator-prometheus
      chart: prometheus-operator-5.0.12
      heritage: Tiller
      release: monitoring
    name: monitoring-prometheus-oper-prometheus
    namespace: monitoring
    resourceVersion: "1752"
    selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/monitoring-prometheus-oper-prometheus
    uid: 51a242c0-6597-11e9-8bf9-0800270c636b
  spec:
    alerting:
      alertmanagers:
      - name: monitoring-prometheus-oper-alertmanager
        namespace: monitoring
        pathPrefix: /
        port: web
    baseImage: quay.io/prometheus/prometheus
    externalUrl: http://monitoring-prometheus-oper-prometheus.monitoring:9090
    listenLocal: false
    logLevel: info
    paused: false
    replicas: 1
    retention: 10d
    routePrefix: /
    ruleNamespaceSelector: {}
    ruleSelector:
      matchLabels:
        app: prometheus-operator
        release: monitoring
    securityContext:
      fsGroup: 2000
      runAsNonRoot: true
      runAsUser: 1000
    serviceAccountName: monitoring-prometheus-oper-prometheus
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelector:
      matchLabels:
        release: monitoring
    version: v2.7.1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

All 6 comments

@occelebi Typically, ServiceMonitors run in the same namespace as prometheus-operator

Thanks for the quick reply @batazor
In kube-prometheus this works without putting the ServiceMonitors of services into the same namespace as the Prometheus Operator. If that is required here, shouldn't other Helm charts (e.g. mongodb) be configurable to put their ServiceMonitor into a different namespace? I could not find any parameter that separates the ServiceMonitor from the namespace where the mongodb pods get installed.

Looks like the current serviceMonitorSelector is generated because prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues is not overridden to false: https://github.com/helm/charts/blob/master/stable/prometheus-operator/templates/prometheus/prometheus.yaml#L71-L77. So it should be enough to set this one without modifying the first…
stable/prometheus-operator/templates/prometheus/prometheus.yaml:71

{{ else if .Values.prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues }}
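
For context, the template logic around that line boils down to roughly the following (paraphrased, not the exact chart source):

{{- if .Values.prometheus.prometheusSpec.serviceMonitorSelector }}
  serviceMonitorSelector:
{{ toYaml .Values.prometheus.prometheusSpec.serviceMonitorSelector | indent 4 }}
{{ else if .Values.prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues }}
  serviceMonitorSelector:
    matchLabels:
      release: {{ .Release.Name | quote }}
{{ else }}
  serviceMonitorSelector: {}
{{- end }}

So with the value left at its default of true and no explicit serviceMonitorSelector, the chart falls back to matchLabels: release: <release-name>, which only matches ServiceMonitors created by the monitoring release itself.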

Instead, I changed serviceMonitorSelectorNilUsesHelmValues to false, and the Prometheus resource got updated and picked up ServiceMonitors from other namespaces.
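
In values.yaml that is just the following (a minimal sketch; prometheus.prometheusSpec.serviceMonitorSelector stays unset so it renders as {}):

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false

With both serviceMonitorSelector and serviceMonitorNamespaceSelector empty, Prometheus selects ServiceMonitors from every namespace it has RBAC access to; an alternative would be to keep the default and label the customer1 ServiceMonitor with release: monitoring so it matches the generated selector. The relevant part of the Prometheus spec now shows: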

  serviceAccountName: monitoring-prometheus-oper-prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.7.1

Thanks!

@occelebi thank you so much!

I spent hours trying to figure this out!

@occelebi Thanks a lot! I solved my problem too.
