Charts: [stable/prometheus] Load alerts and rules from configmap

Created on 14 Nov 2018 · 10 comments · Source: helm/charts

Is this a request for help?:

Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request

Version of Helm and Kubernetes:

➜  helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:18:13Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.7-gke.11", GitCommit:"fa90543563c9cfafca69128ce8cd9ecd5941940f", GitTreeState:"clean", BuildDate:"2018-11-08T20:22:21Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
stable/prometheus

What happened:
How do I load alerts and rules from a ConfigMap rather than polluting the values.yaml file?
Please update the instructions.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

All 10 comments

@jaipradeesh perhaps try

I just recently solved this issue for my own use. Perhaps this will help you. Or, perhaps someone with more experience will be able to offer up something better.

First, I loaded all my alerts files into a directory and created a single configMap to contain all the various files with:

#!/bin/bash
# Build (or rebuild) a single ConfigMap from every file in the alerts/ directory.
# The "create --dry-run -o yaml | apply" pattern makes this idempotent, so re-running
# the script updates the ConfigMap as alert files are added or changed.

ALERT_NAME=prometheus-alerts
kubectl create configmap ${ALERT_NAME} --from-file alerts/ --namespace monitoring -o yaml --dry-run | kubectl apply -f -

This gives me a single shell script that will update the config map as new alerts are added to that directory.
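
For illustration, here is the kind of file that could sit in that alerts/ directory. This is only a minimal sketch in the standard Prometheus 2.x rules format; the file name, group, and alert are hypothetical:

# alerts/instance-down.yaml (hypothetical example file)
groups:
  - name: instance-down
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} has been down for more than 5 minutes"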

Now, we need to configure Prometheus to mount that configmap _and_ tell Prometheus where to look for alerts and rules files. I did that with the following values:

# mount alerts at `/etc/config/alerts`
server:
  extraConfigmapMounts:
    - name: prometheus-alerts
      mountPath: /etc/config/alerts
      configMap: prometheus-alerts
      readOnly: true

# configure Prometheus to look at `/etc/config/alerts` for alert files
serverFiles:
  prometheus.yml:
    rule_files:
      - /etc/config/rules/*.yaml
      - /etc/config/alerts/*.yaml

Note: I'm using a glob-type *.yaml pattern for my alerts files. If you're not using .yaml files, update the globs accordingly.
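
For reference, `kubectl create configmap --from-file alerts/` turns each file in the directory into a data key, and the mount above exposes each key as a file under /etc/config/alerts, which is what the *.yaml glob matches. A sketch of the resulting ConfigMap, reusing the hypothetical file from earlier:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alerts
  namespace: monitoring
data:
  instance-down.yaml: |
    groups:
      - name: instance-down
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 5m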

Lastly, for bonus points, we can configure the configmapReload container that sits alongside the Prometheus container in the server pod to also mount that ConfigMap and trigger a Prometheus reload on changes. This gives us hot-reloading of alerts without having to redeploy the server every time we update one. That can be done with this:

configmapReload:
  extraConfigmapMounts:
    - name: prometheus-alerts
      mountPath: /etc/alerts
      configMap: prometheus-alerts
      readOnly: true

  extraVolumeDirs:
    - /etc/alerts
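
Under the hood, each entry in extraVolumeDirs adds another --volume-dir flag to the configmap-reload sidecar, so a change in the mounted alerts ConfigMap hits the same reload webhook as a change to the main config. Roughly what that container ends up looking like (a sketch; the container name, image tag, and default arguments are assumptions, not copied from the chart source):

- name: prometheus-server-configmap-reload
  image: jimmidyson/configmap-reload:v0.2.2
  args:
    - --volume-dir=/etc/config
    - --volume-dir=/etc/alerts
    - --webhook-url=http://127.0.0.1:9090/-/reload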

Test it by creating an alert, running the bash script to load it into the ConfigMap, and port-forwarding port 9090 from the prometheus-server pod to your localhost. The alert should show up on the Alerts tab of the Prometheus UI. If you change or add an alert, update the ConfigMap with the shell script and wait a few seconds; the alerts should update in Prometheus as well.

Hope this helps!

Thanks @danieldides

What happens when you have multiple apps, each with their own set of alerts (and each with their own Helm chart)? If we have a single ConfigMap, as in @danieldides's example, then each app would need to update (i.e. patch) it, but Helm doesn't seem to support updating pre-existing resources (in our case the prometheus-alerts ConfigMap).
It would be handy if each app could create its own ConfigMap (e.g. prometheus-alerts-app1 and prometheus-alerts-app2) and have them automagically mounted in Prometheus,
without having to manually update the extraConfigmapMounts section and redeploy the prometheus Helm chart.

Do you guys have any suggestions in this area?

In this video https://www.linkedin.com/learning/kubernetes-monitoring-with-prometheus/creating-alerts-in-prometheus

The speaker talks about auto-discovery of alert-rule ConfigMaps. I tried it but the alert does not appear, and I could not find any documentation about the special label "role: prometheus-role".

As they mention in the video, they use prometheus-operator, so this is not possible with plain Prometheus as installed by the stable/prometheus chart.
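
For context, prometheus-operator does rule auto-discovery through PrometheusRule custom resources that the operator selects by label (its ruleSelector); this mechanism is not part of the stable/prometheus chart. A minimal sketch, with the label and names assumed rather than taken from any particular setup:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert-rules
  namespace: monitoring
  labels:
    role: alert-rules
spec:
  groups:
    - name: example
      rules:
        - alert: InstanceDown
          expr: up == 0
          for: 5m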

@danieldides does it work for you on the latest stable version? I get an error in the prometheus.yml file.

incredibly informative ^^

@danieldides I took inspiration from what you did and went one step further: I added a sidecar container to watch for ConfigMaps. Here is my configuration (an example of a ConfigMap it picks up follows the snippet):

server:
  extraVolumes:
    - name: prometheus-alerts
      emptyDir: {}
  extraVolumeMounts:
    - mountPath: "/etc/config/alerts"
      name: "prometheus-alerts"
  # Sidecar that watches ConfigMaps labelled alertmanager_datasource in all namespaces,
  # writes their contents into /etc/config/alerts, and POSTs to the Prometheus reload endpoint
  sidecarContainers:
    - name: alert-datasources
      image: kiwigrid/k8s-sidecar:0.1.99
      imagePullPolicy: IfNotPresent
      env:
        - name: METHOD
          value: WATCH
        - name: LABEL
          value: alertmanager_datasource
        - name: NAMESPACE
          value: ALL
        - name: FOLDER
          value: "/etc/config/alerts"
        - name: REQ_URL
          value: http://localhost:9090/-/reload
        - name: REQ_METHOD
          value: POST
        - name: RESOURCE
          value: "configmap"
      volumeMounts:
        - name: prometheus-alerts
          mountPath: /etc/config/alerts
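
As an illustration of what that sidecar picks up: any ConfigMap carrying the label named in the LABEL env var gets its data keys written into /etc/config/alerts, after which the sidecar POSTs to the reload URL. A sketch (the ConfigMap name, label value, and rule are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app1-alerts
  labels:
    alertmanager_datasource: "true"
data:
  app1-alerts.yaml: |
    groups:
      - name: app1
        rules:
          - alert: App1Down
            expr: up{job="app1"} == 0
            for: 5m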


Very good idea. Can you provide some detailed configuration? Thank you very much!
