Prometheus-operator: [Exporter] How to use the blackbox_exporter with prometheus-operator?

Created on 1 Apr 2017 · 81 comments · Source: prometheus-operator/prometheus-operator

The blackbox_exporter requires targets and params to be set for the exporter to report data.
See the example configuration for prometheus: https://github.com/prometheus/blackbox_exporter#prometheus-configuration

Is there a way to achieve a config like the above with prometheus-operator and how?
If it is achievable, it would be nice to add it to the user-guides, as I'm definitely not the only one interested in running the blackbox_exporter.
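
For reference, the linked upstream example boils down to a scrape config like this (target URL and exporter address are illustrative):

    scrape_configs:
      - job_name: 'blackbox'
        metrics_path: /probe
        params:
          module: [http_2xx]        # module defined in blackbox.yml
        static_configs:
          - targets:
            - https://example.com   # endpoint to probe
        relabel_configs:
          - source_labels: [__address__]
            target_label: __param_target   # pass the target as ?target=
          - source_labels: [__param_target]
            target_label: instance         # keep the probed URL as the instance label
          - target_label: __address__
            replacement: 127.0.0.1:9115    # actually scrape the blackbox exporter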

All 81 comments

We ultimately want to have blackbox probing fully-integrated. Right now you'd have to achieve this by using a custom Prometheus configuration with your operator. You can do so by omitting the serviceMonitorSelector.

For a deeper integration, we have to understand the different use cases better. Can you elaborate a bit on which things you want to blackbox probe?

Thanks for the hints at using a custom configuration for this!

I was thinking about a) cluster internal and b) external blackbox probing (tcp, icmp, http, https).

Also interested in this, and we're currently using blackbox exporter to monitor availability for external services to give us a bit of a clearer picture of what a user is seeing until the rest of our monitoring shapes up.

Current workaround is having a second Prometheus instance with a custom configuration that sends to our alertmanager.

Me too...

I'm trying to figure out how to monitor things external to the k8s cluster.

E.g. WMI exporter and Node Exporter for nodes in our regular VM environment, and snmp and blackbox exporter for ping/snmp monitoring of other devices on our network.

@MattMencel anything not running as part of the Kubernetes cluster or in it is a use case for a custom Prometheus configuration. You can specify your own configuration by skipping the serviceMonitorSelector. I've been meaning to write some documentation on custom configurations, I'm hoping to do this somewhat soon.

Yeah that's where I got stuck. I'm still pretty new to prometheus and k8s.

For now I've got an external prometheus VM where I'm doing some of this monitoring. Eventually I'm wanting to move it all into the k8s cluster.

Thanks!

I'm starting to monitor external services with blackbox and operator so I got onto this issue.

My proposal for this issue, and for other exporters that will not be managed by the operator, is to accept an appended job configuration. The jobs selected by appendJobSelector would be attached to the end of the scrape configuration generated by the operator. Below are the Prometheus spec and the job ConfigMap.

Prometheus

  apiVersion: monitoring.coreos.com/v1alpha1
  kind: Prometheus
  metadata:
    labels:
      prometheus: k8s
    name: k8s
    namespace: monitoring
  spec:
    alerting:
      alertmanagers:
      - name: alertmanager-main
        namespace: monitoring
        port: web
    appendJobSelector:
      matchLabels:
        name: prometheus-custom-jobs
        prometheus: k8s
    replicas: 1
  ...

Configmap

kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-custom-jobs
  labels:
    name: prometheus-custom-jobs
    prometheus: k8s
data:
  prometheus-jobs.yaml: |   # key name is illustrative
    - job_name: 'blackbox'
      metrics_path: /probe
      params:
        module: [http_2xx]  # Look for an HTTP 200 response.
      static_configs:
        - targets:
          - http://prometheus.io    # Target to probe with http.
          - https://prometheus.io   # Target to probe with https.
          - http://example.com:8080 # Target to probe with http on port 8080.
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: 127.0.0.1:9115  # Blackbox exporter

@gianrubio +1 for the appendJob idea; that's exactly what I'm looking for.

I like the simplicity of ServiceMonitors and would like to use them in most cases, but there are a few jobs in my existing (non-Operator) Prometheus I'd like to carry across, such as monitoring the external etcd used by Kube masters, which can't be expressed as a ServiceMonitor.

@grrywlsn actually in that case what you want to do is create an Endpoints object yourself. As this is a common theme for etcd, we've prepared an example for it: https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/etcd/etcd-bootkube-gce.yaml
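
For illustration, a hand-maintained Service/Endpoints pair along those lines (names and addresses are placeholders) looks like:

    apiVersion: v1
    kind: Service
    metadata:
      name: etcd
      namespace: kube-system
      labels:
        k8s-app: etcd
    spec:
      clusterIP: None        # headless; endpoints are maintained by hand
      ports:
      - name: metrics
        port: 2379
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: etcd             # must match the Service name
      namespace: kube-system
      labels:
        k8s-app: etcd
    subsets:
    - addresses:
      - ip: 10.0.0.10        # external etcd member (placeholder)
      ports:
      - name: metrics
        port: 2379

A ServiceMonitor selecting k8s-app: etcd can then scrape it like any in-cluster Service.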

This issue is about actually blackbox probing your applications. For example performing real HTTP requests on your application to see whether it is responding the way you expect.

@gianrubio the point of the Prometheus Operator is to provide sensible abstractions for Prometheus. Your suggestion starts a bit too low level. We want to build abstractions that are higher up, so people don't need to know the Prometheus/blackbox exporter configuration paradigms.

@brancz awesome, thanks for that example! it wasn't all that obvious from the docs that I could see.

@grrywlsn docs contributions are always welcome! :slightly_smiling_face: If anything is not as clear as it should be we're happy about PRs!

Good to know. I'm working on moving our Prometheus deployments to Prometheus Operator deployments at the moment, so would be happy to update when we've got something good.

@brancz so do you think the operator will be responsible for managing the blackbox-exporter the same way it takes care of Alertmanager?

What about a statefulset implementation of blackbox exporter?

The blackbox exporter doesn't have any state of its own, so a StatefulSet would be overkill, but some kind of abstraction would make sense. Currently I am thinking of mainly abstracting the way the blackbox exporter is configured, rather than how it is run. I believe the integration will look something like the Alertmanager integration in the Prometheus object, where we loosely reference a blackbox exporter running behind a specific Service, and then an abstracted form of blackbox probing configuration in a ServiceMonitor. This is all work-in-progress thinking; any input is appreciated.

@brancz I wrote a draft to manage blackbox exporter with prometheus operator as a tpr. Suggestions are welcomed

https://github.com/gianrubio/prometheus-operator/tree/blackbox-exporter/Documentation/proposals/blackbox-exporter#generated-prometheus-config

@gianrubio Nice draft, but what exactly do you mean with question 1, "the http protocol (http x http) on the service spec"? Should it mean http and https?

@galexrt it was a typo, fixed now!

@gianrubio How about changing the externalName to externalNames in the service and making it a list of ExternalName:

type ExternalName struct {
    name    string
    probe   string
    address string
}

For an http probe the address would contain http:// or https://, and for the others just the address without a protocol.

Edit: Just saw that the externalName is from Kubernetes service. How about making this into a ThirdPartyResource named ExternalService?

@gianrubio thanks a lot for your effort. I was two weeks on vacation, and am catching up on everything as quickly as I can. I have some comments regarding the proposal.

  • I don't think the BlackboxExporter resource is necessary. As far as I can tell everything described by it is perfectly possible with a normal Deployment and a ConfigMap. I don't think users actually win anything, as the same things will still have to be specified.

  • The critical part is the integration that makes the Operator generate the right config, and tell it where to find the BlackboxExporter (I would think this is a namespace/service combination similar to the Alertmanager configuration). The configuration source should probably be the ServiceMonitor.

@brancz were you able to add documentation on custom configuration as a workaround for this, or can you point me to one? I want to use blackbox-exporter with the operator to monitor our internal k8s services.

@punitag I haven't found the time, but the tl;dr is that you skip the serviceMonitorSelector and provide your own Secret named prometheus-<name-of-your-prometheus-object>, the two caveats are that you need to manually ensure the rule files and secrets that are mounted into the container if you need them. There have been issues around this in the past so if you can get around to writing this doc, that would be amazing! :slightly_smiling_face:

@brancz
I have found a good example of config for blackbox / kubernetes:
https://github.com/prometheus/blackbox_exporter/issues/113

But is it possible to add a custom job config for a specific ServiceMonitor?
This could help every new exporter inject its job definition.
Or add the possibility to append relabel_configs?

For the time being I would recommend to use a custom prometheus config to run the blackbox exporter. We'll eventually have to figure out a meaningful abstraction to use it, but we're not there yet.

Yes, but is it possible to use a custom Prometheus config with prometheus-operator?
This is a very bad blocker for my project.
Can I help you?

Do you have an example of using it?

I'm not sure I understand the question. The Prometheus object just must not have a serviceMonitorSelector field and you have to provide a Secret named prometheus-<name> that has a prometheus.yaml. The config is just like any other Prometheus configuration file.
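
For example, assuming a Prometheus object named k8s in the monitoring namespace (names here are placeholders), the Secret could be created with:

    kubectl -n monitoring create secret generic prometheus-k8s --from-file=prometheus.yaml

where prometheus.yaml is an ordinary Prometheus configuration file containing, e.g., the blackbox scrape jobs shown earlier.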

I struggled figuring out how to get this working too. I ended up with a Deployment (and Service) exposing blackbox_exporter and then configuring prometheus to scrape via blackbox_exporter's /probe endpoint.

I'd like to help out with some docs around this as it wasn't very clear.


blackbox_exporter Deployment

{
    "apiVersion": "apps/v1beta1",
    "kind": "Deployment",
    "metadata": {
        "name": "blackbox-exporter",
        "namespace": "legacy"
    },
    "spec": {
        "replicas": 1,
        "selector": {
            "matchLabels": {
                "app": "blackbox-exporter"
            }
        },
        "template": {
            "metadata": {
                "labels": {
                    "app": "blackbox-exporter"
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "registry/blackbox-exporter:v0.11.0",
                        "name": "blackbox-exporter",
                        "ports": [
                            {
                                "containerPort": 9115,
                                "name": "metrics"
                            }
                        ]
                    }
                ]
            }
        }
    }
}


blackbox_exporter Service

{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "labels": {
            "app": "blackbox-exporter"
        },
        "name": "blackbox-exporter",
        "namespace": "legacy"
    },
    "spec": {
        "ports": [
            {
                "name": "http-metrics",
                "port": 9115,
                "protocol": "TCP",
                "targetPort": "metrics"
            }
        ],
        "selector": {
            "app": "blackbox-exporter"
        }
    }
}

{
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "ServiceMonitor",
    "metadata": {
        "labels": {
            "k8s-app": "blackbox-exporter"
        },
        "name": "blackbox-exporter",
        "namespace": "tectonic-system"
    },
    "spec": {
        "endpoints": [
            {
                "interval": "60s",
                "port": "http-metrics"
            },
            {
                "interval": "60s",
                "params": {
                    "module": [
                        "http_2xx"
                    ],
                    "target": [
                        "10.10.1.120:9200"
                    ]
                },
                "path": "/probe",
                "targetPort": 9115
            }
        ],
        "namespaceSelector": {
            "matchNames": [
                "legacy"
            ]
        },
        "selector": {
            "app": "blackbox-exporter"
        }
    }
}

@adamdecaf that would be awesome! Feel free to open a PR to add a new doc. Maybe Documentation/user-guides/blackbox-probing.md?

That would be highly appreciated!

How can I generate the docs locally?

Does make docs answer your question?

@gianrubio Only in that they're generated, but where can I serve them from? I want to view the docs in a browser ideally.

@adamdecaf my suggestion is to commit and push the code to your fork so GitHub can serve it for you.

@adamdecaf I've successfully replicated your example, thanks for sharing!

Any ideas on how to preserve the module and target labels in the final metrics? Otherwise all information about what is actually probed seems to be lost in the final recorded metric series.

I tried to configure metricRelabelings in the endpoints entry of the ServiceMonitor, but it does not seem to do anything:

    metricRelabelings:
    - sourceLabels:
      - __param_target
      targetLabel: target
      action: replace
    - sourceLabels:
      - __param_module
      targetLabel: module
      action: replace
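
In case it helps later readers: newer operator versions also expose an endpoint-level relabelings field (distinct from metricRelabelings), which is applied at target-relabeling time, before the __-prefixed labels are dropped. A sketch, assuming a version that has this field:

    relabelings:
    - sourceLabels: [__param_target]
      targetLabel: instance
    - sourceLabels: [__param_module]
      targetLabel: module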

@adamdecaf does your setup require that the service IP be hardcoded into your ServiceMonitor?

@bgagnon

I tried to configure metricRelabelings in the endpoints entry of the ServiceMonitor, but it does not seem to do anything:

Same. I've been trying to figure out a way to upgrade prometheus-operator in my tectonic cluster.

@ptagr No. I've been able to add a DNS name as a target.

{
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "ServiceMonitor",
    "metadata": {
        "labels": {
            "k8s-app": "blackbox-exporter"
        },
        "name": "blackbox-exporter",
        "namespace": "tectonic-system"
    },
    "spec": {
        "endpoints": [
            {
                "interval": "60s",
                "port": "http-metrics"
            },
            {
                "interval": "60s",
                "metricRelabelings": [
                    {
                        "sourceLabels": [
                            "__address__"
                        ],
                        "targetLabel": "__param_target"
                    },
                    {
                        "sourceLabels": [
                            "__param_target"
                        ],
                        "targetLabel": "instance"
                    },
                    {
                        "replacement": "127.0.0.1:9115",
                        "targetLabel": "__address__"
                    }
                ],
                "params": {
                    "module": [
                        "http_2xx"
                    ],
                    "target": [
                        "https://vault.example.com/v1/sys/health"
                    ]
                },
                "path": "/probe",
                "targetPort": 9115,
                "tlsConfig:": {
                    "serverName": "vault.example.com"
                }
            }
        ]
    }
}

I tried to configure metricRelabelings in the endpoints entry of the ServiceMonitor, but it does not seem to do anything

Looks like this is the only blocker now?

Any more updates on this?

Docs, which I said I'd help with. heh.

Getting prom-operator upgraded in tectonic would be nice. :)

https://github.com/coreos/prometheus-operator/issues/481#issuecomment-361526352

I followed the deployment and service and while the config updates, the endpoint won't display under targets... am I doing something wrong?

@shamsalmon Which Deployment and Service? There were a couple thrown around here.

How about the prometheus-operator pod? I've had to track down config syntax errors in the logs there.

I don't see a log which would indicate it being added to the config. But it is in the config, just not in the targets.

I've seen an underlying issue with the blackbox probe causing that. Or it's "ntp is out of sync".

To find the error I've looked at recent probe logs on the blackbox_exporter page. (port-forward and open localhost:9115 in a browser)

$ kubectl --context ... -n ... port-forward blackbox-exporter-$blah 9115

@adamdecaf it does not look like it's doing any probes at all. I see no recent probes in the blackbox exporter. Is there any way to see the status of the ServiceMonitor?

@shamsalmon That'd be in the prometheus-operator logs. I don't know of a log for the ServiceMonitor.

The indentation on your yaml above is a bit hard to diagnose. Are you sure the properties are lined up correctly?

Is Tectonic required for this? I deployed the same Deployment and Service you have exactly, but changed the image name to prom/node-exporter and changed the namespaces. Maybe I missed something? I am just not sure how the target isn't being added but the configuration is... it's not even trying to probe my blackbox exporter, so I am a little lost. Is that secret file prometheus-.yaml required as stated above? I do not see it anywhere else under prometheus-operator.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: blackbox-exporter
  name: blackbox-exporter
  namespace: monitoring
spec:
  endpoints:
  - interval: 60s
    port: http-metrics
  - interval: 60s
    params:
      module:
      - http_2xx
      target:
      - 10.10.1.120:9200
    path: "/probe"
    targetPort: 9115
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: blackbox-exporter
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: blackbox-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      containers:
      - image: prom/blackbox-exporter:v0.11.0
        name: blackbox-exporter
        ports:
        - containerPort: 9115
          name: metrics

I got it working... it was as simple as missing the service config.

It would be awesome if someone could make a PR documenting how to connect prometheus-operator to blackbox.

Those can start a scrape, but they don't solve the __param_target relabeling on metrics.
I have found a PR that can help us: https://github.com/coreos/prometheus-operator/pull/923

As shown in the Prometheus docs:

Labels starting with __ will be removed from the label set after relabeling is completed.

And of course this label is not available for metricRelabelings.
Is it possible to preserve all __param labels?
That would solve this issue...

An update on this:

I tried to configure metricRelabelings in the endpoints entry of the ServiceMonitor, but it does not seem to do anything

I now have working relabelings for a ServiceMonitor + blackbox_exporter combo.
I ended up generating the ServiceMonitor from a Python controller, but it should be easy to adapt for other cases.

Here's the piece of code that generates an endpoint:

target_url = 'https://hostname/path'
endpoint = {
  'interval': '60s',
  'params': {
    'module': ['my-http-module'],
    'target': [target_url]
  },
  'path': '/probe',
  'targetPort': 9115,
  'metricRelabelings': [
    {
      # Empty sourceLabels plus a static replacement attaches a constant
      # target_url label to every metric scraped from this endpoint.
      'sourceLabels': [],
      'targetLabel': 'target_url',
      'replacement': target_url
    }
  ]
}

I'm adding several dozen of those to the same ServiceMonitor with different values for target_url. They each point to the same blackbox_exporter pod in the end, but the params are different for each.

The metricRelabelings directive adds a new target_url label to every metric collected by this specific endpoint to expose the URL value. Note that's a little redundant, because the URL is passed both as a /probe?target query param and included as a label. But since I'm generating the ServiceMonitor, it's no big deal.

Anyway, it ain't the best solution, but it's something :)

I actually really like your approach as it's very similar to what I was imagining to automate with the BlackboxMonitor as described in #923.

I've followed @adamdecaf's config and adjusted the manifests to deploy an SNMP exporter to collect data from my router. The files can be seen at https://github.com/carlosedp/prometheus-operator-ARM/tree/master/manifests/snmp-exporter

I am having trouble getting this working with the snmp-exporter. I followed the instructions provided by @carlosedp but the operator doesn't seem to pick up the configuration changes. I created the service and deployment as above and changed the ServiceMonitor to the following.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: snmp-exporter
  name: snmp-exporter
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      app: snmp-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  endpoints:
  - interval: 60s
    port: http-metrics
    params:
      module:
      - default
      target:
      - 1.2.3.4
    path: "/snmp"
    targetPort: 9116

Any idea why this wouldn't work? I don't see anything in the prometheus-operator logs that say it has updated the configuration or would otherwise indicate that these changes have gotten into Prometheus.

@jmreicha this thread is about blackbox-exporter support, please open a new issue if you're having problems with another type of exporter.

Hey guys, I created a PR that implements a helm chart for the blackbox exporter, as well as an example of how to use additionalScrapeConfigs to add a blackbox scrape in prometheus/values.yaml.

https://github.com/coreos/prometheus-operator/pull/1465

There is still work to be done, but it's at least a start.

Nice! I have some code lying around for a blackbox helm chart, I'll try to find time to enhance yours.

Any update on this issue? I'm trying to get blackbox-exporter to work with a ServiceMonitor.
I tried to add this in my blackbox-exporter ServiceMonitor:

    metricRelabelings:
    - sourceLabels:
      - __param_target
      targetLabel: target
      action: replace
    - sourceLabels:
      - __param_module
      targetLabel: module
      action: replace

but it doesn't seem to work.

I've been playing around with the example provided by @adamdecaf - that was very helpful, thank you!
So I was able to see my legacy services' health status under 'blackbox exporter', which is a good step forward.

Now I would like to have my legacy services separate, i.e. monitored by individual service monitors. I approached this by creating several Service objects selecting the same blackbox exporter pod, i.e. one 'exporter' service per legacy service I want to monitor, and for each of those services a ServiceMonitor object.

I found that in the prometheus dashboard under each of my service monitors _all_ my legacy services' endpoints are listed. This is not what I hoped for, of course.

I thought I could circumvent this by instantiating a blackbox exporter pod for each endpoint I want to monitor, but this yields the same result.

Any idea?


FYI, I did succeed by using metricRelabelings in the ServiceMonitor, but it does not match my original purpose. I want to use the blackbox exporter to probe all my internal Kubernetes Services,
but the ServiceMonitor will only create a kubernetes_sd_config with role: endpoints, while I need role: service.

Finally I succeeded by using additionalScrapeConfigs in the PrometheusSpec to add a new scrape config to the prometheus.yml config file. A sample scrape job config for blackbox with services can be found here:
https://github.com/prometheus/prometheus/blob/70c98a06f13a167795f867640f6702adaaef2e2f/documentation/examples/prometheus-kubernetes.yml#L177

Hope it helps.
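
For concreteness, wiring that up looks roughly like this (Secret and key names are placeholders): put the extra jobs in a Secret and reference it from the Prometheus spec:

    kubectl -n monitoring create secret generic additional-scrape-configs \
      --from-file=prometheus-additional.yaml

    apiVersion: monitoring.coreos.com/v1
    kind: Prometheus
    metadata:
      name: k8s
      namespace: monitoring
    spec:
      additionalScrapeConfigs:
        name: additional-scrape-configs   # Secret in the same namespace
        key: prometheus-additional.yaml   # key inside the Secret holding the jobs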

@justlaputa that is what I would recommend you to do right now. We do intend to add tighter support for blackbox probing in the Prometheus Operator.

Replying to myself above: running separate blackbox exporter processes actually yields the expected result. The effect I described resulted from parse errors logged by the prometheus process (the ServiceMonitor YAMLs were syntactically correct, but obviously some of the metricRelabelings I had in one of them weren't).

I have been trying to set up ingress blackbox monitoring when using the prometheus operator with the additional config parameter in prometheus using this.
The service and endpoint blackbox monitoring works fine; the ingress monitoring doesn't. Suggestions?

I was able to use prometheus-operator along with blackbox-exporter to set up scrape jobs for both internal Kubernetes services and external URLs.

The first uses Prometheus' service discovery for Kubernetes services, while the second is based on static configs. The static configs can easily be replaced with file-based discovery if you want to define http endpoints in separate files.

Here's an example you can put into values.yaml when installing prometheus-operator:

    additionalScrapeConfigs:
      ## Monitor internal Kubernetes services
      # The blackbox-exporter should be installed separately via https://github.com/helm/charts/tree/master/stable/prometheus-blackbox-exporter
      - job_name: 'kubernetes-services'
        # blackbox-exporter path to scrape
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
        - role: service
        relabel_configs:

        # 1. Example relabel to probe only some services that have "example.io/should_be_probed = true" annotation
        - source_labels: [__meta_kubernetes_service_annotation_example_io_should_be_probed]
          action: keep
          regex: true

        # 2. Save address in a separate label
        - source_labels: [__address__]
          target_label: __param_target

        # 3. Replace address with an internal blackbox service so scraper is always pointed at blackbox-exporter
        - target_label: __address__
          replacement: blackbox-exporter-service:9115

        # 4. Save address in an instance label since __param_target is going to be dropped
        - source_labels: [__param_target]
          target_label: instance

        # 5. Save module in a module label since __param_module is going to be dropped
        - source_labels: [__param_module]
          target_label: module

        # 6. Add namespace and service name labels
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          target_label: kubernetes_name

      ## Monitor external services
      - job_name: 'external-services'
        # blackbox-exporter path to scrape
        metrics_path: /probe
        static_configs:
          - labels:
              module: http_2xx
            targets:
              - https://kubernetespodcast.com/about/index.html
              - https://www.cncf.io
        relabel_configs:
          # 1. Save address in a separate label
          - source_labels: [__address__]
            target_label: __param_target

          # 2. Set __param_module from the static module label so blackbox-exporter probes with the right module
          - source_labels: [module]
            target_label: __param_module

          # 3. Save the target address in the instance label since __param_target is going to be dropped
          - source_labels: [__param_target]
            target_label: instance

          # 4. Replace address with an internal blackbox service so scraper is always pointed at blackbox-exporter
          - target_label: __address__
            replacement: blackbox-exporter-service:9115

I've added comments to each block to make it more helpful.

Any progress on implementing tighter support for this?

It hasn't happened yet.

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

There is still work to be done to have a first class support for blackbox probing.

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

#2832 is working on this API. Hopefully coming soon to your prometheus-operator deployments! :)

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

Hi @iroller
I am trying to monitor Kubernetes services using blackbox-exporter, but I couldn't make it work.
I have been blocked for a week; it would be great if you could give some help.
Below are my settings:
blackbox config:

    modules:
      http_2xx:
        prober: http
        http:
          method: GET
          preferred_ip_protocol: "ip4"
          valid_status_codes: [200]
      http_post_2xx:
        prober: http
        http:
          method: POST
      http_kubernetes_service:
        prober: http
        timeout: 5s
        http:
          headers:
            Accept: "*/*"
            Accept-Language: "en-US"
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          preferred_ip_protocol: "ip4"

Prometheus additional-scrape-configs

- job_name: blackbox-exporter-kubernetes-services
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  metrics_path: /probe
  params:
    module: [http_2xx]
  kubernetes_sd_configs:
  - role: service
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probed]
    action: keep
    regex: true
  - source_labels: [__address__]
    target_label: __param_target
  - target_label: __address__
    replacement: monitoring-blackbox-exporter.kyma-system.svc.cluster.local:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name

Error in Prometheus targets:
Get http://monitoring-blackbox-exporter.kyma-system.svc.cluster.local:9115/probe?module=http_2xx&target=kiali.my-ns.svc%3A20001: read tcp {ip}:40780->{ip}:9115: read: connection reset by peer

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

https://github.com/coreos/prometheus-operator/pull/2832 is progressing nicely :)

Since we merged https://github.com/coreos/prometheus-operator/pull/2832 we can close this issue for now. If there is anything missing for this feature or bugs feel free to open a new issue. 🎉

For future googlers:

helm install --generate-name --namespace monitoring -f custom-values.yaml prometheus-community/prometheus-blackbox-exporter

custom-values.yaml:
```yaml
config:
  modules:
    tcp_connect:
      prober: tcp
      timeout: 5s

serviceMonitor:
  enabled: true
  targets:
    - name: clamav
      url: clamav.foo:3310
      labels:
        app: blackbox
        release: bar
      interval: 15s
      scrapeTimeout: 15s
      module: tcp_connect
      additionalMetricsRelabels: {}
```

To make the above solution work, you also need to configure prometheus as written in the note here.

set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues and prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues to false.
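
In values.yaml terms that is:

    prometheus:
      prometheusSpec:
        podMonitorSelectorNilUsesHelmValues: false
        serviceMonitorSelectorNilUsesHelmValues: false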

Another solution is to use the Probe CRD. Here is an example. Note for the Probe CRD:

Probe, which declaratively specifies how groups of ingresses or static targets should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.
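
A minimal sketch of such a Probe (exporter address and target URL are placeholders):

    apiVersion: monitoring.coreos.com/v1
    kind: Probe
    metadata:
      name: example-http-probe
      namespace: monitoring
    spec:
      module: http_2xx
      prober:
        url: blackbox-exporter.monitoring.svc:9115   # where the blackbox exporter listens
      targets:
        staticConfig:
          static:
            - https://www.cncf.io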
