What did you do?
Set up a Prometheus server, following the instructions at https://prometheus.io/docs/introduction/install/ and https://prometheus.io/docs/introduction/getting_started/ .
What did you expect to see?
I expected to find documentation for running Prometheus without a static prometheus.yml file, or at least an API that would let me supply new scrape configs to Prometheus via a POST call. I found an external link on Google describing the reload API (`curl -X POST http://localhost:9090/-/reload`), but it still seems to require that the prometheus.yml file be edited manually to add the new scrape_configs entry (job_name + scrape_interval + target_groups).
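For reference, the workflow I ended up with is sketched below: append the new scrape config to the file on disk, then hit the reload endpoint. This is just an illustration, not anything Prometheus provides; it assumes the config lives at /etc/prometheus/prometheus.yml, uses the 0.18.x `target_groups` key, and assumes the server is listening on localhost:9090.

```python
import urllib.request


def scrape_config_snippet(job_name, targets, scrape_interval="15s"):
    """Render one scrape_configs entry as indented YAML text (0.18.x syntax)."""
    lines = [
        "  - job_name: '%s'" % job_name,
        "    scrape_interval: %s" % scrape_interval,
        "    target_groups:",
        "      - targets: [%s]" % ", ".join("'%s'" % t for t in targets),
    ]
    return "\n".join(lines) + "\n"


def append_and_reload(path="/etc/prometheus/prometheus.yml",
                      reload_url="http://localhost:9090/-/reload"):
    """Append a new job to the config file, then ask the running server to
    re-read it (same effect as sending SIGHUP)."""
    with open(path, "a") as f:
        f.write(scrape_config_snippet("node", ["localhost:9100"], "10s"))
    urllib.request.urlopen(urllib.request.Request(reload_url, method="POST"))
```

The job name `node` and target `localhost:9100` are made-up examples; the snippet must land under an existing `scrape_configs:` key, which is why it is indented two spaces.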
What did you see instead? Under which circumstances?
I couldn't find any further API documentation for dynamically configuring a running Prometheus server.
Environment
Either docker or standalone (OS doesn't matter in this case).
Linux 4.4.8-boot2docker x86_64
Prometheus version:
/prometheus # prometheus -version
prometheus, version 0.18.0 (branch: stable, revision: f12ebd6)
build user: @dfaf0577f3e7
build date: 20160506-15:29:59
go version: go1.5.3
/prometheus #
Alertmanager version:
/prometheus # alertmanager -version
sh: alertmanager: not found
/prometheus #
/prometheus # cat /etc/prometheus/prometheus.yml
# my global config
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    target_groups:
      - targets: ['localhost:9090']
/prometheus #
The flat file and SIGHUP/reload is the API we provide, you're free to build something on top of this that takes in POSTs and causes the config to be reloaded.
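A minimal sketch of what "build something on top" could look like: a tiny sidecar HTTP service that accepts a POSTed scrape_configs snippet, appends it to the flat file, and then triggers the reload endpoint. Everything here is an assumption for illustration: the config path, the wrapper port (8000), and the reload URL are not part of any Prometheus API.

```python
import http.server
import urllib.request

CONFIG_PATH = "/etc/prometheus/prometheus.yml"   # assumed location
RELOAD_URL = "http://localhost:9090/-/reload"    # assumed Prometheus address


def apply_snippet(snippet: str, path: str) -> None:
    """Append a posted scrape_configs snippet to the config file,
    ensuring it ends with a newline."""
    with open(path, "a") as f:
        f.write(snippet if snippet.endswith("\n") else snippet + "\n")


class ConfigHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        snippet = self.rfile.read(length).decode("utf-8")
        apply_snippet(snippet, CONFIG_PATH)
        # Ask the running server to re-read the file on disk
        # (same effect as sending it SIGHUP).
        urllib.request.urlopen(urllib.request.Request(RELOAD_URL, method="POST"))
        self.send_response(204)
        self.end_headers()


# To run the sidecar:
#   http.server.HTTPServer(("", 8000), ConfigHandler).serve_forever()
```

Note this does no validation of the posted YAML; a real version would want to check the snippet parses before touching the file, since a bad config makes the reload fail.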
Roger, thanks @brian-brazil
Note that we _do_ provide a POST /-/reload endpoint - but it just reloads whatever is on disk, it doesn't take any new config over HTTP. This is intentional, as otherwise the Prometheus server wouldn't be operational after startup before receiving its config over HTTP or alternatively would have to get into the business of managing its own config persistence.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.