What did you do?
docker pull prom/prometheus
docker run --name prom -d \
  -p 9090:9090 \
  -v /configs/prom/share:/etc/prometheus \
  prom/prometheus \
  -config.file=/etc/prometheus/prometheus.yml \
  -storage.local.memory-chunks=20000 \
  -storage.local.path=/etc/prometheus/data
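One thing that stands out in the command above is that the data directory (-storage.local.path) lives inside the config bind mount. A common alternative is to give the data its own volume; this is a sketch, not a verified fix, and the host path /configs/prom/data plus the in-container path /prometheus are assumptions, not paths from this report:

```shell
# Sketch: separate volumes for config and data, keeping the
# Prometheus 1.x single-dash flags used above.
# /configs/prom/data and /prometheus are illustrative paths.
docker run --name prom -d \
  -p 9090:9090 \
  -v /configs/prom/share/prometheus.yml:/etc/prometheus/prometheus.yml \
  -v /configs/prom/data:/prometheus \
  prom/prometheus \
  -config.file=/etc/prometheus/prometheus.yml \
  -storage.local.memory-chunks=20000 \
  -storage.local.path=/prometheus
```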
niflheim:/configs/prom/share$ ls -la
total 2
drwxrwxrwx 3 1037 users 0 Oct 29 22:25 .
drwxrwxrwx 4 1037 users 0 Oct 29 22:55 ..
drwxrwxrwx 3 1037 users 0 Oct 29 22:41 data
-rwxrwxrwx 1 1037 users 1734 Oct 29 22:25 prometheus.yml
What did you expect to see?
A running Prometheus Docker container which stores its data at /etc/prometheus/data, and correspondingly on the host (mounted from host /configs/prom/share to container /etc/prometheus).
What did you see instead? Under which circumstances?
A docker container which does not start.
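For a container that exits right after start, Docker usually captures the reason; a short sketch for inspecting it, assuming the container name `prom` from the run command above:

```shell
# Show Prometheus's startup output; error messages land here on a failed start.
docker logs prom

# Exit code of the stopped container; non-zero indicates a failed start.
docker inspect -f '{{.State.ExitCode}}' prom

# Confirm the container's status (e.g. "Exited (1) ...").
docker ps -a --filter name=prom
```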
Environment
niflheim:/configs/prom/share$ cat /etc/alpine-release
3.5.2
niflheim:/configs/prom/share$ docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.8.1
Git commit: v17.05.0-ce
Built: Tue May 16 10:10:04 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.8.1
Git commit: v17.05.0-ce
Built: Tue May 16 10:10:04 2017
OS/Arch: linux/amd64
Experimental: false
System information:
niflheim:/configs/prom/share$ uname -srm
Linux 4.4.59-0-grsec x86_64
Prometheus version:
Newest, pulled from Docker Hub on 30.10.17
Alertmanager version:
Environment without Alertmanager
Prometheus configuration file:
niflheim:/configs/prom/share$ cat prometheus.yml
# my global config
global:
  scrape_interval: 2s # Set the scrape interval to every 2 seconds. Default is every 1 minute.
  evaluation_interval: 2s # Evaluate rules every 2 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
#scrape_configs:
#  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
#  - job_name: 'prometheus'
#
#    # metrics_path defaults to '/metrics'
#    # scheme defaults to 'http'.
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets:
          - 192.168.1.253:9090

  - job_name: 'powerdns'
    static_configs:
      - targets:
          - 192.168.1.253:9120 # Master PDNS
    metrics_path: /metrics

  - job_name: 'snmp'
    scrape_interval: 1s
    static_configs:
      - targets:
          - 192.168.1.2   # SNMP device.
          - 192.168.1.4   # SNMP device.
          - 192.168.1.5   # NAS device.
          - 192.168.1.100 # NAS device.
    metrics_path: /snmp
    params:
      module: [default]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.1.253:9116 # SNMP exporter.
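Before debugging the container itself, it can help to validate the config file; a sketch using the promtool shipped inside the image. The binary path /bin/promtool is an assumption about the image layout, and the `check-config` spelling applies to Prometheus 1.x (2.x renamed it to `check config`):

```shell
# Validate prometheus.yml with the image's own promtool.
# /bin/promtool is an assumed path; 1.x uses "check-config", 2.x "check config".
docker run --rm \
  -v /configs/prom/share/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint /bin/promtool \
  prom/prometheus \
  check-config /etc/prometheus/prometheus.yml
```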
No Logs available
When I don't use the option "-storage.local.path=/etc/prometheus/data", the container starts and works for a few days, then stops scraping data.
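Separately, "works for a few days, then stops scraping" is a known pattern with Prometheus 1.x local storage when memory chunks run low and the server throttles scrapes. A hedged sketch of storage tuning flags; the concrete values below are illustrative assumptions, not recommendations:

```shell
# Illustrative Prometheus 1.x storage tuning (values are assumptions):
# more memory chunks reduce scrape throttling; retention bounds disk usage.
docker run --name prom -d \
  -p 9090:9090 \
  -v /configs/prom/share:/etc/prometheus \
  prom/prometheus \
  -config.file=/etc/prometheus/prometheus.yml \
  -storage.local.memory-chunks=262144 \
  -storage.local.retention=360h
```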
It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
Thanks
I'm trying to figure it out and I'm close to throwing my PC across the room!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.