Version of Helm and Kubernetes: v2.5.0
Which chart: Prometheus
What happened: When installing Grafana dashboards (e.g. https://grafana.com/dashboards/162), the dashboard has a lot of gaps.
After investigation, it seems the dashboard requires a relatively small scrape interval, at least smaller than the out-of-the-box default of 1m.
To my surprise, it was not possible to change the scrape interval of the template in an easy way using values.yaml. AFAIK, I have to duplicate the complete Prometheus config template in my values.yaml file.
What you expected to happen:
Just update the scrape interval in values.yaml and upgrade the Helm chart.
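For illustration, the kind of values.yaml override I had in mind. This is a hypothetical sketch; the `server.global` keys below are not ones the chart currently exposes:

```yaml
# Hypothetical values.yaml override (these keys do not exist in the
# current stable/prometheus chart; shown only to illustrate the ask).
server:
  global:
    scrape_interval: 30s   # would be merged into the generated prometheus.yml
```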
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
+1 on this, curious how you got it to work currently @janssk1. I tried attaching to the existing container running in my K8s instance and modifying the config, but it had no effect (and when I later `less`ed the file, my change was gone anyway). I also added the global scrape interval setting in values.yaml, again to no avail.
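For what it's worth, edits made inside the container won't stick because the chart mounts `/etc/config` from a ConfigMap. A sketch of editing the ConfigMap instead; the release name "prometheus" and the resulting ConfigMap name are assumptions based on the chart's naming convention:

```shell
# Release name "prometheus" is an assumption; adjust for your release.
kubectl get configmap                                 # find the server ConfigMap
kubectl edit configmap prometheus-prometheus-server   # edit prometheus.yml in place
# The chart's configmap-reload sidecar should notice the change and
# trigger Prometheus to reload. Editing files inside the pod does not
# persist because /etc/config is mounted read-only from this ConfigMap.
```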
Wouldn't this be a change in Prometheus itself though?
We'd have the scrape interval be passed in optionally as a system property or an env variable, then we'd either pass it in as arguments to the container or as an env var for the container.
It's not trivial to change it otherwise, so the alternative, AFAIK, would be to build our own Docker images with the modified config file and deploy those... not ideal.
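Rather than baking it into an image or passing it as an env var, the chart could also template the value directly. A hypothetical fragment of what the chart's ConfigMap template could look like; `server.global.scrape_interval` is an assumed values key, not one the chart currently defines:

```yaml
# Hypothetical chart template fragment (not the chart's actual template).
prometheus.yml: |-
  global:
    scrape_interval: {{ .Values.server.global.scrape_interval | default "1m" }}
```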
Hi, I changed the values.yaml file in the chart to add a scrape interval, then installed the modified chart.
No need to change the prometheus image.
bash-4.2$ git diff -p

```diff
diff --git a/stable/prometheus/values.yaml b/stable/prometheus/values.yaml
index 385e393..2124034 100644
--- a/stable/prometheus/values.yaml
+++ b/stable/prometheus/values.yaml
@@ -433,6 +433,9 @@ serverFiles:
   rules: ""
   prometheus.yml: |-
+    global:
+      scrape_interval: 30s
+
     rule_files:
       - /etc/config/rules
       - /etc/config/alerts
```
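With the diff above applied, installing the modified chart from a local checkout of the charts repo might look like this (the release name and chart path are assumptions, and `--name` applies to Helm v2):

```shell
# Install the locally modified chart; "prometheus" as a release name
# is an assumption.
helm install --name prometheus ./stable/prometheus
# Or, for an existing release, roll out the changed values:
helm upgrade prometheus ./stable/prometheus
```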