Hello colleagues,
Grafana dashboards are not loaded automatically when the chart is rendered with helm template, whereas everything works fine with helm install.
helm install --name-template my-release stable/prometheus-operator -n monitoring
(☸ |shoot--test3az--conc-test:DEFAULT)➜ ~ kubectl --namespace monitoring get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-my-release-prometheus-oper-alertmanager-0   2/2     Running   0          11m
my-release-grafana-69b974879-cstff                       2/2     Running   0          11m
my-release-kube-state-metrics-655d6c49f8-r658b           1/1     Running   0          11m
my-release-prometheus-node-exporter-9m7qj                1/1     Running   0          11m
my-release-prometheus-node-exporter-chzd7                1/1     Running   0          11m
my-release-prometheus-node-exporter-k8v9n                1/1     Running   0          11m
my-release-prometheus-oper-operator-54584f767d-2njf8     2/2     Running   0          11m
prometheus-my-release-prometheus-oper-prometheus-0       3/3     Running   1          11m
helm template -f values.yaml --name-template my-release stable/prometheus-operator -n monitoring
(☸ |shoot--test3az--conc-test:DEFAULT)➜ ~ k get po -n monitoring
NAME                                                 READY   STATUS      RESTARTS   AGE
alertmanager-prometheus-operator-alertmanager-0      2/2     Running     0          41s
prometheus-operator-admission-create-gskpl           0/1     Completed   0          80s
prometheus-operator-admission-patch-kbvvs            0/1     Completed   2          81s
prometheus-operator-grafana-d9ccb4fbf-25tfn          2/2     Running     0          110s
prometheus-operator-grafana-test                     0/1     Completed   0          88s
prometheus-operator-operator-fd5bccd59-rd4k4         2/2     Running     0          109s
prometheus-operator-prometheus-node-exporter-4kv76   1/1     Running     0          110s
prometheus-operator-prometheus-node-exporter-d7p52   1/1     Running     0          110s
prometheus-operator-prometheus-node-exporter-nf6cp   1/1     Running     0          110s
prometheus-prometheus-operator-prometheus-0          3/3     Running     1          31s
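For context, helm template renders the chart entirely offline, so the manifests above were presumably applied with something along these lines (a sketch; the exact pipeline is not shown in the thread):

```sh
# Render the chart without contacting the cluster and pipe the manifests
# straight into kubectl. Because helm template never talks to the API
# server, template logic that depends on .Capabilities (for example the
# detected Kubernetes version) falls back to built-in defaults, which
# becomes relevant later in this thread.
helm template -f values.yaml --name-template my-release \
  stable/prometheus-operator -n monitoring | kubectl apply -n monitoring -f -
```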
Logs:
(☸ |shoot--test3az--conc-test:DEFAULT)➜ ~ k logs po/prometheus-operator-grafana-d9ccb4fbf-rkjt9 --all-containers=true -n monitoring
Starting collector
No folder annotation was provided, defaulting to k8s-sidecar-target-directory
Selected resource type: ('secret', 'configmap')
Config for cluster api loaded...
Working on secret: monitoring/alertmanager-prometheus-operator-alertmanager
Working on secret: monitoring/prometheus-operator-grafana
Working on configmap: monitoring/prometheus-operator-grafana
Working on configmap: monitoring/prometheus-operator-grafana-config-dashboards
Working on configmap: monitoring/prometheus-operator-grafana-datasource
Found configmap with label
Working on configmap: monitoring/prometheus-operator-grafana-test
Starting collector
No folder annotation was provided, defaulting to k8s-sidecar-target-directory
Selected resource type: ('secret', 'configmap')
Config for cluster api loaded...
Working on configmap monitoring/prometheus-operator-grafana
Working on configmap monitoring/prometheus-operator-grafana-config-dashboards
Working on configmap monitoring/prometheus-operator-grafana-test
Working on configmap monitoring/prometheus-operator-grafana-datasource
Working on secret monitoring/prometheus-operator-grafana
Working on secret monitoring/alertmanager-prometheus-operator-alertmanager
Working on configmap monitoring/prometheus-prometheus-operator-prometheus-rulefiles-0
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus-tls-assets
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus
Working on secret monitoring/prometheus-prometheus-operator-prometheus
ProtocolError when calling kubernetes: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
Working on configmap monitoring/prometheus-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-operator-grafana-datasource
Working on configmap monitoring/prometheus-operator-grafana
Working on configmap monitoring/prometheus-operator-grafana-config-dashboards
Working on configmap monitoring/prometheus-operator-grafana-test
ProtocolError when calling kubernetes: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
Working on secret monitoring/prometheus-operator-grafana
Working on secret monitoring/prometheus-prometheus-operator-prometheus-tls-assets
Working on secret monitoring/alertmanager-prometheus-operator-alertmanager
Working on secret monitoring/prometheus-prometheus-operator-prometheus
ProtocolError when calling kubernetes: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
ts=2020-01-08T07:46:49.468017695Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."
level=info ts=2020-01-08T07:46:49.468197412Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml
level=error ts=2020-01-08T07:46:49.468878106Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://127.0.0.1:9090/-/reload: dial tcp 127.0.0.1:9090: connect: connection refused"
level=info ts=2020-01-08T07:46:54.510000181Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
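The sidecar in the log above only picks up ConfigMaps that carry the dashboard label it watches for (hence the "Found configmap with label" line), so a quick sanity check is to list them directly. This is a sketch assuming the chart's default label key grafana_dashboard; it differs if grafana.sidecar.dashboards.label was overridden:

```sh
# ConfigMaps the Grafana dashboard sidecar should discover in the
# watched namespace; an empty result would explain the empty dashboards.
kubectl get configmaps -n monitoring -l grafana_dashboard=1
```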
Please adjust the prefix in the title from [prometheus-operator] to [stable/prometheus-operator], as stated in the issue template.
What version of Helm did you use? And is Grafana working at all, i.e. is Grafana itself not being created, or is it only the dashboards inside Grafana that are missing?
@krichter722 I am using Helm 3. Everything, including Grafana, gets created, but the dashboards stay empty with helm template; with helm install it all works fine.
@krichter722 Any update on this issue?
Experiencing the same issue. Any update?
This is a blocking issue for deploying the Prometheus exporter via helm template.
@dmyerscough @sjentzsch @bitnami-bot
@gowrisankar22 Try adding namespace: {{ $.Release.Namespace }} into the metadata of the header object in sync_grafana_dashboards.py (https://github.com/helm/charts/blob/cc4d7d91c142c6b06907be942e0c78a3286084cf/stable/prometheus-operator/hack/sync_grafana_dashboards.py#L71)
In my values file, I also added the following line:
kubeTargetVersionOverride: "x.x.x"
with the x's representing your Kubernetes version, for use with the ConfigMaps.
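If editing the values file is inconvenient, the same override can be passed on the command line instead (a sketch; substitute your cluster's actual version):

```sh
# Equivalent to adding kubeTargetVersionOverride to values.yaml.
# "kubectl version --short" prints the server version to plug in here.
helm template -f values.yaml --name-template my-release \
  stable/prometheus-operator -n monitoring \
  --set kubeTargetVersionOverride=1.14.8
```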
Thanks a lot @gibbonsjohnm 👍 That did the trick.
This is still an issue without resolution and should not be closed. The following changes do not fix the issue for me. Installing in the default namespace, which presumably would alleviate the need for the changes, also does not fix it.
stable/prometheus-operator$ git diff
diff --git a/stable/prometheus-operator/hack/sync_grafana_dashboards.py b/stable/prometheus-operator/hack/sync_grafana_dashboards.py
index 4686d3356..32574102a 100755
--- a/stable/prometheus-operator/hack/sync_grafana_dashboards.py
+++ b/stable/prometheus-operator/hack/sync_grafana_dashboards.py
@@ -77,6 +77,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%%s-%%s" (include "prometheus-operator.fullname" $) "%(name)s" | trunc 63 | trimSuffix "-" }}
+ namespace: {{ $.Release.Namespace }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: "1"
diff --git a/stable/prometheus-operator/values.yaml b/stable/prometheus-operator/values.yaml
index a8d3059dd..c0ea7f064 100644
--- a/stable/prometheus-operator/values.yaml
+++ b/stable/prometheus-operator/values.yaml
@@ -2,6 +2,8 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
+kubeTargetVersionOverride: "1.14.8"
+
## Provide a name in place of prometheus-operator for `app:` labels
##
nameOverride: ""
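One way to verify whether the patch takes effect is to render from the local checkout and look for the explicit namespace on the generated manifests (a sketch assuming the chart is patched in a local working copy):

```sh
# Count rendered lines that now carry an explicit namespace; zero
# matches would mean the patched template is not actually being used.
helm template -f values.yaml --name-template my-release \
  ./stable/prometheus-operator -n monitoring | grep -c 'namespace: monitoring'
```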
You are right. I tried it yesterday, but it still didn't work for me, even on GKE 1.15.9.
@vsliouniaev @bismarck @gianrubio @VLZZZ can you please suggest some workarounds for this issue?
Try specifying the Kubernetes version with --kube-version when you use helm template.
@vsliouniaev This option is not available with Helm 3. I have created PR #21263 to fix the issue. Can you review it?
Looks like this was changed to --api-versions in Helm 3, which can be used to set .Capabilities.KubeVersion.GitVersion.
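For later readers, both flags can be tried roughly as follows (a sketch; which flag is available depends on the exact Helm release):

```sh
# Later Helm 3 releases restored --kube-version for helm template:
helm template -f values.yaml --name-template my-release \
  stable/prometheus-operator -n monitoring --kube-version 1.15.9

# On Helm 3 releases without it, --api-versions can pin the rendered
# capabilities instead, as suggested in the comment above.
helm template -f values.yaml --name-template my-release \
  stable/prometheus-operator -n monitoring --api-versions monitoring.coreos.com/v1
```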