Describe the bug
Tried following issues that were closed dating back to Dec 2018. I'm looking to create Grafana dashboard folders with custom dashboards rather than having everything in the General folder. I'm using the Prometheus Operator chart to deploy Grafana, but it seems to me like adding these folders is extremely over-engineered. Maybe I'm not understanding correctly, and an explanation would be very much appreciated, as I've spent way too long on this.
Current Setup:
prometheusOperator:
  createCustomResource: true
  nameOverride: prom-op
  fullNameOverride: prom-op
prometheus:
  rbac:
    roleNameSpaces:
      - metrics
      - kube-system
      - entitlement
      - infra
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: pd-ssd
          accessModes: ["ReadWriteOnce"]
    additionalScrapeConfigs:
      - job_name: kubernetes-nodes-cadvisor
        scrape_interval: 10s
        scrape_timeout: 10s
        scheme: https  # remove if you want to scrape metrics on insecure port
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
        metric_relabel_configs:
          - action: replace
            source_labels: [id]
            regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$'
            target_label: rkt_container_name
            replacement: '${2}-${1}'
          - action: replace
            source_labels: [id]
            regex: '^/system\.slice/(.+)\.service$'
            target_label: systemd_service_name
            replacement: '${1}'
  service:
    annotations:
      beta.cloud.google.com/backend-config: '{"ports": {"9090":"cloud-iap-backendconfig"}}'
    type: NodePort
grafana:
  admin:
    existingSecret: "grafana-admin-auth"
    userKey: admin-user
    passwordKey: admin-password
  persistence:
    accessModes: ["ReadWriteOnce"]
  envFromSecret: "grafana-google-auth"
  service:
    annotations:
      beta.cloud.google.com/backend-config: '{"ports": {"80":"cloud-iap-backendconfig"}}'
    type: NodePort
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        # - name: 'default'
        #   orgId: 1
        #   folder: ''
        #   type: file
        #   disableDeletion: false
        #   editable: true
        #   options:
        #     path: /var/lib/grafana/dashboards/default
        - name: 'cost'
          orgId: 1
          folder: 'Costs Overview'
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /tmp/dashboards/costsOverview
        - name: 'nodes'
          orgId: 1
          folder: 'Nodes Overview'
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /tmp/dashboards/nodesOverview
        - name: 'nginx'
          orgId: 1
          folder: 'Nginx Overview'
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /tmp/dashboards/nginx
  # dashboards:
  #   default:
  #     kubernetes-cluster:
  #       gnetId: 7249
  #       datasource: Prometheus
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: standard
          accessModes: ["ReadWriteOnce"]
  service:
    annotations:
      beta.cloud.google.com/backend-config: '{"ports": {"9093":"cloud-iap-backendconfig"}}'
    type: NodePort
(as you can see by the commented-out stuff, I've tried a bunch of things)
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: "1"
  annotations:
    k8s-sidecar-target-directory: "/tmp/dashboards/costsOverview"
  name: cluster-costs-dashboard
data:
{{ (.Files.Glob .Values.dashboards.clusterCosts).AsConfig | indent 4 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: "1"
  annotations:
    k8s-sidecar-target-directory: "/tmp/dashboards/costsOverview"
  name: namespace-costs-dashboard
data:
{{ (.Files.Glob .Values.dashboards.namespaceCosts).AsConfig | indent 4 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: "1"
  annotations:
    k8s-sidecar-target-directory: "/tmp/dashboards/costsOverview"
  name: pod-costs-dashboard
data:
{{ (.Files.Glob .Values.dashboards.podCosts).AsConfig | indent 4 }}
(this is how my dashboard ConfigMaps are created)
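For reference, those templates glob JSON files via entries under .Values.dashboards; a minimal sketch of what that part of the values file would contain (the paths here are placeholders, since that section isn't shown above):

dashboards:
  clusterCosts: dashboards/costsOverview/cluster-costs.json      # placeholder path
  namespaceCosts: dashboards/costsOverview/namespace-costs.json  # placeholder path
  podCosts: dashboards/costsOverview/pod-costs.json              # placeholder path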
The Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2019-07-31T14:21:13Z"
  generation: 4
  labels:
    app: grafana
    chart: grafana-3.7.3
    heritage: Tiller
    release: promop
  name: promop-grafana
  namespace: metrics
  resourceVersion: "79448889"
  selfLink: /apis/extensions/v1beta1/namespaces/metrics/deployments/promop-grafana
  uid: 732f27b0-b39e-11e9-8355-42010a800230
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana
      release: promop
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/config: b16a0049f50f3f0a607606edc62d051331b997c834edb927d2f8c554523581b8
        checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
        checksum/sc-dashboard-provider-config: a3e3e098584c6eebdc829f679967d23614ee6feb1d6e0a36d72b0fe4395f288a
      creationTimestamp: null
      labels:
        app: grafana
        release: promop
    spec:
      containers:
      - env:
        - name: LABEL
          value: grafana_dashboard
        - name: FOLDER
          value: /tmp/dashboards
        - name: RESOURCE
          value: both
        image: kiwigrid/k8s-sidecar:0.0.18
        imagePullPolicy: IfNotPresent
        name: grafana-sc-dashboard
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/dashboards
          name: sc-dashboard-volume
      - env:
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: admin-user
              name: grafana-admin-auth
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: admin-password
              name: grafana-admin-auth
        - name: GF_AUTH_GOOGLE_ALLOWED_DOMAINS
          value: <REDACTED>
        - name: GF_AUTH_GOOGLE_ALLOW_SIGN_UP
          value: "true"
        - name: GF_AUTH_GOOGLE_AUTH_URL
          value: https://accounts.google.com/o/oauth2/auth
        - name: GF_AUTH_GOOGLE_ENABLED
          value: "true"
        - name: GF_AUTH_GOOGLE_SCOPES
          value: https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
        - name: GF_AUTH_GOOGLE_TOKEN_URL
          value: https://accounts.google.com/o/oauth2/token
        - name: GF_SERVER_ROOT_URL
          value: <REDACTED>
        envFrom:
        - secretRef:
            name: grafana-google-auth
        image: grafana/grafana:6.2.5
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: grafana
        ports:
        - containerPort: 80
          name: service
          protocol: TCP
        - containerPort: 3000
          name: grafana
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/grafana/grafana.ini
          name: config
          subPath: grafana.ini
        - mountPath: /var/lib/grafana
          name: storage
        - mountPath: /etc/grafana/provisioning/dashboards/dashboardproviders.yaml
          name: config
          subPath: dashboardproviders.yaml
        - mountPath: /tmp/dashboards
          name: sc-dashboard-volume
        - mountPath: /etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml
          name: sc-dashboard-provider
          subPath: provider.yaml
        - mountPath: /etc/grafana/provisioning/datasources
          name: sc-datasources-volume
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - chown
        - -R
        - 472:472
        - /var/lib/grafana
        image: busybox:1.30
        imagePullPolicy: IfNotPresent
        name: init-chown-data
        resources: {}
        securityContext:
          procMount: Default
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: storage
      - env:
        - name: METHOD
          value: LIST
        - name: LABEL
          value: grafana_datasource
        - name: FOLDER
          value: /etc/grafana/provisioning/datasources
        - name: RESOURCE
          value: both
        image: kiwigrid/k8s-sidecar:0.0.18
        imagePullPolicy: IfNotPresent
        name: grafana-sc-datasources
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/grafana/provisioning/datasources
          name: sc-datasources-volume
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 472
        runAsUser: 472
      serviceAccount: promop-grafana
      serviceAccountName: promop-grafana
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: promop-grafana
        name: config
      - name: storage
        persistentVolumeClaim:
          claimName: promop-grafana
      - emptyDir: {}
        name: sc-dashboard-volume
      - configMap:
          defaultMode: 420
          name: promop-grafana-config-dashboards
        name: sc-dashboard-provider
      - emptyDir: {}
        name: sc-datasources-volume
How can I get both the sidecar and the custom dashboard providers to be deployed at the same time without the sc-dashboard-provider overwriting the custom dashboard provider? Is there a simpler way to get specific dashboards into different folders? (I didn't see anything in the Grafana docs for the dashboard JSON model referencing a location or folder attribute.)
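For context, the chart-generated sc-dashboard-provider (mounted in the deployment above as sc-dashboardproviders.yaml) contains a file provider roughly like the following; the empty folder, combined with a path at the same /tmp/dashboards root that my custom providers point into, appears to be why everything collapses into the General folder. This is a sketch; the exact provider name and fields may differ by chart version:

apiVersion: 1
providers:
  - name: 'sidecarProvider'   # approximate name as generated by the chart
    orgId: 1
    folder: ''                # empty folder => dashboards land in General
    type: file
    disableDeletion: false
    options:
      path: /tmp/dashboards   # same root my custom providers use subdirectories of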
Currently, if I try to disable the sidecar, the deployment crashes.
If I try to change the directory of the custom dashboards to my persistent-volume directory, so that the sc-dashboard-provider doesn't overwrite my dashboard provider (the sc-dashboard-provider picks up anything in /tmp/dashboards rather than specific directories), it also crashes.
Is there a way to override the sc-dashboard-provider or remove it from the deployment using the Helm chart? If not, is there a simpler way to achieve my end goal?
Version of Helm and Kubernetes:
Kubectl:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.8-gke.10", GitCommit:"f53039cc1e5295eed20969a4f10fb6ad99461e37", GitTreeState:"clean", BuildDate:"2019-06-19T20:48:40Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
Helm:
client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
(using tillerless helm)
Still stuck on this with no results. If anyone knows how to set this up, even in a completely different way, any guidance would be great!
@ori78 we encountered a similar issue and had to roll back to 3.7.2 in order to get it working the way we had intended.
It looks like the PR that broke the flow for us is this one: https://github.com/helm/charts/pull/15770
Up to and including 3.7.2 we had been using dashboardProviders and the sidecar successfully, i.e. in our values.yaml for the chart:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'Data Stores'
        orgId: 1
        folder: 'Data Stores'
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/data-stores
      - name: 'Apps'
        orgId: 1
        folder: 'Apps'
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/apps
      [ ... ]
sidecar:
  dashboards:
    enabled: true
    label: grafana_dashboards
    folder: /var/lib/grafana/dashboards
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-data-stores-etcd
  labels:
    grafana_dashboards: "true"
  annotations:
    k8s-sidecar-target-directory: "/var/lib/grafana/dashboards/data-stores"
data:
  etcd.json: |-
    [ ... ]
Note: we're using the stable/grafana chart, not the prometheus-operator one, so I'm not sure of the differences there.
With the changes in 3.7.3 we haven't found a way yet to enable dashboard provisioning with folders. The dashboards get provisioned but all in one big list.
@dashford that's the same PR that broke things for us. Unfortunately, we can't roll back without rolling back the entire prometheus-operator chart, which would effectively roll us back to Grafana chart 3.5. The current prometheus-operator chart pins a 3.7.* Grafana chart requirement:
https://github.com/helm/charts/commit/8a0412450d29191e41410bd8a89acacbb3a8e525#diff-c8273364ef1eeb19bad12f3168779c8fR14
which resolves to the latest stable version, 3.7.3. Hopefully a fix comes soon, otherwise we will be stuck with one big list (or a rollback to an earlier version than we want).
Any information on this would really be appreciated. Still very stuck on this
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Not Stale! Still stuck on this!
Sorry to see you stuck on this, @ori78; we're still on 3.7.2, but our scheduled upgrade plan should mean we'll be looking into this again soon.
I don't know what changes have gone in since that version but if we find any solution I'll let you know.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
We're getting the same problem too ...
It's kind of unfortunate this is still a bug. In the meantime, I've just had to let the dashboards be all over the place, but hopefully it gets fixed soon so I can put them into properly organized folders.
@ori78 I was looking for a solution to one problem and found a solution to another. @syst0m has posted his values.yaml here: https://github.com/prometheus/prometheus/issues/6090 and his YAML file helped me find a way to work with multiple folders using the sidecar and dashboardProviders in Grafana ...
Hey @ori78 @lgchiaretto, this broke our workflow too, so I've made an attempt at resolving it in #19177 (we've already applied that solution to our charts), but we'll see if it's acceptable by community standards.
@lgchiaretto What's the solution?
The same problem for us.
We are using the prometheus-operator chart; this issue is a blocker for us.
As I understand it, it should just work as-is, since sidecar.dashboards.SCProvider is true by default. We should be able to use sidecar.dashboards.enabled as expected. I haven't tested yet, though.
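If sidecar.dashboards.SCProvider does what its name suggests, a minimal sketch of combining the sidecar with custom providers might look like this (hedged: SCProvider availability depends on the chart version, and the folder names and paths here are placeholders):

grafana:
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      folder: /tmp/dashboards
      SCProvider: false          # assumption: suppresses the chart's generated provider
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'cost'           # placeholder provider
          orgId: 1
          folder: 'Costs Overview'
          type: file
          options:
            path: /tmp/dashboards/costsOverview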
Hey @SagurovA93 @vivekanandg, we had issues when both providers were used simultaneously, so we opted to use only one and edited the chart (PR #19177); if you want, you can try updating the chart and doing the same.
Dunno if that's what's bugging you, though.