Charts: [stable/grafana] Using sidecar and dashboardProvider causes problems

Created on 27 Sep 2018 · 8 comments · Source: helm/charts

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG

Version of Helm and Kubernetes:
Kubernetes 1.9.7
Helm 2.10.0

Which chart:
stable/grafana:1.14.8

What happened:
When dashboardProviders defines a provider named 'default' (as in the example values file) and the sidecar dashboard import is enabled at the same time, all dashboards are constantly imported and deleted again. The sidecar's ConfigMap hardcodes a provider that is also named 'default' (see the template quoted in the comments below), so both providers consider themselves responsible for all provisioned dashboards, and each deletes the dashboards imported by the other because it cannot find the corresponding file locally.

What you expected to happen:
Dashboards of both providers should be installed.

How to reproduce it (as minimally and precisely as possible):
values.yaml:

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    kubernetes-cluster-monitoring:
      datasource: Prometheus
      gnetId: 315
      revision: 3
sidecar:
  dashboards:
    enabled: true

Anything else we need to know:

lifecycle/stale

All 8 comments

I think I have a similar, or possibly the same, problem.
My config:

sidecar:
  dashboards:
    enabled: true
    folder: /var/lib/grafana/dashboards/default
  resources:
    requests:
      cpu: 50m
      memory: 100Mi

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default

dashboards:
  default:
    kubernetes-pods-1:
      gnetId: 6336
      revision: 1
      datasource: Prometheus

I cannot see any of the imported dashboards in Grafana.
Logs from containers:

$ kubectl logs -n monitoring pod/grafana-6c4b54bd5c-bpv5f -c grafana-sc-dashboard
<everything is OK here; my custom dashboards are added>
$ kubectl logs -n monitoring pod/grafana-6c4b54bd5c-bpv5f -c grafana
t=2018-10-09T13:39:54+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=0.0.0.0:3000 protocol=http subUrl= socket=
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:54+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=default error="UNIQUE constraint failed: dashboard.org_id, dashboard.folder_id, dashboard.title"
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:54+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=default error="UNIQUE constraint failed: dashboard.org_id, dashboard.folder_id, dashboard.title"
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=default error="UNIQUE constraint failed: dashboard.org_id, dashboard.folder_id, dashboard.title"
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=1
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=2
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=3
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=4
t=2018-10-09T13:39:55+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=default error="UNIQUE constraint failed: dashboard.org_id, dashboard.folder_id, dashboard.title"
t=2018-10-09T13:39:55+0000 lvl=info msg="Database table locked, sleeping then retrying" logger=sqlstore retry=0
t=2018-10-09T13:39:55+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=default error="UNIQUE constraint failed: dashboard.org_id, dashboard.folder_id, dashboard.title"
$ kubectl logs -n monitoring pod/grafana-6c4b54bd5c-bpv5f -c download-dashboards
<no logs here>
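In the config above, sidecar.dashboards.folder and the file provider's options.path point at the same directory, and the provider shares the name 'default' with the provider hardcoded into the chart's sidecar ConfigMap (quoted below), so each dashboard file is presumably provisioned twice; that would explain the "UNIQUE constraint failed" errors in the log. A minimal non-colliding layout might look like this (a sketch; the provider name 'downloaded' and the /tmp/dashboards folder are illustrative assumptions, not values taken from this issue):

sidecar:
  dashboards:
    enabled: true
    folder: /tmp/dashboards        # keep the sidecar's directory separate from the file provider's path

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'downloaded'         # any name other than 'default' avoids clashing with the sidecar's provider
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default

dashboards:
  default:
    kubernetes-pods-1:
      gnetId: 6336
      revision: 1
      datasource: Prometheus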
  • second question:
    Why does configmap-dashboard-provider.yaml contain hardcoded values?
{{- if .Values.sidecar.dashboards.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: {{ template "grafana.name" . }}
    chart: {{ template "grafana.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- with .Values.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
  name: {{ template "grafana.fullname" . }}-config-dashboards
data:
  provider.yaml: |-
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      options:
        path: {{ .Values.sidecar.dashboards.folder }}
{{- end}}

We have a similar issue as well; is there any news on this front?

A PR is prepared: https://github.com/helm/charts/pull/7998
As a workaround, changing the provider name in the dashboardProviders property worked for me.
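For example, a minimal sketch of that workaround applied to the values from the original report (only the provider name changes; 'imported' is an arbitrary choice):

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'imported'             # anything other than 'default', which the sidecar's hardcoded provider already uses
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default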
@sta-szek The hardcoded values will be replaced by the PR.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

still relevant, @Tim-Smyth any update?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

This is still an issue, and I was unable to make it work with both the sidecar and the dashboards: section enabled. To add a dashboard from Grafana Labs, I had to download the JSON, put it into a ConfigMap, and set the "datasource" values by hand.
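For reference, a sketch of that ConfigMap approach (assuming the chart's default sidecar label, grafana_dashboard; the ConfigMap name is a placeholder, and the inline JSON stands in for the dashboard JSON downloaded from grafana.com with its "datasource" values set by hand):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard
  labels:
    grafana_dashboard: "1"         # must match sidecar.dashboards.label (grafana_dashboard by default)
data:
  my-dashboard.json: |-
    {
      "title": "My dashboard",
      "panels": []
    }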
