Is this a request for help?:
yes
Version of Helm and Kubernetes:
helm: 2.12
k8s: 1.10.11
Which chart:
stable/grafana
What happened:
I would like to add dashboards from configmaps to specific folders in grafana (the grafana UI; I don't really care about the file location inside the pod). Is this possible using the sidecars deployed with grafana? Looking through the provided values.yaml, I don't see an obvious solution.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
bump I'm trying to find the answer to this too.
bump, I also would like to know how to do this.
I did get this mostly working by forking the k8s-sidecar and adding the ability to specify the destination directory for each configmap in annotations. I then left dashboard management up to a wrapper chart that handles dashboard configmap creation and sets the appropriate k8s-sidecar annotations to match the directories expected by the various dashboard provisioner definitions with the correct grafana folder.
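A rough sketch of the annotation idea described above: if a dashboard ConfigMap carries a target-directory annotation, write its files there, otherwise fall back to the sidecar's default folder. Function names and the dict shape are illustrative, not the actual k8s-sidecar implementation.

```python
import os

# Annotation key used by k8s-sidecar >= 0.0.12 to override the output directory
ANNOTATION = "k8s-sidecar-target-directory"

def resolve_target_dir(configmap_metadata: dict, default_folder: str) -> str:
    """Pick the output directory for a ConfigMap's dashboard files."""
    annotations = configmap_metadata.get("annotations") or {}
    return annotations.get(ANNOTATION, default_folder)

def write_dashboards(configmap: dict, default_folder: str) -> list:
    """Write each data entry of the ConfigMap into its resolved directory."""
    target = resolve_target_dir(configmap.get("metadata", {}), default_folder)
    os.makedirs(target, exist_ok=True)
    written = []
    for filename, content in configmap.get("data", {}).items():
        path = os.path.join(target, filename)
        with open(path, "w") as f:
            f.write(content)
        written.append(path)
    return written
```

A wrapper chart can then create one ConfigMap per dashboard, with the annotation pointing at the directory a matching dashboard provider watches.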
@mmiller1 Your patch looks very useful -- IMO, it would be good to get it into upstream
I created a PR in the upstream project https://github.com/kiwigrid/k8s-sidecar/pull/15
So as I understand correctly, using k8s-sidecar >=0.0.12 gives me the option to use an annotation to have the sidecar put the dashboards into different directories.
But how do I configure the necessary dashboard providers to act on these directories? According to the comment in values.yaml, it is not possible to use the sidecar together with the dashboardProviders parameter.
Yeah, I just played around with this and it looks like the dashboardProviders parameter will need to be decoupled from the sidecar creation in order for the k8s-sidecar >=0.0.12 changes to be usable. I also noticed that the k8s-sidecar only has permission to create files within the directory specified by the FOLDER env var, which means that all of the k8s-sidecar-target-directory annotation values on dashboard ConfigMaps need to point inside the FOLDER directory.
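The constraint noted above (annotation targets must live under the sidecar's FOLDER directory) can be checked with a small path test. This is an illustrative helper, not part of the chart or the sidecar:

```python
import os

def is_under_folder(target: str, folder: str) -> bool:
    """True if `target` is `folder` itself or a subdirectory of it."""
    folder = os.path.abspath(folder)
    target = os.path.abspath(target)
    # commonpath collapses to `folder` only when target is inside it
    return os.path.commonpath([folder, target]) == folder
```

So with FOLDER=/tmp/dashboards, an annotation of /tmp/dashboards/a is valid, while /var/lib/grafana/dashboards would be rejected.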
@jwenz723 I tried this too, but strangely my dashboards are not put into folders in Grafana; sometimes they are, sometimes they aren't.
On the filesystem everything is correct, in the right folder.
Not sure how to solve that.
@shinji62 the problem is that the helm chart doesn't create the proper dashboard providers (explained about halfway down the page linked above).
@jwenz723 If you have any solution just make a PR :) I guess that will help many people.
~In my case~ It is because the sidecar mounts a single path for the dashboards folder and stores all the scraped dashboards under it. When the sidecar creates files using the dashboardProvider folder annotation, they are created in a subfolder inside the "default" folder, which is the mounted path. When grafana reads the configuration from the sidecar config https://github.com/helm/charts/blob/master/stable/grafana/templates/configmap-dashboard-provider.yaml, it also picks up the dashboards inside the subfolders (supposedly belonging to the dashboardProviders). Grafana then reads the files specified in the dashboardProviders config you set in values.yaml, so the dashboards under the subfolders actually get read again, a second time.
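To illustrate the double-read described above: a provider rooted at the sidecar mount path scans recursively, so it also picks up files that a more specific provider (rooted at a subdirectory) already owns. The paths and folder names below are made-up examples:

```python
import os
import tempfile

def provider_matches(provider_path: str, base: str) -> list:
    """Recursively collect dashboard json files under a provider's path."""
    found = []
    for root, _dirs, files in os.walk(os.path.join(base, provider_path)):
        for f in files:
            if f.endswith(".json"):
                found.append(os.path.join(root, f))
    return found

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "dashboards/team-a"))
with open(os.path.join(base, "dashboards/team-a/demo.json"), "w") as f:
    f.write("{}")

# "default" provider mounted at the sidecar folder recurses into team-a,
# while the per-folder provider watches only team-a
default_hits = provider_matches("dashboards", base)
team_a_hits = provider_matches("dashboards/team-a", base)
# the same file is claimed by both providers, so which one wins is a race
```

This is why moving the default provider's path out of the shared root (or disabling it) removes the conflict.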
I created PR https://github.com/helm/charts/pull/12927 to fix this issue. I am using it now from a hosted chart repo, and it's working as expected.
@richmondwang as someone else mentioned in your PR:
I suspect that this patch will only allow us to specify one global folder instead, like "scraped Dashboards". Is that correct?
Our team wanted to have multiple dashboard folders managed through the dashboard sidecar, so unfortunately your PR didn't help us out.
We were able to get around these issues by using the annotation to write each dashboard into /tmp/dashboards/{grafana_folder_name}, hardcoding the dashboardProviders config value, and creating one provider per subdirectory in /tmp/dashboards. That said, the solution is a bit hacky and sometimes the grafana UI will render the dashboards in different folders (probably because the grafana filesystem provider recurses into each subdirectory to look for dashboard json, so there is a race over which provider picks up each dashboard first).
@grantatspothero
Yes, you still need to use the dashboardProviders.
My patch changes the default folder for the global provider to another directory so that the race condition doesn't happen without extra configuration.
We also had the race condition before: every time we restarted grafana, the dashboards would fly around everywhere.
@richmondwang just tried it out, thanks for your work!
@grantatspothero could you provide your config? I cannot seem to find enough information on adding additional dashboards to grafana. I deployed grafana with the prometheus operator chart and I'm now wanting to add additional dashboards through values.yaml.
I attempted the following with no luck. The prometheus-grafana pod errors with no useful container logs.
To summarize, I simply want to configure a couple of different dashboards ;)
grafana:
  sidecar:
    dashboards:
      defaultFolderName: /var/lib/grafana/dashboards/default
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards/default
  dashboards:
    default:
      prometheus-stats:
        gnetId: 2
        revision: 2
        datasource: Prometheus
      ceph-cluster:
        gnetId: 2842
        revision: 2
      ceph-osd:
        gnetId: 5336
      ceph-pools:
        gnetId: 5342
        revision: 2
@retr0h
Refer to this above comment: https://github.com/helm/charts/issues/10183#issuecomment-485505797
If you want to put dashboards in different grafana folders you need to configure multiple dashboard providers in your values file (one provider per grafana folder):
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'A'
        orgId: 1
        folder: 'A'
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /tmp/dashboards/a
      - name: 'B'
        orgId: 1
        folder: 'B'
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /tmp/dashboards/b
Then you need to specify to the sidecar container that dashboards from specific configmaps should be written to different filesystem directories (which will then place the dashboards in the specified grafana folder as you configured above).
kind: ConfigMap
apiVersion: v1
metadata:
  name: "mydash-in-folder-a"
  labels:
    grafana_dashboard: "1"
  annotations:
    k8s-sidecar-target-directory: "/tmp/dashboards/a"
...
The trick is the annotation, which tells the sidecar to not place the dashboards in the default directory, but instead place them in a custom directory.
There's something fishy about this. I tried your setup, but the dashboard is being placed in the "General" folder, even though the sidecar reports this in its logs:
Configmap with label found
Found a folder override annotation, placing the configmap in: /tmp/dashboards/MyProject-DEV
File in configmap grafana-dashboard-demo.json ADDED
And if I exec inside the pod, I can see that the json file got deployed in the correct folder:
$ ls -R /tmp/dashboards/
/tmp/dashboards/:
MyProject-DEV
/tmp/dashboards/MyProject-DEV:
grafana-dashboard-demo.json
And in /etc/grafana I see the following configuration:
$ cat /etc/grafana/provisioning/dashboards/dashboardproviders.yaml
apiVersion: 1
providers:
- disableDeletion: false
  editable: true
  folder: MyProject-DEV
  name: MyProject-DEV
  options:
    path: /tmp/dashboards/MyProject-DEV
  orgId: 1
  type: file
- disableDeletion: false
  editable: true
  folder: MyProject-PPE
  name: MyProject-PPE
  options:
    path: /tmp/dashboards/MyProject-PPE
  orgId: 1
  type: file
$ cat /etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml
apiVersion: 1
providers:
- name: 'default'
  orgId: 1
  folder: ''
  type: file
  disableDeletion: false
  options:
    path: /tmp/dashboards
Here's what I see in UI:

Any ideas?
NAME            CHART VERSION   APP VERSION
stable/grafana  3.3.7           6.1.6
It's been a while since I've futzed with this chart, but IIRC your two dashboardproviders.yaml files are stepping on each other. The sc-dashboardproviders.yaml file is actually collecting json files from nested directories and placing them in the "General" folder; previously collected json files are then ignored by your other provider definitions. I solved this by explicitly creating a dashboard provider entry and directory for "General".
I think you are right, the two dashboardprovider definitions are stepping on each other, but creating an entry for General doesn't seem to help, I tried putting it in the top or bottom of the list, also tried to put a "root" definition like below, also didn't help:
- name: 'default'
  orgId: 1
  folder: ''
  type: file
  disableDeletion: false
  options:
    path: /tmp/dashboards
The only workaround that actually works is commenting out mounting sc-dashboardproviders.yaml from deployment.yaml entirely, then everything consistently works.
I wonder, does it even make sense for this chart to mount sc-dashboardproviders.yaml if custom dashboardProviders are configured?
Ah yeah, you are correct, looking back this is what I actually ended up doing myself.
@minhdanh, @zanhsieh, @maorfr, the PR https://github.com/helm/charts/pull/15382 (specifically this commit https://github.com/helm/charts/pull/15382/commits/46e541bda2220ab5b4cdb49dc20c5a6130ee8f94) has reverted my earlier change in https://github.com/helm/charts/pull/13761 and broke custom dashboard folders again, could someone take a look?
@maximbaz Hi, the commit https://github.com/helm/charts/commit/46e541bda2220ab5b4cdb49dc20c5a6130ee8f94 was made because the checks removed sc-dashboardproviders.yaml, so the custom dashboards could not be loaded in my case.
- - name: sc-dashboard-provider
- mountPath: "/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml"
- subPath: provider.yaml
But it looks like that was because I had a default dashboardProvider:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/default
So this looks like my mistake. I'll fix this.
Here's the fix: https://github.com/helm/charts/pull/15550
@minhdanh your changes aren't present in the latest version as of today, 3.8.1 :-(
@infa-ddeore My changes were merged to master. I've just checked again and it looks like the changes have been removed. Please check the git log to see how that happened.
cc @irasnyd, if this is a regression again, I think it is caused by https://github.com/helm/charts/pull/15770 this time.
Yep, we had to rollback to 3.7.2 as the changes in #15770 now provision the dashboards in one big list instead of divided into folders.
@dashford is there a way now to load dashboards from configmap to different folders than General?
Yes, there is, with version 3.7.2 of the helm chart (and previous versions, but I can't remember when it all came together). I followed the comments in this PR and a few others, but essentially it works with the following in your values.yaml file (or a similar provisioning method):
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'Data Stores'
        orgId: 1
        folder: 'Data Stores'
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/data-stores
      - name: 'Apps'
        orgId: 1
        folder: 'Apps'
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/apps
[ ... ]
sidecar:
  dashboards:
    enabled: true
    label: grafana_dashboards
    folder: /var/lib/grafana/dashboards
and then in each configmap you have for each dashboard, you need the k8s-sidecar-target-directory annotation, e.g.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-data-stores-etcd
  labels:
    grafana_dashboards: "true"
  annotations:
    k8s-sidecar-target-directory: "/var/lib/grafana/dashboards/data-stores"
data:
  etcd.json: |-
    [ ... ]
I meant: will this work in the latest helm chart version 3.8.3, since the change https://github.com/helm/charts/pull/15550/files was rolled back?
@infa-ddeore the breaking change was in a later PR https://github.com/helm/charts/pull/15770 so AFAIK dashboard provisioning in multiple folders is still broken as of 3.7.3 and above.
I have been using the prometheus-operator 6.2.1 chart (which itself uses the grafana 3.7.3 chart) with the following grafana configuration in values.yaml. It provisions and loads the dashboards which are part of prometheus-operator, as well as downloading dashboards from grafana.net at the same time.
This behavior was fixed by PR #15770 (it did not work before the PR was merged).
grafana:
  # Enable automatic dashboard provisioning
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'provisioned'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: true
          editable: false
          options:
            path: /var/lib/grafana/dashboards/provisioned
  # Automatically install these dashboards from the official Grafana dashboard hub
  dashboards:
    provisioned:
      nginx-ingress:
        gnetId: 9614
        revision: 1
        datasource: Prometheus
      elasticsearch:
        gnetId: 2322
        revision: 4
        datasource: Prometheus
      postgresql:
        gnetId: 9628
        revision: 2
        datasource: Prometheus
      rabbitmq-monitoring:
        gnetId: 4279
        revision: 3
        datasource: Prometheus
      rabbitmq-metrics:
        gnetId: 2121
        revision: 1
        datasource: Prometheus
      aws-s3-buckets:
        gnetId: 575
        revision: 5
Hey @irasnyd, were you able to get it working using the sidecar component too (I noticed you just referenced using dashboards)? That's the part that seems broken now.
Yes, I have used grafana 3.7.3 (as part of prometheus-operator 6.2.1) successfully using both the dashboards and sidecar simultaneously. I created PR #15770 to make them work together. I have attached my full values.yaml for stable/prometheus-operator 6.2.1, showing that this works with both dashboards and sidecar simultaneously.
# NOTE: this values.yaml is only known to work with this specific chart:
#   stable/prometheus-operator 6.2.1
#
# helm upgrade --install prometheus-operator \
#   stable/prometheus-operator --version 6.2.1 \
#   --namespace=kube-system -f ~/path/to/prometheus-operator-values.yaml

prometheusOperator:
  # Work around Rancher 2.2 vs. Helm CustomResourceDefinition creation race condition
  createCustomResource: false
  # CPU/memory resource requests/limits
  resources:
    requests:
      cpu: 10m
      memory: 32Mi
    limits:
      cpu: 1000m
      memory: 128Mi

alertmanager:
  alertmanagerSpec:
    # CPU/memory resource requests/limits
    resources:
      requests:
        cpu: 10m
        memory: 32Mi
      limits:
        cpu: 1000m
        memory: 128Mi

# kube-state-metrics configuration
kube-state-metrics:
  # CPU/memory resource requests/limits
  resources:
    requests:
      cpu: 10m
      memory: 128Mi
    limits:
      cpu: 1000m
      memory: 256Mi

# prometheus-node-exporter configuration
prometheus-node-exporter:
  # CPU/memory resource requests/limits
  resources:
    requests:
      cpu: 10m
      memory: 32Mi
    limits:
      cpu: 100m
      memory: 64Mi

# Prometheus configuration
prometheus:
  prometheusSpec:
    # Configure retention period
    retention: "30d"
    # Configure disk-based storage (~1 GiB/day)
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
            - "ReadWriteOnce"
          resources:
            requests:
              storage: "50Gi"
    # CPU/memory resource requests/limits
    resources:
      requests:
        cpu: 1000m
        memory: 2Gi
      limits:
        cpu: 4000m
        memory: 4Gi
    # Additional configuration
    additionalScrapeConfigs:
      # Scrape metrics from any exporters in the Kubernetes cluster
      # within the listed namespaces
      - job_name: 'prometheus-exporter-endpoints'
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - dev
                - prod
                - kube-system
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2

# Grafana configuration
grafana:
  # Multiple replicas for high availability
  replicas: 2
  # Administrator password
  adminPassword: "xxx"
  # CPU/memory resource requests/limits
  resources:
    requests:
      cpu: 500m
      memory: 384Mi
    limits:
      cpu: 2000m
      memory: 512Mi
  # Main Grafana configuration file
  grafana.ini:
    grafana_net:
      url: https://grafana.net
    # Database credentials
    database:
      type: "postgres"
      host: "xxx"
      name: "grafana"
      user: "grafana"
      password: "xxx"
      max_idle_conn: 10
      max_open_conn: 20
      conn_max_lifetime: 120
    # Enable LDAP Authentication and Authorization
    auth.ldap:
      enabled: true
      allow_sign_up: true
      config_file: /etc/grafana/ldap.toml
  # LDAP Authentication and Authorization configuration
  ldap:
    config: |-
      verbose_logging = true
      [[servers]]
      host = "xxx"
      port = 636
      use_ssl = true
      start_tls = false
      ssl_skip_verify = true
      bind_dn = 'xxx'
      bind_password = 'xxx'
      search_filter = "(|(sAMAccountName=%s)(mail=%s))"
      search_base_dns = ["DC=EXAMPLE,DC=COM"]
      [servers.attributes]
      name = "givenName"
      surname = "sn"
      username = "sAMAccountName"
      member_of = "memberOf"
      email = "mail"
      [[servers.group_mappings]]
      group_dn = "*"
      org_role = "Editor"
  # Add additional Grafana datasources to the configuration automatically.
  # Keep these alphabetized by name!
  additionalDataSources:
    - name: archive-s3
      type: elasticsearch
      access: proxy
      database: "archive-s3-*"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "@timestamp"
        esVersion: 56
    - name: BANZAI-QC
      type: elasticsearch
      access: proxy
      database: "banzai_qc"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "@timestamp"
        esVersion: 56
    - name: fitsheaders
      type: elasticsearch
      access: proxy
      database: "fitsheaders"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "DATE-OBS"
        esVersion: 56
    - name: "Cloudwatch"
      type: cloudwatch
      jsonData:
        authType: keys
        defaultRegion: us-west-2
      secureJsonData:
        accessKey: "xxx"
        secretKey: "xxx"
      isDefault: false
    - name: live-telemetry
      type: elasticsearch
      access: proxy
      database: "live-telemetry"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "@timestamp"
        esVersion: 56
    - name: logstash
      type: elasticsearch
      access: proxy
      database: "logstash-*"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "@timestamp"
        esVersion: 56
    - name: mysql-telemetry
      type: elasticsearch
      access: proxy
      database: "mysql-telemetry-*"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "timestampmeasured"
        esVersion: 56
    - name: nagios
      type: sni-pnp-datasource
      access: proxy
      url: http://nagios.example.com/pnp4nagios/
      basicAuth: true
      withCredentials: true
      basicAuthUser: "xxx"
      secureJsonData:
        basicAuthPassword: "xxx"
    - name: optsdb
      type: opentsdb
      url: http://opentsdb.example.com:4242
      access: proxy
      isDefault: false
    - name: observation
      type: elasticsearch
      access: proxy
      database: "observationv3"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "configuration_start"
        esVersion: 56
    - name: Sinistro
      type: elasticsearch
      access: proxy
      database: "sinistro"
      url: http://elasticsearch.example.com:9200
      jsonData:
        timeField: "time"
        esVersion: 56
  # Automatically provision all of the notifiers
  notifiers:
    notifiers.yaml:
      notifiers:
        - name: Foo
          type: slack
          uid: 1
          isDefault: true
          settings:
            url: https://hooks.slack.com/services/xxx
            uploadImage: true
        - name: Bar
          type: slack
          uid: 4
          settings:
            url: https://hooks.slack.com/services/xxx
            uploadImage: true
  # Automatically install these Grafana plugins
  plugins:
    - blackmirror1-singlestat-math-panel
    - flant-statusmap-panel
    - mtanda-histogram-panel
    - natel-discrete-panel
    - natel-plotly-panel
    - sni-pnp-datasource
  # Disable Grafana persistence -- the database is used for all persistence needs
  persistence:
    enabled: false
  # We are not using any persistent disks, so we can use a RollingUpdate strategy
  deploymentStrategy:
    type: RollingUpdate
  # Enable Ingress
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx-ingress-private
    hosts:
      - "grafana.example.com"
  initChownData:
    # CPU/memory resource requests/limits
    resources:
      requests:
        cpu: 10m
        memory: 16Mi
      limits:
        cpu: 1000m
        memory: 128Mi
  sidecar:
    # CPU/memory resource requests/limits
    resources:
      requests:
        cpu: 10m
        memory: 128Mi
      limits:
        cpu: 1000m
        memory: 256Mi
    # Do not allow users to delete provisioned dashboards
    dashboards:
      provider:
        disableDelete: true
  # Enable automatic dashboard provisioning
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'provisioned'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: true
          editable: false
          options:
            path: /var/lib/grafana/dashboards/provisioned
  # Automatically install these dashboards from the official Grafana dashboard hub
  dashboards:
    provisioned:
      nginx-ingress:
        gnetId: 9614
        revision: 1
        datasource: Prometheus
      elasticsearch:
        gnetId: 2322
        revision: 4
        datasource: Prometheus
      postgresql:
        gnetId: 9628
        revision: 2
        datasource: Prometheus
      rabbitmq-monitoring:
        gnetId: 4279
        revision: 3
        datasource: Prometheus
      rabbitmq-metrics:
        gnetId: 2121
        revision: 1
        datasource: Prometheus
      aws-s3-buckets:
        gnetId: 575
        revision: 5
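The last relabel rule in the scrape config above can be confusing: Prometheus joins the source_labels with ";" and the replacement rewrites __address__ to use the port from the pod annotation. A plain-Python rendition of that rule (addresses are made-up examples):

```python
import re

# Same regex as the relabel_configs entry: host, optional port, ";", annotation port
ADDRESS_RELABEL = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def relabel_address(address: str, annotation_port: str) -> str:
    """Mimic: source_labels [__address__, ...io_port], replacement $1:$2."""
    joined = f"{address};{annotation_port}"
    m = ADDRESS_RELABEL.fullmatch(joined)
    return f"{m.group(1)}:{m.group(2)}" if m else address
```

So a pod discovered at 10.0.0.5:8080 with `prometheus.io/port: "9100"` is scraped at 10.0.0.5:9100.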
For adding the dashboard providers, I was wondering if it would be possible to add them via configmaps and whether that would be doable without having to restart grafana everytime.
Hello everybody.
I have read through this issue and was still experiencing the problem where dashboards are placed in their correct folders but also show up in the General folder.
To fix this, make sure that sidecar.dashboards.SCProvider is set to false in the sidecar container config :-)
...
# Sidecar configuration
sidecar:
  image:
    repository: kiwigrid/k8s-sidecar
    tag: 0.1.151
  imagePullPolicy: IfNotPresent
  skipTlsVerify: true
  dashboards:
    enabled: true
    # Set SCProvider to false to solve the General folder issue.
    SCProvider: false
...
Same here. With SCProvider: false, the annotations set (k8s-sidecar-target-directory: "/tmp/dashboards/a"), and dashboardProviders listed, I could implement the use case successfully.
Note that you do indeed need to put all dashboards into subdirectories in the docker volume, even the ones that are supposed to be placed in the root General folder. My dashboardProviders looks as follows:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'general'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /tmp/dashboards/general
      {{- range $p := .Values.monitoring.grafana.dashboardProviders }}
      - name: '{{ $p }}'
        orgId: 1
        folder: '{{ $p }}'
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /tmp/dashboards/{{ $p }}
      {{- end }}
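The Helm `range` above just stamps out one provider entry per folder name: a fixed 'general' provider for the root folder, plus one per named folder. A sketch of the same expansion in plain Python (folder names are illustrative):

```python
def make_provider(name: str, folder: str, path: str) -> dict:
    """One dashboard provider entry, mirroring the fields in the template."""
    return {
        "name": name,
        "orgId": 1,
        "folder": folder,
        "type": "file",
        "disableDeletion": True,
        "editable": False,
        "options": {"path": path},
    }

def build_providers(folders: list) -> list:
    # 'general' provider for the root folder, then one provider per named folder
    providers = [make_provider("general", "", "/tmp/dashboards/general")]
    for p in folders:
        providers.append(make_provider(p, p, f"/tmp/dashboards/{p}"))
    return providers
```

Each sidecar-managed ConfigMap then just needs its k8s-sidecar-target-directory annotation pointed at the matching /tmp/dashboards/<folder> path.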
I've got a similar config here, but I keep the sc-dashboardproviders.yaml file, which is complemented by dashboardproviders.yaml. I set the k8s-sidecar-target-directory annotations according to the folders defined below, and instead of re-defining the general folder for all dashboards imported by the sidecar, I specify a custom folder to put them into:
grafana:
  sidecar:
    dashboards:
      defaultFolderName: general
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: custom
          type: file
          folder: Custom
          disableDeletion: true
          editable: true
          options:
            path: /tmp/dashboards/custom
A tiny bit more concise 😇