Version of Helm and Kubernetes:
```
helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:04:08Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
Which chart: stable/grafana
What happened:
The datasource sidecar is meant to populate the /etc/grafana/provisioning/datasources/ folder.
It writes the correct datasources.yaml file into that folder, but because Grafana does not see the file before the main container boots, the datasources are never provisioned.
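For reference, a minimal sketch of what the provisioned file is expected to contain, in Grafana's datasource provisioning format (the datasource name and URL below are placeholders, not values from the chart):
```yaml
# /etc/grafana/provisioning/datasources/datasources.yaml
# Minimal sketch; name and url are illustrative placeholders.
apiVersion: 1
datasources:
  - name: Prometheus          # hypothetical datasource name
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc.cluster.local
    isDefault: true
```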
What you expected to happen:
Grafana should be able to provision the datasources defined in /etc/grafana/provisioning/datasources/ through the use of the sidecar.
How to reproduce it (as minimally and precisely as possible):
Use the datasource sidecar. It could be a timing issue, but it seems that this sidecar actually needs to be an init container.
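A rough sketch of the setup, assuming the stable/grafana sidecar values keep their usual names (sidecar.datasources.enabled and the grafana_datasource label; verify against the chart's values.yaml):
```yaml
# (1) Helm values for stable/grafana -- enable the datasource sidecar.
#     Key names are assumed from memory; check the chart's values.yaml.
sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
---
# (2) A ConfigMap created separately (e.g. kubectl apply) that the
#     sidecar should pick up via the label above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasource
  labels:
    grafana_datasource: "1"
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-server
```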
Anything else we need to know:
It seems that once https://github.com/grafana/grafana/issues/12878 is resolved, these containers will do their job correctly.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
this isn't stale.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I have the same issue!
Yep, it seems to be a race condition: the Grafana container starts before the datasource sidecar has been able to write the datasource config.
I guess an init container would fix it.
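A minimal pod-level sketch of that idea, assuming the kiwigrid/k8s-sidecar image used by the chart supports a one-shot METHOD=LIST mode so it can run as an init container (this is not the chart's actual template; the image tag and env vars should be verified against the sidecar's documentation):
```yaml
# Illustrative sketch only, not the stable/grafana template.
spec:
  initContainers:
    - name: grafana-sc-datasources
      image: kiwigrid/k8s-sidecar:0.0.16   # tag is an assumption
      env:
        - name: METHOD
          value: LIST                      # run once and exit (assumed), instead of watching
        - name: LABEL
          value: grafana_datasource
        - name: FOLDER
          value: /etc/grafana/provisioning/datasources
      volumeMounts:
        - name: sc-datasources
          mountPath: /etc/grafana/provisioning/datasources
  containers:
    - name: grafana
      image: grafana/grafana:5.2.4
      volumeMounts:
        - name: sc-datasources
          mountPath: /etc/grafana/provisioning/datasources
  volumes:
    - name: sc-datasources
      emptyDir: {}
```
Because init containers must finish before the main container starts, Grafana would only boot once the datasource files are already in place.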
Experienced this same problem as well. A quick workaround was to kill the Grafana pods and let them get restarted; after doing that, the sidecar was able to pick up the ConfigMap changes.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Just ran into this one myself, and it took me a while to find this issue. Is it worth updating the documentation with a warning for now, to save someone else from tearing their hair out?