What happened:
We've installed kind and want to set up local end-to-end testing with Prometheus, Grafana, and a controller we are working on.
Unfortunately, when installing Prometheus with Helm 3, the prometheus-server container fails to start.
We used the following command to get the logs from the prometheus-server container:
kubectl logs prometheus-server-58dbcfd88c-9ncjh -c prometheus-server
It appears to be failing to start due to a permission error:
level=info ts=2020-01-10T21:06:41.640Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.2, branch=HEAD, revision=d9613e5c466c6e9de548c4dae1b9aabf9aaf7c57)"
level=info ts=2020-01-10T21:06:41.640Z caller=main.go:331 build_context="(go=go1.13.5, user=root@688433cf4ff7, date=20200106-14:50:51)"
level=info ts=2020-01-10T21:06:41.640Z caller=main.go:332 host_details="(Linux 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 prometheus-server-58dbcfd88c-9ncjh (none))"
level=info ts=2020-01-10T21:06:41.640Z caller=main.go:333 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-01-10T21:06:41.641Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
level=error ts=2020-01-10T21:06:41.641Z caller=query_logger.go:85 component=activeQueryTracker msg="Error opening query log file" file=/data/queries.active err="open /data/queries.active: permission denied"
panic: Unable to create mmap-ed active query log
goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fffce4fe95f, 0x5, 0x14, 0x2c635a0, 0xc0006fb500, 0x2c635a0)
/app/promql/query_logger.go:115 +0x48c
main.main()
/app/cmd/prometheus/main.go:362 +0x5229
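For anyone else debugging a crash-looping container like this, the logs of the previous (crashed) run and the pod's event history can be pulled as follows; substitute the pod name from your own cluster:

```shell
# Logs from the last crashed run of the server container (a restarted
# container's current logs may be empty or mid-startup).
kubectl logs prometheus-server-58dbcfd88c-9ncjh -c prometheus-server --previous

# Pod events show the restart/backoff history and any volume mount issues.
kubectl describe pod prometheus-server-58dbcfd88c-9ncjh
```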
We found this discussion of the problem: https://github.com/prometheus/prometheus/issues/5976
However, the workarounds appear to require changing ownership of various files and users on the Docker host machine.
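The linked issue also discusses relaxing the container's security context instead of touching the host. A sketch of that approach for this chart is below; the `server.securityContext` values path is an assumption about the stable/prometheus chart, and running as root is only reasonable for throwaway local clusters:

```shell
# Hypothetical workaround: run prometheus-server as uid 0 so it can write to
# the hostPath-backed volume. The values keys assume the stable/prometheus
# chart's layout; check `helm show values stable/prometheus` first.
helm install prometheus stable/prometheus \
  --set server.securityContext.runAsUser=0 \
  --set server.securityContext.runAsNonRoot=false
```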
What you expected to happen:
I expect prometheus-server to start in the kind cluster when installed via Helm, without requiring configuration of the host machine's Docker engine/runtime.
How to reproduce it (as minimally and precisely as possible):
kind create cluster
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install prometheus stable/prometheus
After the containers come up, we run:
kubectl get pods -A
which yields the following:
NAMESPACE NAME READY STATUS RESTARTS AGE
default prometheus-alertmanager-5588ffbd-8tvrr 2/2 Running 0 4m3s
default prometheus-kube-state-metrics-55df7bc849-x9jzk 1/1 Running 0 4m3s
default prometheus-node-exporter-6c7lj 1/1 Running 0 4m3s
default prometheus-pushgateway-84745756cd-pfknh 1/1 Running 0 4m2s
default prometheus-server-58dbcfd88c-9ncjh 1/2 CrashLoopBackOff 5 4m3s
...
Note that the prometheus-server pod has only 1/2 containers ready. The prometheus-server-configmap-reload container is fine, but the server container itself is in a crash loop and refuses to start, so we are blocked from using the kind cluster for our testing until Prometheus works.
Environment:
On mobile atm, but this is almost definitely due to k8s.io/hostPath backing the volume, which doesn't support non-uid-0 writers.
The next kind release will fix this; the fix is already merged, but we haven't released it yet.
There are some related (closed) issues with more detail.
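To confirm this locally, you can look at the hostPath directory's ownership from inside the kind node container. This is a sketch assuming the default node container name `kind-control-plane`; the actual path must be read from the PV spec, so the final argument below is a placeholder:

```shell
# List each PV and the hostPath directory backing it (path varies per cluster).
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostPath.path}{"\n"}{end}'

# Check ownership inside the node container; a root-owned directory explains
# "permission denied" for the non-root prometheus user.
docker exec kind-control-plane ls -ld /path/from/the/pv/spec
```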
No worries! Thanks for the quick response. We are fine holding off until the next release. I'd be happy to test this out once the next release is out 🚀
Awesome! I've confirmed this works with helm prometheus in the above scenario. Thanks so much for the speedy release
Great!
You guys are the best, I was struggling with this also.
THANK YOU!