Prometheus-operator: Permission denied using volumeClaimTemplate w/ automatically provisioned storage

Created on 4 Aug 2017  ·  43 Comments  ·  Source: prometheus-operator/prometheus-operator

What did you do?

I ran the latest versions of the Prometheus Operator (v0.11.0 and v0.11.1) configured to use the new Prometheus v2.0.0-beta.0 version with persistent storage on the Prometheus pods using the following storage config:

...
  storage:
    volumeClaimTemplate:
      metadata:
        annotations:
          volume.beta.kubernetes.io/storage-class: ssd
      spec:
        resources:
          requests:
            storage: 10Gi
...

Note: ssd is a StorageClass for AWS EBS gp2 volumes.
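
For reference, the same request can be written without the beta annotation: since Kubernetes 1.6, the PVC spec has a storageClassName field that supersedes it. A minimal equivalent sketch:

...
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: ssd  # spec-level field replacing the beta annotation
        resources:
          requests:
            storage: 10Gi
...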

What did you expect to see?

Pods on the Prometheus StatefulSet launching correctly.

What did you see instead? Under which circumstances?

The prometheus-k8s-0 pod fails to start due to a permissions issue on the persistent volume and ends up in a CrashLoopBackOff state. Inspecting the node reveals that the mount point of the persistent volume created by the Prometheus Operator is owned by root, which is not the case for mount points of persistent volumes belonging to a regular StatefulSet using the same StorageClass.

The pods launch correctly when the volumeClaimTemplate configuration is omitted.
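
A quick way to confirm the ownership mismatch from the node (a sketch; the exact path depends on the kubelet root directory and volume plugin, shown here with the typical defaults for an in-tree EBS volume, with <pod-uid> and <pv-name> as placeholders):

# Run on the node hosting prometheus-k8s-0
ls -ld /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~aws-ebs/<pv-name>
# drwxr-xr-x 3 root root 4096 Aug  4 16:56 ...   <- owned by root, while Prometheus runs as a non-root user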

This issue seems similar to #518, although in this case storage is being provisioned automatically.

Environment

  • Kubernetes version information:

Tested on both 1.6.2 and 1.6.4.

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:

Custom terraform deploy on AWS using CoreOS 1409.7.0

  • Manifests:
see contrib/kube-prometheus/manifests
  • Prometheus Operator Logs:
time="2017-08-04T16:56:37Z" level=info msg="Starting prometheus (version=2.0.0-beta.0, branch=master, revision=2b5d9159537cbd123219296121e05244e26c0940)" source="main.go:202" 
time="2017-08-04T16:56:37Z" level=info msg="Build context (go=go1.8.3, user=root@fc24486243df, date=20170712-12:21:13)" source="main.go:203" 
time="2017-08-04T16:56:37Z" level=info msg="Host details (Linux 4.11.11-coreos #1 SMP Tue Jul 18 23:06:59 UTC 2017 x86_64 prometheus-k8s-0 (none))" source="main.go:204" 
time="2017-08-04T16:56:37Z" level=info msg="Starting tsdb" source="main.go:216" 
time="2017-08-04T16:56:37Z" level=error msg="Opening storage failed: open DB in /var/prometheus/data: open /var/prometheus/data/969552713: permission denied" source="main.go:219" 

All 43 comments

This might have to do with the Docker image being built with the user nobody. We should address that anyway, but I think this can be fixed by setting the correct securityContext:

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod

To try this out, you could take a copy of the generated StatefulSet and set the securityContext in the PodTemplate. If I understand the documentation correctly, we should be able to get it working by setting fsGroup, runAsUser and runAsNonRoot. I don't have a cluster at hand with automatic PV provisioning, but I'd first try out these settings:

fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
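
A minimal sketch of where these fields sit in the StatefulSet's pod template (field names are from the Kubernetes PodSecurityContext API; the values are just the suggestions above):

spec:
  template:
    spec:
      securityContext:
        fsGroup: 2000       # kubelet makes the volume group-writable for this GID
        runAsUser: 1000     # run the container process as this non-root UID
        runAsNonRoot: true  # refuse to start if the process would run as root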

Is this similar to a hostPath: volume? If yes, fsGroup doesn't work for hostPath IIRC.

Ref: https://github.com/kubernetes/kubernetes/pull/39438

I would expect it to work on external volumes @Gouthamve. Host volumes are somewhat special as they require more sensitive treatment.

Looks like setting the securityContext on the StatefulSet as suggested fixes the permissions issue - thanks @brancz!

Great! Thanks for sharing @Capitrium! Would you like to give it a go and implement it in the Prometheus Operator?

Sure, I'll give it a shot!

In case anyone is stuck with the same issue: setting the securityContext solved it for me. The Helm chart for Prometheus 2.0 in the Kubernetes charts repo, which had this issue, has been fixed as of https://github.com/kubernetes/charts/pull/2767.

I'm still having this issue with the latest prometheus-operator and prometheus:v2.4.3...
What's going on?

We released a version just today that removes the automatic setting of the security context; can you try v0.26.0?

err="opening storage failed: open /prometheus/wal/00002603: permission denied"

securityContext is still empty:

"securityContext": {},

Here is my Prometheus manifest:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
    test: test
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    beta.kubernetes.io/os: linux
  replicas: 1
  resources:
    requests:
      cpu: 1
      memory: 400Mi
  retention: 45d
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 200Gi
        storageClassName: ssd
  version: v2.4.3

I just added this to the spec:

  securitycontext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000

I'm testing it right now.

Adding this securityContext worked for me. See https://github.com/coreos/prometheus-operator/pull/2109#issuecomment-443684143

Seems to be working, but I can't get the Jsonnet files to add the securityContext when building the manifests.
Here is the relevant part:

local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + (import 'etcd-mixin/mixin.libsonnet') + {
  _config+:: {
    namespace: 'monitoring',
    prometheus+:: {
      prometheus+: {
        name: 'k8s',
        spec+: {
          retention: "45d",
          storage: {
            volumeClaimTemplate:
              pvc.new() +
              pvc.mixin.spec.withAccessModes('ReadWriteOnce') +
              pvc.mixin.spec.resources.withRequests({ storage: '200Gi' }) +
              pvc.mixin.spec.withStorageClassName('ssd'),
          },
          securityContext: {
            fsGroup: 2000,
            runAsNonRoot: true,
            runAsUser: 1000,
          },
        },
...
      },
    },

That looks like it should do the trick. Is there anything in “...” that might be influencing it?

Actually, I see nothing in the Jsonnet code that would add the SecurityContext.
I'm looking at https://github.com/coreos/prometheus-operator/blob/v0.26.0/contrib/kube-prometheus/jsonnet/kube-prometheus/prometheus/prometheus.libsonnet#L146

If I'm right, I could create a PR for that... IF it is to be merged quickly

Ah I see now why, you have it in the wrong object. The first “prometheus+::” should be a sibling to “_config”.

Ah, you're right, problem solved! 👍
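
For anyone hitting the same thing later, a minimal sketch of the corrected nesting (same imports as the snippet above; the storage block is omitted for brevity and stays inside spec+ exactly as before):

local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +
  (import 'etcd-mixin/mixin.libsonnet') + {
    _config+:: {
      namespace: 'monitoring',
    },
    // prometheus+:: is a sibling of _config, not nested inside it
    prometheus+:: {
      prometheus+: {
        spec+: {
          retention: '45d',
          securityContext: {
            fsGroup: 2000,
            runAsNonRoot: true,
            runAsUser: 1000,
          },
        },
      },
    },
  };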

It's still failing for me.
Here is my manifest:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus-operator-prometheus
  labels:
    app: prometheus-operator-prometheus
    release: "prometheus-operator"
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
      - namespace: monitoring
        name: prometheus-operator-alertmanager
        port: web
        pathPrefix: "/"
  baseImage: quay.io/prometheus/prometheus
  version: v2.4.3
  externalUrl: "http://prometheus.domain.com"
  paused: false
  replicas: 2
  logLevel: info
  listenLocal: false
  retention: "30d"
  routePrefix: "/"
  serviceAccountName: prometheus-operator-prometheus
  serviceMonitorSelector:
    matchLabels:
      release: "prometheus-operator"
  serviceMonitorNamespaceSelector:
    matchNames:
      - "monitoring"
  ruleSelector:
    matchLabels:
      app: prometheus-operator
      release: "prometheus-operator"
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: gp2
        resources:
          requests:
            storage: 40Gi
  resources:
    requests:
      memory: 400Mi
  securitycontext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000

The error log:

level=info ts=2018-12-27T13:40:33.309636108Z caller=main.go:238 msg="Starting Prometheus" version="(version=2.4.3, branch=HEAD, revision=167a4b4e73a8eca8df648d2d2043e21bdb9a7449)"
level=info ts=2018-12-27T13:40:33.309693948Z caller=main.go:239 build_context="(go=go1.11.1, user=root@1e42b46043e9, date=20181004-08:42:02)"
level=info ts=2018-12-27T13:40:33.309723026Z caller=main.go:240 host_details="(Linux 4.14.77-81.59.amzn2.x86_64 #1 SMP Mon Nov 12 21:32:48 UTC 2018 x86_64 prometheus-prometheus-operator-prometheus-0 (none))"
level=info ts=2018-12-27T13:40:33.309826893Z caller=main.go:241 fd_limits="(soft=65536, hard=65536)"
level=info ts=2018-12-27T13:40:33.309853161Z caller=main.go:242 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2018-12-27T13:40:33.311730201Z caller=main.go:554 msg="Starting TSDB ..."
level=info ts=2018-12-27T13:40:33.312205542Z caller=web.go:397 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-12-27T13:40:33.313286326Z caller=main.go:423 msg="Stopping scrape discovery manager..."
level=info ts=2018-12-27T13:40:33.313317108Z caller=main.go:437 msg="Stopping notify discovery manager..."
level=info ts=2018-12-27T13:40:33.313332671Z caller=main.go:459 msg="Stopping scrape manager..."
level=info ts=2018-12-27T13:40:33.313345047Z caller=main.go:433 msg="Notify discovery manager stopped"
level=info ts=2018-12-27T13:40:33.313467778Z caller=main.go:419 msg="Scrape discovery manager stopped"
level=info ts=2018-12-27T13:40:33.313521848Z caller=manager.go:638 component="rule manager" msg="Stopping rule manager..."
level=info ts=2018-12-27T13:40:33.313545646Z caller=manager.go:644 component="rule manager" msg="Rule manager stopped"
level=info ts=2018-12-27T13:40:33.313561252Z caller=notifier.go:512 component=notifier msg="Stopping notification manager..."
level=info ts=2018-12-27T13:40:33.313581019Z caller=main.go:608 msg="Notifier manager stopped"
level=info ts=2018-12-27T13:40:33.313619572Z caller=main.go:453 msg="Scrape manager stopped"
level=error ts=2018-12-27T13:40:33.31390898Z caller=main.go:617 err="opening storage failed: create dir: mkdir /prometheus/wal: permission denied"

@ArjonBu do you have the latest build of the operator?
Are you sure an older volume with some data is not being re-used with bad permissions?

Also, I'm using the prometheus image v2.5.0.

@prune998 The operator version is the latest one. Also, the volume is created by the operator itself on AWS, so it's new.
Tested with Prometheus 2.5.0 and 2.6.0; still the same.

I think this has to be a bug, because it works for Alertmanager but not for Prometheus.

@brancz This problem is still happening for me with the latest version of Prometheus Operator.

Prometheus logs:

level=info ts=2019-01-03T13:36:02.108044121Z caller=main.go:244 msg="Starting Prometheus" version="(version=2.5.0, branch=HEAD, revision=67dc912ac8b24f94a1fc478f352d25179c94ab9b)"
level=info ts=2019-01-03T13:36:02.108112631Z caller=main.go:245 build_context="(go=go1.11.1, user=root@578ab108d0b9, date=20181106-11:40:44)"
level=info ts=2019-01-03T13:36:02.108145683Z caller=main.go:246 host_details="(Linux 4.14.77-81.59.amzn2.x86_64 #1 SMP Mon Nov 12 21:32:48 UTC 2018 x86_64 prometheus-prometheus-operator-prometheus-0 (none))"
level=info ts=2019-01-03T13:36:02.108169375Z caller=main.go:247 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-01-03T13:36:02.108193637Z caller=main.go:248 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-01-03T13:36:02.109850841Z caller=main.go:562 msg="Starting TSDB ..."
level=info ts=2019-01-03T13:36:02.110152994Z caller=main.go:431 msg="Stopping scrape discovery manager..."
level=info ts=2019-01-03T13:36:02.110177237Z caller=main.go:445 msg="Stopping notify discovery manager..."
level=info ts=2019-01-03T13:36:02.110189402Z caller=main.go:467 msg="Stopping scrape manager..."
level=info ts=2019-01-03T13:36:02.110206972Z caller=main.go:441 msg="Notify discovery manager stopped"
level=info ts=2019-01-03T13:36:02.110241883Z caller=web.go:399 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-01-03T13:36:02.124061505Z caller=main.go:427 msg="Scrape discovery manager stopped"
level=info ts=2019-01-03T13:36:02.124473697Z caller=manager.go:657 component="rule manager" msg="Stopping rule manager..."
level=info ts=2019-01-03T13:36:02.124495747Z caller=manager.go:663 component="rule manager" msg="Rule manager stopped"
level=info ts=2019-01-03T13:36:02.124509975Z caller=notifier.go:512 component=notifier msg="Stopping notification manager..."
level=info ts=2019-01-03T13:36:02.124526386Z caller=main.go:616 msg="Notifier manager stopped"
level=info ts=2019-01-03T13:36:02.124547124Z caller=main.go:461 msg="Scrape manager stopped"
level=error ts=2019-01-03T13:36:02.124622972Z caller=main.go:625 err="opening storage failed: create dir: mkdir /prometheus/wal: permission denied"

Prometheus manifest:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus-operator-prometheus
  labels:
    app: prometheus-operator-prometheus
    release: "prometheus-operator"
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
      - namespace: monitoring
        name: prometheus-operator-alertmanager
        port: web
        pathPrefix: "/"
  baseImage: quay.io/prometheus/prometheus
  version: v2.5.0
  externalUrl: "http://prometheus.${domain}/"
  paused: false
  replicas: 2
  logLevel: info
  listenLocal: false
  retention: "30d"
  routePrefix: "/"
  serviceAccountName: prometheus-operator-prometheus
  serviceMonitorSelector:
    matchLabels:
      release: "prometheus-operator"
  serviceMonitorNamespaceSelector:
    matchNames:
      - "monitoring"
  ruleSelector:
    matchLabels:
      app: prometheus-operator
      release: "prometheus-operator"
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: gp2
        resources:
          requests:
            storage: 40Gi
  resources:
    requests:
      memory: 400Mi
  securitycontext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000

Is there a way I can help troubleshoot this? FYI, storage provisioning is working fine for Alertmanager, but as I said, it's not working for Prometheus.

I'm using a different user id in the security context and it "works on my cluster". Maybe this will help:

      securityContext:
        runAsNonRoot: true
        runAsUser: 65534

Are you using this user ID for Prometheus only, or for both Prometheus and Alertmanager?

@trevorriles I just tested it and I have the same problem.

This used to be set by the Prometheus Operator itself for both Alertmanager and Prometheus; it was removed in November.
We currently set this securityContext in the config for both Prometheus and Alertmanager.
If you have problems, please check that the StatefulSet has the securityContext and that the user id matches the permissions on your volume's prometheus/wal directory.
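
One way to check this (a sketch; it assumes the default kube-prometheus object names, a StatefulSet called prometheus-k8s in the monitoring namespace, so adjust for your release):

kubectl -n monitoring get statefulset prometheus-k8s -o jsonpath='{.spec.template.spec.securityContext}'
# Should print the fsGroup/runAsUser settings; an empty map means nothing was applied.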

The security context is empty in the Prometheus StatefulSet. Isn't it supposed to be populated from the Prometheus manifest file?

I should note that it works fine for Alertmanager (its StatefulSet has the security context), but this is not the case for Prometheus. Is this a bug?

Ah my bad, I didn't see that update in November. I was setting the context on the Prometheus Operator itself. Thanks for clarifying @metalmatze

I have the same securityContext as you on my prometheus resource.

@metalmatze Manually adding the securityContext to the StatefulSet makes it work, but isn't this supposed to be added by the operator? Am I doing something wrong, or could this be a bug?

As I wrote above, the Operator doesn't set this by default anymore as of November. See the link in my comment above for more details on the original PR. 😉😊

@metalmatze What I meant is that it's supposed to be added if the Prometheus manifest file has it. See my manifest below:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus-operator-prometheus
  labels:
    app: prometheus-operator-prometheus
    release: "prometheus-operator"
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
      - namespace: monitoring
        name: prometheus-operator-alertmanager
        port: web
        pathPrefix: "/"
  baseImage: quay.io/prometheus/prometheus
  version: v2.5.0
  externalUrl: "http://prometheus.${domain}/"
  paused: false
  replicas: 2
  logLevel: info
  listenLocal: false
  retention: "30d"
  routePrefix: "/"
  serviceAccountName: prometheus-operator-prometheus
  serviceMonitorSelector:
    matchLabels:
      release: "prometheus-operator"
  serviceMonitorNamespaceSelector:
    matchNames:
      - "monitoring"
  ruleSelector:
    matchLabels:
      app: prometheus-operator
      release: "prometheus-operator"
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: gp2
        resources:
          requests:
            storage: 40Gi
  resources:
    requests:
      memory: 400Mi
  securitycontext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000

This manifest file is supposed to create the StatefulSet for Prometheus, right? The StatefulSet should also have the securityContext, but in my case it doesn't.

Interesting, thanks for bringing that up. I'll have another look.

Should I create another issue?

Yes, that's probably best. Thanks

I just diffed your yaml against the one used by kube-prometheus:

- securitycontext:
+ securityContext:

securityContext needs a capital C; then it works.

That's a silly mistake. Thank you for figuring it out.

No worries. I was just wondering why it worked on my machine and not yours.
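
Worth noting why there was no error: at the time, unknown fields on custom resources were generally stored as-is rather than rejected, so the typo passes validation silently. One way to catch this kind of mistake is to read back what the API server stored (a sketch, using the resource name from the manifests above):

kubectl -n monitoring get prometheus prometheus-operator-prometheus -o yaml | grep -i -A 4 securitycontext
# The misspelled key shows up in the stored object, while the StatefulSet the
# operator generates still has an empty securityContext.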

@metalmatze

$ docker logs -f k8s_prometheus_prometheus-prometheus-operator-prometheus-0_prometheus-operator_0f0e09e8-13ed-11e9-9e4f-faf206331800_4
level=info ts=2019-01-09T09:02:22.399158895Z caller=main.go:244 msg="Starting Prometheus" version="(version=2.5.0, branch=HEAD, revision=67dc912ac8b24f94a1fc478f352d25179c94ab9b)"
level=info ts=2019-01-09T09:02:22.399286333Z caller=main.go:245 build_context="(go=go1.11.1, user=root@578ab108d0b9, date=20181106-11:40:44)"
level=info ts=2019-01-09T09:02:22.399315528Z caller=main.go:246 host_details="(Linux 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 prometheus-prometheus-operator-prometheus-0 (none))"
level=info ts=2019-01-09T09:02:22.399346813Z caller=main.go:247 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-01-09T09:02:22.399366848Z caller=main.go:248 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-01-09T09:02:22.401086198Z caller=main.go:562 msg="Starting TSDB ..."
level=info ts=2019-01-09T09:02:22.401121934Z caller=web.go:399 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-01-09T09:02:22.403643711Z caller=main.go:431 msg="Stopping scrape discovery manager..."
level=info ts=2019-01-09T09:02:22.40368408Z caller=main.go:445 msg="Stopping notify discovery manager..."
level=info ts=2019-01-09T09:02:22.403696577Z caller=main.go:467 msg="Stopping scrape manager..."
level=info ts=2019-01-09T09:02:22.403707752Z caller=main.go:441 msg="Notify discovery manager stopped"
level=info ts=2019-01-09T09:02:22.403735389Z caller=main.go:427 msg="Scrape discovery manager stopped"
level=info ts=2019-01-09T09:02:22.403753677Z caller=manager.go:657 component="rule manager" msg="Stopping rule manager..."
level=info ts=2019-01-09T09:02:22.403754242Z caller=main.go:461 msg="Scrape manager stopped"
level=info ts=2019-01-09T09:02:22.403773532Z caller=manager.go:663 component="rule manager" msg="Rule manager stopped"
level=info ts=2019-01-09T09:02:22.403798554Z caller=notifier.go:512 component=notifier msg="Stopping notification manager..."
level=info ts=2019-01-09T09:02:22.403823637Z caller=main.go:616 msg="Notifier manager stopped"
level=error ts=2019-01-09T09:02:22.40392617Z caller=main.go:625 err="opening storage failed: create dir: mkdir /prometheus/wal: permission denied"

$ kubectl get pod prometheus-prometheus-operator-prometheus-0 -o yaml -n prometheus-operator |grep -A 3 "securityContext"
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000

$ egrep -v "^$|#" values.yaml
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: cephfs-prometheus
          accessModes: ["ReadWriteMany"]
          resources:
            requests:
              storage: 1000Gi
        selector: {}

I deployed the latest version of prometheus-operator through the Helm chart with CephFS for data persistence, but I got the error above.

Getting the error below:
prometheus --config.file /etc/prometheus/prometheus.yml
level=info ts=2019-02-21T06:55:02.276634178Z caller=main.go:302 msg="Starting Prometheus" version="(version=2.7.1, branch=HEAD, revision=62e591f928ddf6b3468308b7ac1de1c63aa7fcf3)"
level=info ts=2019-02-21T06:55:02.276773393Z caller=main.go:303 build_context="(go=go1.11.5, user=root@f9f82868fc43, date=20190131-11:16:59)"
level=info ts=2019-02-21T06:55:02.276836343Z caller=main.go:304 host_details="(Linux 4.14.88-88.76.amzn2.x86_64 #1 SMP Mon Jan 7 18:43:26 UTC 2019 x86_64 (none))"
level=info ts=2019-02-21T06:55:02.276909382Z caller=main.go:305 fd_limits="(soft=1024, hard=4096)"
level=info ts=2019-02-21T06:55:02.276962485Z caller=main.go:306 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-02-21T06:55:02.278870308Z caller=main.go:620 msg="Starting TSDB ..."
level=info ts=2019-02-21T06:55:02.278963041Z caller=main.go:489 msg="Stopping scrape discovery manager..."
level=info ts=2019-02-21T06:55:02.278977179Z caller=main.go:503 msg="Stopping notify discovery manager..."
level=info ts=2019-02-21T06:55:02.278991752Z caller=main.go:525 msg="Stopping scrape manager..."
level=info ts=2019-02-21T06:55:02.279001952Z caller=main.go:499 msg="Notify discovery manager stopped"
level=info ts=2019-02-21T06:55:02.279029551Z caller=web.go:416 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-02-21T06:55:02.280004348Z caller=main.go:485 msg="Scrape discovery manager stopped"
level=info ts=2019-02-21T06:55:02.280342642Z caller=manager.go:736 component="rule manager" msg="Stopping rule manager..."
level=info ts=2019-02-21T06:55:02.280409843Z caller=manager.go:742 component="rule manager" msg="Rule manager stopped"
level=info ts=2019-02-21T06:55:02.280471843Z caller=notifier.go:521 component=notifier msg="Stopping notification manager..."
level=info ts=2019-02-21T06:55:02.280532375Z caller=main.go:679 msg="Notifier manager stopped"
level=info ts=2019-02-21T06:55:02.280587559Z caller=main.go:519 msg="Scrape manager stopped"
level=error ts=2019-02-21T06:55:02.280756928Z caller=main.go:688 err="opening storage failed: mkdir data/: permission denied"

I am not running Docker/k8s. This is a basic installation on Amazon Linux.
Need help ASAP.

@JigarS91 the prometheus-operator is only about running Prometheus on Kubernetes; please refer to the Prometheus Users mailing list.

@XiaoMuYi I had the same issue. Did you ever resolve it?

I also hit this issue. After adding the securityContext, it works.
But what is the real root cause? Why not fix it at the source?
This issue is not a new one; it appears again and again. Why is that?
