Charts: [stable/prometheus-operator] Unable to create mmap-ed active query log

Created on 7 Jan 2020 · 3 comments · Source: helm/charts

Describe the bug
Installing the chart produces the scheduling error 'error while running "VolumeBinding" filter plugin for pod "prometheus-pythia-cluster-monitoring-prometheus-0": pod has unbound immediate PersistentVolumeClaims', followed by "Back-off restarting failed container" once the pod does start.
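
For reference, the scheduler events and the crash loop can be confirmed with standard kubectl commands (namespace and pod name taken from the install command and error message in this report; adjust to your setup):

kubectl --namespace monitoring describe pod prometheus-pythia-cluster-monitoring-prometheus-0
kubectl --namespace monitoring get events --sort-by=.lastTimestamp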

Version of Helm and Kubernetes:
Helm: version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Kubernetes:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
stable/prometheus-operator:8.5.3

What happened:
After running helm install --namespace monitoring cluster-monitoring -f prom-oper-values.yaml stable/prometheus-operator with the following values file (prom-oper-values.yaml):

# Define persistent storage for Prometheus (PVC)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard
          resources:
            requests:
              storage: 5Gi
        selector:
          matchLabels: 
            service: prometheus

# Define persistent storage for Grafana (PVC)
grafana:
  ingress:
    enabled: true
    path: /grafana
    hosts:
      - ""
  # Set password for Grafana admin user
  adminPassword: your_admin_password
  persistence:
    enabled: true
    storageClassName: standard
    accessModes: ["ReadWriteOnce"]
    size: 5Gi

# Define persistent storage for Alertmanager (PVC)
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard
          resources:
            requests:
              storage: 5Gi
        selector:
          matchLabels: 
            service: alertmanager
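
The intent here is that the claims generated from each volumeClaimTemplate bind to pre-created PVs via the label selector, storageClassName and requested size. Whether they actually match can be checked after install with (generic commands, namespace as used above):

kubectl get pv --show-labels
kubectl --namespace monitoring get pvc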

PV file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus
  labels: 
    service: prometheus
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/prometheus"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-alertmanager
  labels: 
    service: alertmanager
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/alertmanager"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana
  labels: 
    service: grafana
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/grafana"
    type: DirectoryOrCreate
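
Note that with hostPath and type: DirectoryOrCreate, a directory that does not already exist is created by the kubelet owned by root with mode 0755, so a non-root container cannot write into it. A possible node-side preparation, with the caveat that the UID/GID values below are assumptions (as far as I can tell the stable/prometheus-operator chart defaults to runAsUser 1000 / fsGroup 2000, and the Grafana image runs as 472):

# run on the node that hosts the volumes; ownership values are assumptions
sudo mkdir -p /mnt/data/prometheus /mnt/data/alertmanager /mnt/data/grafana
sudo chown -R 1000:2000 /mnt/data/prometheus /mnt/data/alertmanager
sudo chown -R 472:472 /mnt/data/grafana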

The only failing pod is cluster-monitoring-prometheus-0, with the following output:

level=info ts=2020-01-07T11:32:49.275Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.1, branch=HEAD, revision=6f92ce56053866194ae5937012c1bec40f1dd1d9)"
level=info ts=2020-01-07T11:32:49.275Z caller=main.go:333 build_context="(go=go1.13.1, user=root@88e419aa1676, date=20191017-13:15:01)"
level=info ts=2020-01-07T11:32:49.275Z caller=main.go:334 host_details="(Linux 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 prometheus-pythia-cluster-monitoring-prometheus-0 (none))"
level=info ts=2020-01-07T11:32:49.275Z caller=main.go:335 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-01-07T11:32:49.275Z caller=main.go:336 vm_limits="(soft=unlimited, hard=unlimited)"
level=error ts=2020-01-07T11:32:49.276Z caller=query_logger.go:85 component=activeQueryTracker msg="Error opening query log file" file=/prometheus/queries.active err="open /prometheus/queries.active: permission denied"
panic: Unable to create mmap-ed active query log
goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7ffd9b11afd2, 0xb, 0x14, 0x29db1e0, 0xc0005d8120, 0x29db1e0)
    /app/promql/query_logger.go:115 +0x48c
main.main()
    /app/cmd/prometheus/main.go:364 +0x5229
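
The "permission denied" on /prometheus/queries.active means the Prometheus process, which runs as a non-root user under the chart's default securityContext, cannot write into the root-owned hostPath directory; Kubernetes does not apply fsGroup to hostPath volumes, so either fix the ownership on the node (see the chown sketch above) or relax the securityContext. A hedged sketch of the latter in prom-oper-values.yaml (field path as in stable/prometheus-operator 8.x; running as root is a workaround, not a recommendation):

prometheus:
  prometheusSpec:
    securityContext:
      runAsNonRoot: false
      runAsUser: 0
      fsGroup: 0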

All 3 comments

I have a similar issue, any help on the same?

[root@balaji-nomad-consul-1 charts]# helm install stable/elasticsearch --name-template my-release --set data.persistence.storageClass=ssd,data.storage=100Gi
NAME: my-release
LAST DEPLOYED: Mon Jan 13 16:17:58 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    my-release-elasticsearch-client.default.svc

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace default -l "app=elasticsearch,component=client,release=my-release" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace default $POD_NAME 9200:9200

I am getting:

Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-release-elasticsearch-master-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-release-elasticsearch
    Optional:  false
  my-release-elasticsearch-master-token-pns7x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-elasticsearch-master-token-pns7x
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "my-release-elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "my-release-elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
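
In this elasticsearch case the claim stays pending, which usually means no PV or StorageClass can satisfy data.persistence.storageClass=ssd. The usual checks (claim name taken from the describe output above):

kubectl get storageclass ssd
kubectl --namespace default describe pvc data-my-release-elasticsearch-master-0
kubectl get pv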

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
