Charts: Helm won't create PV and PVC for Prometheus

Created on 11 Feb 2019 · 5 comments · Source: helm/charts

Output of helm version: 2.12.3

Output of kubectl version: 1.13.2

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS

Using the Prometheus chart provided in this repo, I am trying to deploy Prometheus with PersistentVolumes backed by EBS. However, for some reason it only creates the PV and PVC for Alertmanager and ignores the same objects for the Prometheus server.
My configs:
`alertmanager-pvc.yaml`

```
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  {{- if .Values.alertmanager.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.alertmanager.labels" . | nindent 4 }}
  name: {{ template "prometheus.alertmanager.fullname" . }}
spec:
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if .Values.alertmanager.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
{{- else }}
  storageClassName: "{{ .Values.alertmanager.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.alertmanager.persistentVolume.size }}"
{{- end -}}
{{- end -}}
{{- end -}}
```

`alertmanager-pv.yaml`

```
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  {{- if .Values.alertmanager.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.alertmanager.labels" . | nindent 4 }}
  name: {{ template "prometheus.alertmanager.fullname" . }}
spec:
  capacity:
    storage: "{{ .Values.alertmanager.persistentVolume.size }}"
  persistentVolumeReclaimPolicy: "{{ .Values.alertmanager.persistentVolume.ReclaimPolicy }}"
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if .Values.alertmanager.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: "{{ .Values.alertmanager.persistentVolume.volumeID }}"
{{- end }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  nfs:
    server: "{{ .Values.alertmanager.persistentVolume.nfs.server }}"
  mountOptions:
    {{- range .Values.alertmanager.persistentVolume.nfs.options }}
    - {{ . }}
    {{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
```

`server-pv.yaml`

```
{{- if not .Values.server.statefulSet.enabled -}}
{{- if and .Values.server.enabled .Values.server.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  {{- if .Values.server.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.server.labels" . | nindent 4 }}
  name: {{ template "prometheus.server.fullname" . }}
spec:
  capacity:
    storage: "{{ .Values.server.persistentVolume.size }}"
  persistentVolumeReclaimPolicy: "{{ .Values.server.persistentVolume.ReclaimPolicy }}"
  accessModes:
{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }}
{{- if .Values.server.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.server.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: "{{ .Values.server.persistentVolume.volumeID }}"
{{- end }}
{{- if (eq "nfs" .Values.server.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  nfs:
    server: "{{ .Values.server.persistentVolume.nfs.server }}"
  mountOptions:
    {{- range .Values.server.persistentVolume.nfs.options }}
    - {{ . }}
    {{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
```

`server-pvc.yaml`

```
{{- if not .Values.server.statefulSet.enabled -}}
{{- if and .Values.server.enabled .Values.server.persistentVolume.enabled -}}
{{- if not .Values.server.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  {{- if .Values.server.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.server.labels" . | nindent 4 }}
  name: {{ template "prometheus.server.fullname" . }}
spec:
  accessModes:
{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }}
{{- if .Values.server.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.server.persistentVolume.storageClass) }}
  storageClassName: "gp2"
{{- else }}
  storageClassName: "{{ .Values.server.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.server.persistentVolume.size }}"
{{- end -}}
{{- end -}}
{{- end -}}
```

Pod describe for the Prometheus server says:

```
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  31s (x2 over 31s)  default-scheduler  persistentvolumeclaim "prometheus-prometheus" not found
```
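
The event indicates that the server Deployment references a claim that appears to be named after the chart's server fullname (here `prometheus-prometheus`), and no PVC with that name exists. If the goal is simply EBS-backed storage with dynamic provisioning, the custom PV templates may not be needed at all; a minimal values sketch, assuming the stock `stable/prometheus` value names and an existing `gp2` StorageClass in the cluster, would let the chart render its own `server-pvc.yaml`:

```yaml
# values.yaml (sketch) -- assumes the stock stable/prometheus value names
# and an existing "gp2" StorageClass backed by the aws-ebs provisioner
server:
  persistentVolume:
    enabled: true
    size: 8Gi
    storageClass: gp2
    accessModes:
      - ReadWriteOnce
```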



All 5 comments

For context, I am transferring this to the charts issue queue, as it seems more related to the Prometheus chart and its usage than to Helm itself.

A workaround is to create the two PVs yourself until this is added to the chart. Using /data1 as an example:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prom2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
```
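For the hand-created PVs above to be used, the chart's generated PVCs have to request the matching storage class. A minimal values sketch, assuming the stock `stable/prometheus` value names (`alertmanager.persistentVolume.*` and `server.persistentVolume.*`):

```yaml
# values.yaml (sketch) -- point both generated PVCs at the pre-created
# local-storage PVs; requested sizes must not exceed the 2Gi capacity above
alertmanager:
  persistentVolume:
    enabled: true
    size: 2Gi
    storageClass: local-storage
server:
  persistentVolume:
    enabled: true
    size: 2Gi
    storageClass: local-storage
```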
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

Has this issue been solved? I'm still having the same problem. I've updated Helm and I'm using chart version prometheus-operator-5.19.0.
