Charts: [stable/prometheus-operator] how to use an existing Persistent Volume Claim

Created on 15 Nov 2018 · 7 comments · Source: helm/charts

Is this a request for help?:

yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
helm: v2.11.0
k8s: v1.11.2

Which chart:
stable/prometheus-operator

What happened:
I have already created a PV and a PVC, and I want to use the existing PVC for the Prometheus server. But in prometheus-operator, I found that values.yaml only has a volumeClaimTemplate configuration. According to the prometheus-operator storage user guide, volumeClaimTemplate is used to create PVCs dynamically. So I wonder how to use a PVC that I have already created, like volumeMounts in the prometheus chart.

link
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md

volumeClaimTemplate:
    description: PersistentVolumeClaim is a user's request for and claim
     to a persistent volume

What you expected to happen:
I want to use the PVC that I have already created.
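(For context, a pre-created PV and PVC like the ones the question describes might look roughly like the sketch below; the names, size, and NFS backend are purely illustrative.)

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data                 # hypothetical name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                                  # illustrative backend only
    server: nfs.example.local
    path: /exports/prometheus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-claim           # hypothetical name
spec:
  storageClassName: ""                  # empty class: bind to a statically provisioned PV
  volumeName: prometheus-data           # bind this claim to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```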

Most helpful comment

You should only create the PV (_not_ the PVC) beforehand. prometheus-operator can automatically create a PVC based on an existing PV.

For example, my default-values.yaml:

```
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          # Name of the PV you created beforehand
          volumeName: MY-PREEXISTING-PV
          accessModes: ["ReadWriteOnce"]
          # StorageClass should match your existing PV's storage class
          storageClassName: gp2
          resources:
            requests:
              # Size below should match your existing PV's size
              storage: 500Gi
```

Keep in mind, if you do a helm delete on your chart, the PVC will not be cleaned up.

All 7 comments

You should only create the PV (_not_ the PVC) beforehand. prometheus-operator can automatically create a PVC based on an existing PV.

For example, my default-values.yaml:

```
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          # Name of the PV you created beforehand
          volumeName: MY-PREEXISTING-PV
          accessModes: ["ReadWriteOnce"]
          # StorageClass should match your existing PV's storage class
          storageClassName: gp2
          resources:
            requests:
              # Size below should match your existing PV's size
              storage: 500Gi
```

Keep in mind, if you do a helm delete on your chart, the PVC will not be cleaned up.
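(For reference, the pre-created PV that `volumeName: MY-PREEXISTING-PV` points at might look like the sketch below. The EBS backing and volume ID are assumptions made to match the gp2 storage class, and real PV names must be lowercase.)

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-preexisting-pv            # actual PV names must be lowercase DNS names
spec:
  storageClassName: gp2              # must match the claim template's storageClassName
  capacity:
    storage: 500Gi                   # must be at least the size requested by the claim
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:              # assumed backend, since gp2 is an EBS volume type
    volumeID: vol-0123456789abcdef0  # hypothetical EBS volume ID
    fsType: ext4
```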

Thank you so much. I found that the root cause is that the storage I am using doesn't support dynamic volume provisioning. I will close this issue.
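(For anyone hitting the same limitation: a storage class with no dynamic provisioner looks roughly like the sketch below; claims that use it only bind to manually created PVs, which is why the `volumeName` approach above is needed. The name is hypothetical.)

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual                              # hypothetical name
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer
```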

Hi,

I got it working with this hack - use the SAME names and labels for your existing PV and PVC.

If you would like to keep the data of the current persistent volumes, it should be possible to attach existing volumes to new PVCs and PVs that follow the naming conventions of the new chart. For example, to use an existing disk (a GCE persistent disk in this case) for a Helm release named prometheus-operator, the following resources can be created:

Create the disk references before deploying the operator:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-prometheus-operator-prometheus-0
spec:
  storageClassName: "standard"
  capacity:
    storage: 64Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gke-dev-romiko-aae-pvc-c8971937-85f8-2566-b80e-710dfbc17cbb
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: prometheus
    prometheus: prometheus-operator-prometheus
  name: prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
```

**In Prometheus Helm Value files**

```
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: standard
          resources:
            requests:
              storage: 64Gi
```

 romiko  DESKTOP  mnt  c  Windows  System32  %  k get pvc
NAME                                                                                       STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0   Bound    pvc-prometheus-operator-prometheus-0   64Gi       RWO            standard       4m25s
pv-claim-grafana                                                                           Bound    pv-grafana                             10Gi       RWO            standard       47m

romiko  DESKTOP  mnt  c  Windows  System32  %  k get pv
NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                         STORAGECLASS   REASON   AGE
pv-grafana                             10Gi       RWO            Retain           Bound    service-compliance/pv-claim-grafana                                                                           standard                47m
pvc-prometheus-operator-prometheus-0   64Gi       RWO            Retain           Bound    service-compliance/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0   standard                4m29s

romiko  DESKTOP  mnt  c  Windows  System32  % 

@Romiko I also did this myself and it works. However, when I `helm upgrade` the release, I get this error and can't figure it out:

Error: error validating "": error validating data: ValidationError(Prometheus.spec.storage.volumeClaimTemplate): unknown field "selector" in com.coreos.monitoring.v1.Prometheus.spec.storage.volumeClaimTemplate

Are you able to `helm upgrade` your release?

The indentation for selector in the example in the default values seems to be off by two spaces.

It should be:

    storage: {}
    # volumeClaimTemplate:
    #   spec:
    #     storageClassName: gluster
    #     accessModes: ["ReadWriteOnce"]
    #     resources:
    #       requests:
    #         storage: 50Gi
    #     selector: {}  <----- indentation adjusted by two spaces
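(Un-commented, and with the corrected indentation, that block would read roughly as follows; the storage class and size are just the placeholders from the excerpt above.)

```
storage:
  volumeClaimTemplate:
    spec:
      storageClassName: gluster
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi
      selector: {}
```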

@Romiko Thanks for the advice about labels! I successfully replaced the PV/PVC with others using a different storage class, without touching Helm.
