I created the following StorageClass called 'standard-persist' (using the built-in 'standard' StorageClass as a template):
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: standard-persist
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
volumeBindingMode: Immediate
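As a quick sanity check (generic kubectl, included here for completeness), the policy is recorded on the class itself; given the manifest above, this should print Retain:

$ kubectl get storageclass standard-persist -o jsonpath='{.reclaimPolicy}'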
I then applied the following PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-db-data-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-persist
This ended up creating:
$ kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
pvc-db-data-storage   Bound    pvc-5a3887c3-4efc-11e9-8cda-0800271f16b7   1Gi        RWO            standard-persist   7s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS       REASON   AGE
pvc-5a3887c3-4efc-11e9-8cda-0800271f16b7   1Gi        RWO            Delete           Bound    default/pvc-db-data-storage   standard-persist            9s
As you can see, the reclaim policy still shows as 'Delete' even though it should be 'Retain'.
I can, however, work around this problem by manually changing the reclaim policy of the PersistentVolume.
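For anyone who needs it, the manual workaround is the documented kubectl patch for changing a PV's reclaim policy (using the PV name from the output above):

$ kubectl patch pv pvc-5a3887c3-4efc-11e9-8cda-0800271f16b7 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'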
My environment setup:
$ minikube version
minikube version: v0.35.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Thank you for filing this!
This seems like a bug in @r2d4's external-storage
https://github.com/kubernetes/minikube/blob/master/vendor/github.com/r2d4/external-storage/lib/controller/controller.go#L779
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I wonder if this is still an issue now that we are no longer using the @r2d4 fork.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I have just retested with the following version:
$ minikube version
minikube version: v1.7.2
commit: 50d543b5fcb0e1c0d7c27b1398a9a9790df09dfb
And here's the test I performed:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: standard-persist
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
volumeBindingMode: Immediate
EOF
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-db-data-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-persist
EOF
And here are the results:
$ kubectl get storageclasses.storage.k8s.io -o wide
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  5h9m
standard-persist     k8s.io/minikube-hostpath   Retain          Immediate           false                  21s
$ kubectl get pvc -o wide
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE   VOLUMEMODE
pvc-db-data-storage   Bound    pvc-0bc1a715-b9e7-44a9-9ce5-ae55cca467ed   1Gi        RWO            standard-persist   33s   Filesystem
$ kubectl get pv -o wide
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS       REASON   AGE   VOLUMEMODE
pvc-0bc1a715-b9e7-44a9-9ce5-ae55cca467ed   1Gi        RWO            Delete           Bound    default/pvc-db-data-storage   standard-persist            39s   Filesystem
So it looks like this issue isn't fixed yet.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
Seems we are still hitting this issue.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3+k3s1", GitCommit:"5b17a175ce333dfb98cb8391afeb1f34219d9275", GitTreeState:"clean", BuildDate:"2020-02-27T07:28:53Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3+k3s1", GitCommit:"5b17a175ce333dfb98cb8391afeb1f34219d9275", GitTreeState:"clean", BuildDate:"2020-02-27T07:28:53Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Here is the StatefulSet I deployed:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: mynamespace
  labels:
    app.kubernetes.io/name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres
  volumeClaimTemplates:
    - metadata:
        name: postgres
        labels:
          app.kubernetes.io/name: postgres
      spec:
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: local-path
        resources:
          requests:
            storage: 10Gi
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
pvc-75320162-1685-463b-b092-6bc6b369e9c9   10Gi       RWO            Delete           Bound    mynamespace/postgres-postgres-0   local-path              18m
@vvanouytsel: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@Sher-Chowdhury: Reopened this issue.
It seems I misinterpreted this issue.
I was setting persistentVolumeReclaimPolicy in the StatefulSet's volumeClaimTemplates, which is not a valid field there.
Instead, I have to create a separate StorageClass and reference the one I want from the claim template.
The StorageClass defines the reclaim policy via its reclaimPolicy field.
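A minimal sketch of what I mean, assuming the stock k3s local-path provisioner (rancher.io/local-path); the class name retained-local-path is my own invention:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-local-path
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

The volumeClaimTemplates entry would then drop the invalid persistentVolumeReclaimPolicy field and set storageClassName: retained-local-path instead.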
/reopen
Even if you create a StorageClass with reclaimPolicy: Retain, it still does not work; the provisioned PV keeps the 'Delete' policy.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.