Hi, I read in the documentation that minikube doesn't support Persistent Volumes. But does it support persistent volume claims? Our project only depends on that last bit; whether the claim is linked to a PersistentVolume based on HostPath or NFS makes no difference for us. So if claims work and we can use a HostPath-based PersistentVolume, that would be great and we could start using minikube for local development.
Thanks!
Great question. I haven't tried it yet, but I think hostPath-backed PersistentVolumes and Claims should work. Please let me know if you get a chance to try it. If so, we should add some documentation about how to use this.
Hey,
I just tried it and it works! I made a PersistentVolume with this config:
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/somepath/data01"
```
Then made a claim with this config:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```
Then used it in a pod:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: gcr.io/google_containers/echoserver:1.4
      ports:
        - containerPort: 8080
          protocol: TCP
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```
I sshed in and verified the claim is mapped to the VM host disk correctly.
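In case it's useful to others, roughly the commands for that check (the file names are just placeholders for wherever you saved the three configs above):

```sh
# Create the volume, claim, and pod (hypothetical file names)
kubectl create -f pv.yaml -f pvc.yaml -f pod.yaml

# Write a file through the pod's mount...
kubectl exec mypod -- sh -c 'echo hello > /var/www/html/test.txt'

# ...then look for it on the VM's host path
minikube ssh
# (now inside the minikube VM)
cat /somepath/data01/test.txt
```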
I'll leave this issue open to document this feature.
Wow, that was quick @dlorenc, thanks!
@dlorenc
Hi, I am new to Kubernetes and I don't know how to create a PersistentVolume... could you tell me what the command line for that is? Or can it be created through an interface?
@akdj Put @dlorenc's config into a YAML file, for example pv.yaml. Then run `kubectl create -f pv.yaml`. If you are trying Kubernetes for the first time, you may want to install and run minikube on your local machine first.
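A minimal sketch of that sequence (pv.yaml is just an example name):

```sh
# Save the PersistentVolume config from above as pv.yaml, then create it
kubectl create -f pv.yaml

# Confirm the volume exists
kubectl get pv pv0001
```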
@dlorenc, I have created the pv, pvc and pod, but I am still not able to see the data in the container when I ssh into it.
I did minikube ssh, sudoed, and then created /data/data01.
I can see the mount, but no contents - actually /data/data01 is not mounted. Am I doing anything wrong? I am new to minikube as well.
I also tried another example from https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
but I am getting "permission denied".
Any help is much appreciated.
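(For anyone hitting the same thing, a couple of generic commands that help narrow down mount problems, using the resource names from the examples above:)

```sh
# Binding status and events for the claim
kubectl describe pvc myclaim

# Mount/attach errors show up under Events here
kubectl describe pod mypod
```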
@lijju I'm in the same position here.
I tried both the example in the Kubernetes documentation and @dlorenc's example above, but neither seems to work.
I can successfully create the pv, pvc and pod, and when I ssh into the host I can see the files at the host path, but inside the pod nothing is there.
Running minikube version: 0.19.1
VirtualBox: 5.1.22
I guess only hostPath volumes work with minikube.
@dlorenc I just tried your configuration above and it does not work for me:
```console
; kubectl get pv pv0001
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv0001    10Gi       RWO           Retain          Available                                      3m

; kubectl get pvc myclaim
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
myclaim   Bound     pvc-7eb75f94-49fc-11e7-83ab-525400dd1f77   8Gi        RWO           standard       4m
```
So minikube is creating its own storage instead of honouring the persistent volume I created.
_Am I doing something wrong?_
PS: I did try with NFS as well and got the same result: a new temp data store is created.
Minikube version: v0.19.1, running on CentOS 7 with KVM, in case that matters.
I am getting the same behaviour as @kierun.
Even though the pv is created successfully before the pvc, when I create the pvc it does not bind to the existing volume; rather, it creates a completely new one.
Hey,
I think some of this documentation needs to be updated to take into account some behavior changes in k8s 1.6. Minikube configures a dynamic provisioner by default, and in k8s 1.6 clusters it takes precedence over manually created hostPath volumes.
You can see this blog post for a bit more info: http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html
but to use a volume you created manually, you'll need to tell the claim not to use the default storage class. You can add something like this to your pvc:
```yaml
spec:
  storageClassName: ""
```
to indicate you don't want to use the default storage class.
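You can also list the storage classes yourself to see the default provisioner minikube sets up; it's the same `standard` class that appears in the pvc output above:

```sh
# List storage classes; minikube's default provisioner appears as "standard"
kubectl get storageclass
```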
You also need to specify the volume name:
```yaml
spec:
  storageClassName: ""
  volumeName: pv0001
```
volume.yaml:
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/somepath/data01"
```
claim.yaml:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: pv0001
  resources:
    requests:
      storage: 8Gi
```
Status:
```console
$ kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound     pv0001    10Gi       RWO                           1m
```
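To reproduce that binding, assuming the two files above are saved as volume.yaml and claim.yaml:

```sh
# Create the volume first, then the claim that pins it by name
kubectl create -f volume.yaml
kubectl create -f claim.yaml

# The claim should now be Bound to pv0001, not to a dynamically provisioned volume
kubectl get pvc myclaim
```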