Minikube v0.21.0: hostPath PVCs do not bind (Pending forever).

Created on 4 Aug 2017 · 21 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Minikube version (use minikube version):
minikube version: v0.21.0

Environment:

  • OS (e.g. from /etc/os-release):
    Red Hat Enterprise Linux Workstation release 7.3 (Maipo)
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName):
    kvm

  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
    minikube-v0.23.0.iso

  • Install tools:

  • Others:

What happened:
HostPath PVCs will not bind to PVs -- PVCs stuck in Pending forever.

What you expected to happen:
HostPath PVCs successfully bind, as they did in Minikube v0.20.0.

How to reproduce it (as minimally and precisely as possible):
Create test.yml:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-pv0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: test-pv0

Execute: kubectl apply -f test.yml
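
To see the failure, check the claim after applying (illustrative commands; exact output will vary):

kubectl get pv,pvc
kubectl describe pvc test-claim0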

Anything else we need to know:
This worked fine in minikube v0.20.0 -- it seems to be a regression in the newest version. Any ideas or workarounds would be greatly appreciated!

help wanted kind/documentation lifecycle/rotten

Most helpful comment

Could you please run kubectl describe pv ?

I ran into a similar issue and resolved it by explicitly defining the storage class in my pv definition.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard   # <---- explicitly set the class
  hostPath:
      path: /data/postgresql/
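
For completeness, the claim that binds to a PV like this needs to carry the same class; a minimal sketch of a matching PVC (the claim name here is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi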

All 21 comments

@bgehman I ran into the same issue; deleting the default storage class and recreating it fixed this:

minikube addons disable default-storageclass
# storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: k8s.io/minikube-hostpath
kubectl apply -f storageclass.yaml
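
Afterwards, a quick way to confirm the default class is back (output format varies by kubectl version):

kubectl get storageclass
# expect "standard" listed as (default) with provisioner k8s.io/minikube-hostpath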

Could you please run kubectl describe pv ?

I ran into a similar issue and resolved it by explicitly defining the storage class in my pv definition.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard   # <---- explicitly set the class
  hostPath:
      path: /data/postgresql/

@jacobfederer Defining the storageClassName in the PV definition didn't seem to have any effect.

Minikube v0.20.0 (working)

$ kubectl describe pv test-pv0
Name:       test-pv0
Labels:     <none>
StorageClass:   
Status:     Bound
Claim:      default/test-claim0
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:   1Gi
Message:    
Source:
    Type:   HostPath (bare host directory volume)
    Path:   /data/test
No events.


$ kubectl describe pvc test-claim0 
Name:       test-claim0
Namespace:  default
StorageClass:   
Status:     Bound
Volume:     test-pv0
Labels:     <none>
Capacity:   1Gi
Access Modes:   RWO
No events.

Minikube v0.21.0 (broken -- with and without adding storageClassName: standard to the PV)

$ kubectl describe pv test-pv0    
Name:       test-pv0
Labels:     <none>
StorageClass:   
Status:     Available
Claim:      
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:   1Gi
Message:    
Source:
    Type:   HostPath (bare host directory volume)
    Path:   /data/test
Events:
  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----                -------------   --------    ------      -------
  1m        13s     8   {persistentvolume-controller }          Warning     VolumeMismatch  Volume's size is smaller than requested or volume's class does not match with claim


$ kubectl describe pvc test-claim0
Name:       test-claim0
Namespace:  default
StorageClass:   
Status:     Pending
Volume:     test-pv0
Labels:     <none>
Capacity:   0
Access Modes:   
No events.

I figured it out. My kubectl client was back-level at version 1.5.2 while the new minikube Kubernetes server version is 1.7.0. Upgrading my client to 1.7.0 made the problem go away. Trying to recreate the StorageClass object suggested by @spuranam was failing, and that led me to the client version mismatch.

I also had to add storageClassName: standard to the PV definition (as @jacobfederer mentioned).

I guess the devs can close this, or suggest an upstream warning when client/server versions mismatch and compatibility can't be guaranteed. Thanks to all who helped.
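
Putting those two findings together, a version of the original test.yml that binds on newer minikube looks roughly like this (a sketch that assumes minikube's default standard class):

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-pv0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: test-pv0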

I'm going to leave this open and tag it as something we should add some documentation for.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle stale

I'm facing this issue now.

Normal ExternalProvisioning 4s (x9 over 2m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator

minikube version: v0.25.0

kubectl version #=> Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2018-01-26T19:04:38Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

Docker version 18.02.0-ce, build fc4de44

/remove-lifecycle rotten

@elgalu a couple of things to check:

  • Ensure your kubectl client and server versions match (kubectl version --short=true); if not, change your client to match your server.
  • Check that the storageClass of the PVC and PV match: kubectl get pv and kubectl get pvc -n <namespace>

Failures to bind hostPath volumes typically fall into one or both of those cases.

kubectl version --short=true
#=> Client Version: v1.9.3
#=> Server Version: v1.9.0

I'm hitting this problem too, and my versions match exactly:

  Warning  FailedScheduling       4m (x2 over 4m)  default-scheduler  PersistentVolumeClaim is not bound: "pgdata-acid-minimal-cluster-0"
Client Version: v1.8.0
Server Version: v1.8.0

... this seems to be a general issue with PVCs since 0.21?

... and, after nuking the minikube VM and starting everything over from scratch, it now runs without the PVC issue. No changes to manifests. Clearly there's some other triggering condition here that I don't know about.

Same here -- I moved away from the none driver to kvm2 and everything works!

minikube start --vm-driver=kvm2 --memory=8192 --disk-size=40g

I also hit this issue.
A few days earlier something happened to the docker daemon and I restarted it. After that the storage-provisioner pod went to status Evicted. I deleted the pod and created a new one from /etc/kubernetes/addons/storage-provisioner.yaml, and everything came right.
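
A rough sketch of that recovery, assuming the addon manifest does live at that path inside the minikube VM:

kubectl -n kube-system delete pod storage-provisioner
minikube ssh -- sudo cat /etc/kubernetes/addons/storage-provisioner.yaml | kubectl apply -f -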

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Client Version: v1.15.0
Server Version: v1.15.0

VM: none

E0627 22:06:09.754834   22524 logs.go:155] Failed to list containers for "storage-provisioner": running command: docker ps -a --filter="name=storage-provisioner" --format="{{.ID}}"
output: WARNING: Error loading config file: /home/user/.docker/config.json: open /home/user/.docker/config.json: permission denied
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json?all=1&filters=%7B%22name%22%3A%7B%22storage-provisioner%22%3Atrue%7D%7D: dial unix /var/run/docker.sock: connect: permission denied
: running command: docker ps -a --filter="name=storage-provisioner" --format="{{.ID}}"
.: exit status 1

Chowning ~/.docker/config.json helped.
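
For example (assuming the file should be owned by your login user):

sudo chown "$USER":"$USER" ~/.docker/config.json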
