Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:27:55Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
What happened:
pod has unbound PersistentVolumeClaim
How to reproduce it (as minimally and precisely as possible):
minikube start
helm init
helm install stable/mongodb-replicaset
Anything else we need to know:
I tried enabling the default storage class addon:
minikube addons enable default-storageclass
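To verify that the addon actually registered a default class, check that one StorageClass carries the (default) marker, since that is what the scheduler and volume controller key off. On minikube the output should look roughly like this (names and provisioner vary by minikube version):

$ kubectl get storageclass
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   1h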
I also see this error after a while:
pod has unbound PersistentVolumeClaims
Readiness probe failed: MongoDB shell version v3.7.3
connecting to: mongodb://127.0.0.1:27017
2018-04-22T11:25:29.926+0000 E QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Connection refused :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
$ kubectl get pv,pvc,pod --all-namespaces
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-f5719d13-461e-11e8-b66d-080027efcf3f 8Gi RWO Delete Bound default/terrifying-bumblebee-mongodb standard 7m

NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/terrifying-bumblebee-mongodb Bound pvc-f5719d13-461e-11e8-b66d-080027efcf3f 8Gi RWO standard 7m

NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/terrifying-bumblebee-mongodb-b9d54b4b7-4lmkh 1/1 Running 0 7m
kube-system pod/etcd-minikube 1/1 Running 0 1h
kube-system pod/kube-addon-manager-minikube 1/1 Running 0 1h
kube-system pod/kube-apiserver-minikube 1/1 Running 2 1h
kube-system pod/kube-controller-manager-minikube 1/1 Running 0 1h
kube-system pod/kube-dns-86f4d74b45-sfj8p 3/3 Running 0 1h
kube-system pod/kube-proxy-wlhdh 1/1 Running 0 1h
kube-system pod/kube-scheduler-minikube 1/1 Running 0 1h
kube-system pod/kubernetes-dashboard-5498ccf677-2vpjw 1/1 Running 0 1h
kube-system pod/storage-provisioner 1/1 Running 0 1h
kube-system pod/tiller-deploy-df4fdf55d-cm6cc 1/1 Running 0 1h
@luisdavim I noticed that the pods are started even while the persistent volume claims are not completely initialized, which leads to a backoff restart. All I had to do was wait until all PVCs were created, and everything went well.
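If it helps anyone scripting installs, a minimal sketch of that wait, reusing the claim name from the output above (the jsonpath form of kubectl wait needs a much newer kubectl than the v1.10 in this thread; the --watch form works everywhere):

# Watch the claims until STATUS flips from Pending to Bound
kubectl get pvc --watch

# Or block until this specific claim is bound (recent kubectl only)
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/terrifying-bumblebee-mongodb --timeout=120s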
I experienced this as well. I solved it by googling (finding nothing but this thread) and refreshing the minikube dashboard, only to discover my problem had vanished. Cheers.
I'm having this same problem while trying to deploy Monocular. It has been ongoing for the past 5 hours.
➜ ~ kubectl get pods
NAME READY STATUS RESTARTS AGE
exciting-donkey-nginx-ingress-controller-86f47c9d57-thg4x 1/1 Running 0 5h
exciting-donkey-nginx-ingress-default-backend-7855bb645-gtvfl 1/1 Running 0 5h
kindred-shrimp-mongodb-78dd6cdcfc-56km4 0/1 Pending 0 5h
kindred-shrimp-monocular-api-69769b9bbb-cr52d 0/1 CrashLoopBackOff 64 5h
kindred-shrimp-monocular-api-69769b9bbb-gsg95 0/1 CrashLoopBackOff 64 5h
kindred-shrimp-monocular-prerender-5f57b7d66d-fng72 1/1 Running 0 5h
kindred-shrimp-monocular-ui-5c5d5d4676-bs7nn 1/1 Running 0 5h
kindred-shrimp-monocular-ui-5c5d5d4676-zpkx7 1/1 Running 0 5h
➜ ~ kubectl describe pod kindred-shrimp-mongodb-78dd6cdcfc-56km4
Name:           kindred-shrimp-mongodb-78dd6cdcfc-56km4
Namespace:      default
Node:           <none>
Labels:         app=kindred-shrimp-mongodb
                pod-template-hash=3488278797
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/kindred-shrimp-mongodb-78dd6cdcfc
Containers:
  kindred-shrimp-mongodb:
    Image:      bitnami/mongodb:3.4.9-r1
    Port:       27017/TCP
    Host Port:  0/TCP
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MONGODB_ROOT_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'kindred-shrimp-mongodb'>  Optional: false
      MONGODB_USERNAME:
      MONGODB_PASSWORD:       <set to the key 'mongodb-password' in secret 'kindred-shrimp-mongodb'>  Optional: false
      MONGODB_DATABASE:
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9mkf2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  kindred-shrimp-mongodb
    ReadOnly:   false
  default-token-9mkf2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9mkf2
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  5m (x1064 over 5h)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 5 times)
If I do a kubectl describe on the persistent volume claim, this is what I see:
➜ ~ kubectl describe persistentvolumeclaim kindred-shrimp-mongodb
Name:          kindred-shrimp-mongodb
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ----                 ----                         -------
  Normal  FailedBinding  24s (x1283 over 5h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
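The event message names both halves of the binding problem: there is no pre-created PersistentVolume the claim could bind to statically, and the claim sets no storage class (and the cluster has no default one) for dynamic provisioning. Assuming a similarly configured cluster, both conditions can be checked directly:

# Static binding needs an existing, matching PV
kubectl get pv

# Dynamic provisioning needs a StorageClass, either named on the claim or marked (default)
kubectl get storageclass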
After hacking at this for a while, I found a solution:
kubectl apply --filename storageclass.yml (see below)
kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

storageclass.yml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: us-west-2a, us-west-2b, us-west-2c
NOTE: You'll need to specify the correct AWS Availability Zones in your storageclass.yml file
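If you're unsure which zones your cluster actually spans, one way to list the zones in a region, assuming the AWS CLI is configured for the same account:

aws ec2 describe-availability-zones --region us-west-2 --query 'AvailabilityZones[].ZoneName' --output text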
After fixing this, when I run kubectl describe persistentvolumeclaim, I can see that it provisioned a new volume and attached it successfully to the MongoDB pod.
➜ eks kubectl describe persistentvolumeclaim
Name:          historical-quetzal-mongodb
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-c0247cba-8bad-11e8-bfdb-024b06a5532e
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      8Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  ----    ------                 ---- ----                         -------
  Normal  ProvisioningSucceeded  4m   persistentvolume-controller  Successfully provisioned volume pvc-c0247cba-8bad-11e8-bfdb-024b06a5532e using kubernetes.io/aws-ebs
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Hi. I'm having this same issue when trying to install the Helm chart on a local cluster.
Same issue here too.
Hi all, make sure you have the "storage-provisioner" addon enabled. You should see a hostpath-based "storage-provisioner" pod running in the kube-system namespace.
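On minikube that translates roughly to the following (addon names can differ slightly between minikube versions):

# See which addons are enabled
minikube addons list

# Enable the hostpath provisioner and the default storage class
minikube addons enable storage-provisioner
minikube addons enable default-storageclass

# The provisioner shows up as a pod in the kube-system namespace
kubectl --namespace kube-system get pods | grep storage-provisioner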