Charts: Pod is stuck initializing forever

Created on 25 Jun 2018 · 8 comments · Source: helm/charts

Is this a request for help?:

YES

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Which chart:
mongodb-replicaset

What happened:

helm install --name test stable/mongodb-replicaset --set persistentVolume.storageClass=nfs

I ran the command above and verified that the PVC is bound for replica 0:

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-test-mongodb-replicaset-0   Bound    pvc-8417b31c-78b1-11e8-b6ef-005056b2c7b4   10Gi       RWO            nfs            40m

But the pod stays stuck in PodInitializing and never starts:

Error from server (BadRequest): container "mongodb-replicaset" in pod "test-mongodb-replicaset-0" is waiting to start: PodInitializing.

What you expected to happen:

The pod should finish initializing and the replica set should become ready.
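When a pod is stuck in PodInitializing, the Events section of kubectl describe usually says which init container or volume is blocking it. A minimal check, using the pod, PVC, and PV names from this report (yours may differ):

```shell
# Show the pod's events (scheduling, volume attach/mount, init containers).
kubectl describe pod test-mongodb-replicaset-0

# Confirm the PVC and its backing PV are both healthy.
kubectl get pvc datadir-test-mongodb-replicaset-0
kubectl get pv pvc-8417b31c-78b1-11e8-b6ef-005056b2c7b4
```

These commands need access to the cluster where the chart was installed; add -n <namespace> if the release is not in the default namespace.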

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:



All 8 comments

I am facing the same issue. Is there any solution to this problem?

Unable to mount volumes for pod "mongo-dev-1-mongodb-replicaset-0_mongo-dev(xxx)": timeout expired waiting for volumes to attach/mount for pod "mongo-dev"/"mongo-dev-1-mongodb-replicaset-0". list of unattached/unmounted volumes=[datadir config init keydir workdir configdir default-token-q8gds]
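A "timeout expired waiting for volumes to attach/mount" error is usually a storage problem rather than a chart problem. A sketch of where to look, assuming the pod and namespace names quoted in the error above:

```shell
# The pod's events show which volume failed to attach or mount.
kubectl describe pod mongo-dev-1-mongodb-replicaset-0 -n mongo-dev

# Check that the data PVC is Bound and its events are clean.
kubectl describe pvc datadir-mongo-dev-1-mongodb-replicaset-0 -n mongo-dev

# Recent warning events in the namespace often name the failing mount.
kubectl get events -n mongo-dev --field-selector type=Warning
```

Node-level mount failures (e.g. a missing NFS client on the node) only appear in the kubelet log on the node where the pod was scheduled.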

Same issue. No log is available except for the 'is waiting to start: PodInitializing' message.

Try kubectl logs {{pod_name}} -c init-config to check the message.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

Is there any solution to fix this?

Try kubectl logs {{pod_name}} -c init-config to check the message.

I got this error: Error from server (BadRequest): container init-config is not valid for pod

@arunkumarrspl To be sure that your pod has 'init-config' inside the 'Init Containers' section, just run kubectl describe pod <your pod name> and read the first entry of the Init Containers section. In my case it was copy-default-config, so instead of kubectl logs {{pod_name}} -c init-config I ran kubectl logs {{pod_name}} -c copy-default-config.
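The lookup above can also be done without reading through kubectl describe output, by asking for the init container names directly via jsonpath. A sketch, using the pod name from the original report:

```shell
POD=test-mongodb-replicaset-0

# List the pod's init containers by name instead of guessing 'init-config'.
kubectl get pod "$POD" -o jsonpath='{.spec.initContainers[*].name}{"\n"}'

# Fetch the logs of the first init container, whatever the chart named it
# (e.g. copy-default-config in the comment above).
INIT=$(kubectl get pod "$POD" -o jsonpath='{.spec.initContainers[0].name}')
kubectl logs "$POD" -c "$INIT"
```

This avoids the "container init-config is not valid for pod" error, since the container name is read from the pod spec rather than assumed.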
