Is this a request for help?: YES
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Kubernetes: 1.10.3
Helm: 2.9.1
Which chart:
stable/mongodb-replicaset version 3.5.1
What happened:
After running helm install --name flavr8 -f mongodb-values.yaml stable/mongodb-replicaset with mongodb-values.yaml containing only replicas: 1, the pod is stuck in status init:2/3. The pod description shows the bootstrap init container stuck on the peer-finder.
    /work-dir from workdir (rw)
  bootstrap:
    Container ID:   docker://aad03d61b39cd66b35fee5b5eab734691ec278273732f2926730b706ca2d0542
    Image:          mongo:3.6
    Image ID:       docker-pullable://mongo@sha256:3e00936a4fbd17003cfd33ca808f03ada736134774bfbc3069d3757905a4a326
    Port:           <none>
    Host Port:      <none>
    Command:
      /work-dir/peer-finder
    Args:
      -on-start=/init/on-start.sh
      -service=flavr8-mongodb-replicaset
    State:          Running
      Started:      Sun, 08 Jul 2018 13:43:34 -0500
    Ready:          False
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  default (v1:metadata.namespace)
      REPLICA_SET:    rs0
    Mounts:
      /data/configdb from configdir (rw)
      /data/db from datadir (rw)
      /init from init (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4rrn (ro)
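For anyone debugging the same symptom, the quickest place to look is the bootstrap init container itself. A minimal sketch, assuming the pod is named flavr8-mongodb-replicaset-0 (check kubectl get pods for the real name); /work-dir/log.txt is mentioned in a later comment in this thread as where on-start.sh writes its output:

kubectl logs flavr8-mongodb-replicaset-0 -c bootstrap                            # peer-finder output
kubectl exec flavr8-mongodb-replicaset-0 -c bootstrap -- cat /work-dir/log.txt   # on-start.sh / mongod log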
What you expected to happen:
The pods should start up normally even if there is only one replica (i.e. no peers). So far it has been stuck on init:2/3 for 18m.
How to reproduce it (as minimally and precisely as possible):
1) Run on a single-node k8s cluster from Docker on Windows
2) Change the replicas value to 1 (see the minimal values file after these steps)
3) Weep bitter tears.
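For completeness, the mongodb-values.yaml referenced in step 2 contains only the single override described above:

replicas: 1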
Anything else we need to know:
I am running k8s from Docker for Windows (edge channel).
I am facing a similar issue, but with the standard replicas: 3. The pod gets stuck in the bootstrap init step forever whenever I use persistent volumes; it works fine when I don't use them. The PVC is bound successfully, but the pod then hangs in the bootstrap init container running /work-dir/peer-finder.
My install command - helm install --name test -f values.yaml stable/mongodb-replicaset
values.yaml -
replicas: 3
port: 27017
replicaSetName: rs0
podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 2
auth:
  enabled: false
  # adminUser: username
  # adminPassword: password
  # key: keycontent
  # existingKeySecret:
  # existingAdminSecret:
installImage:
  repository: k8s.gcr.io/mongodb-install
  tag: 0.6
  pullPolicy: IfNotPresent
image:
  repository: mongo
  tag: 3.6
  pullPolicy: IfNotPresent
extraVars: {}
podAnnotations: {}
securityContext:
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  ## mongodb-replicaset data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  #storageClass: "nfs"
  accessModes:
    - ReadWriteMany
  size: 5Gi
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs"
serviceAnnotations: {}
tls:
  # Enable or disable MongoDB TLS support
  enabled: false
  # Please generate your own TLS CA by generating it via:
  # $ openssl genrsa -out ca.key 2048
  # $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
  # After that you can base64 encode it and paste it here:
  # $ cat ca.key | base64 -w0
  # cacert:
  # cakey:
configmap:
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
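Since the hang here only appears with persistent volumes enabled, a hedged set of checks for the storage side; the claim and pod names below are assumptions based on the test release name and the chart's datadir volume claim, so verify them with kubectl get pvc and kubectl get pods:

kubectl get pvc                                          # every claim should show STATUS Bound
kubectl describe pvc datadir-test-mongodb-replicaset-0   # assumed claim name; check Events for provisioning errors
kubectl logs test-mongodb-replicaset-1 -c bootstrap      # assumed pod name; peer-finder output on the stuck member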
Got the same issue (replicas=2).
Hitting this as well with 3 replicas. Doesn't happen every time but when it does it is always on the 2nd instance. Bootstrap container will sit there indefinitely.
Log from the bootstrap container
➜ ~ kubectl logs -f mongodb-1 -c bootstrap
2018/07/23 15:51:37 Peer list updated
was []
now [mongodb-0.mongodb.bravo.svc.cluster.local mongodb-1.mongodb.bravo.svc.cluster.local]
2018/07/23 15:51:37 execing: /init/on-start.sh with stdin: mongodb-0.mongodb.bravo.svc.cluster.local
mongodb-1.mongodb.bravo.svc.cluster.local
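As a later comment in this thread notes, on-start.sh writes its output (including mongod log lines) to /work-dir/log.txt inside the bootstrap container, so while the pod is stuck it is worth pulling that file as well; the pod name and namespace here are taken from the log above:

kubectl -n bravo exec mongodb-1 -c bootstrap -- cat /work-dir/log.txt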
Any resolution to this?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I ran into exactly this issue, and in the logs of the bootstrap container (/work-dir/log.txt) I found this:
2018-08-27T13:38:36.050+0000 I ACCESS [main] invalid char in key file /data/configdb/key.txt: &
So it doesn't like the "&" character in the mongodb-replicaset-keyfile secret. Removing the invalid character from the secret solved the issue. Hope it helps.
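For context, a MongoDB keyfile may only contain characters from the base64 set, which is why & (and the - mentioned further down) gets rejected. If you set auth.key yourself, one common way to generate a valid value is a sketch like this (not taken from the chart docs):

openssl rand -base64 756 > key.txt   # base64 output only, so no invalid keyfile characters
# paste the contents of key.txt as auth.key in values.yaml (or into your existing key secret)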
We had this issue as well and it was super annoying. I opened PR #7772 specifically for issue #7417, but this issue seems like it could be related or even the same.
The root cause there was a "Quorum check failed" error, as seen in the logs posted in issue #7417. I would appreciate hearing from anyone in this thread whether the PR fixes some of the problems found here.
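To see whether you are hitting the same quorum problem, a sketch for checking the replica set state from a member that did come up; the pod and container names below are assumptions for a test release (drop -c if the pod only runs one container):

kubectl exec test-mongodb-replicaset-0 -c mongodb-replicaset -- mongo --quiet --eval "rs.status()"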
I ran exactly into this issue and in logs of bootstrap container (/work-dir/log.txt) I found this:
2018-08-27T13:38:36.050+0000 I ACCESS [main] invalid char in key file /data/configdb/key.txt: &
So it doesn't like "&" char in mongodb-replicaset-keyfile secret. Removing invalid character from secret solved the issue. Hope it helps.
I also bumped into the same problem; apparently the auth.key value in my values.yaml had a "-" character, which was not acceptable.
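To check what actually landed in the keyfile secret, something along these lines should work; the secret name and data key are assumptions based on the mount path and secret mentioned above, so verify them with kubectl get secrets first:

kubectl get secret test-mongodb-replicaset-keyfile -o jsonpath='{.data.key\.txt}' | base64 -d
# look for anything outside the base64 set (A-Z, a-z, 0-9, +, /, =)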