Charts: [stable/mongodb-replicaset] Authentication not working

Created on 9 May 2019 · 11 comments · Source: helm/charts

I am trying to deploy the stable/mongodb-replicaset chart to GKE with authentication enabled, but when I try to log in with the provided credentials I get an "Authentication failed" error.

Helm Version: v2.13.1

**values.yml**:
```
replicas: 3
port: 27017

replicaSetName: rs0

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 2

auth:
  enabled: true
  adminUser: admin
  adminPassword: test123
  metricsUser: admin
  metricsPassword: test123
  key: 'M9XzBwT3WFH0OAqVDoe8YDd/xtpcaJ02SZgQQb02vAbaIj3Bs/vs/P8aM/JhKv+N
    O+QfVsex2BQSr8PJIjkqh2Sf6GARgI5QsO3H2wC3EExEvm3jh6znc3NGT9EYS1Im
    hAVttPOt/uvuubezPUXhx3qySzJMWqAD5lmbc2qVQTbPooDXImRp7EGvVN8VEN8o
    QYC/hGBorP4N9P+aUYffd9MIaGZjqXdUp9GthqZWhB8hiJkA2jDu04jKYJUrphU4
    KNipiK2p7UWLOWfqWWi7il0NoXVsGS3ro+TDzpSzeJvSPo1hOLPhBQdW1Z4hEfcH
    D4Dv/DNyQ6dN1cSg7zXuLdCs27RPSy96Ehw84A7ZQ7z/HCFRleaOOypYZIrDFFMz
    WgN+BlsNQHwIarOxUD4SJxDTCAfSrwgAW/KNLUPL6PaFCYe1u7uUNiunq7mJZfKA
    msVTg8mWybfC2UZnvg0UmM8kOyjZpQxSe/XMnCkEj6vLq9CC7aazmCkp1zfDOV+K
    +KfEJyovoeFIMojR3eRzEQ8STqs97YER9fMlvH6dOBJsRpBTi/ePcJL6SAT7Cq4/
    R9qKFLaH0cYFF8LYVy0PJYweXIiTDTRYjZwkTG3+nd0mWfvZYzAgz2ccvjf1Rw27
    etkZNUn+4ZaWa1EPnAAXXNjlJzh1QdUcbQyfn6jCO45V3M4aglFDnbMeM3pRS8fh
    aipvrlL6GrsiOCtj+YbHUqeRpU47EOdOeDgF9OHdm71GOwObdPLKvr8T7ih7FzrO
    bc+RU0n2lkPIsvb7EHLYdgwpl8bH5jRrjqKIWbGLIpoFMjDt62nSkNI6Yo4rdhDe
    qG5qpQU96btdLaKDtsvOuXSq1QkMZnCEvw7hzKYmfsI+RasbrJwBNrehf5tEZTZQ
    aGzdmAtuBTmrXkcVHb3BDHfX86fzpVL+BDwVLCiHUhSVOyELxn7PGs3o5VKWFpTs
    k9xEo/ZDd/4GKqrX8Pj/+WIelbflunIWHLkKzU1g8HgF4pME'
  # existingKeySecret:
  # existingAdminSecret:
  # existingMetricsSecret:

## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - myRegistrKeySecretName

# Specs for the Docker image for the init container that establishes the replica set
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: IfNotPresent

# Specs for the Docker image for the copyConfig init container
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: IfNotPresent

# Specs for the MongoDB image
image:
  repository: mongo
  tag: 3.6
  pullPolicy: IfNotPresent

# Additional environment variables to be set in the container
extraVars: {}
# - name: TCMALLOC_AGGRESSIVE_DECOMMIT
#   value: "true"

# Prometheus Metrics Exporter
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: "/metrics"
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}

# Annotations to be added to MongoDB pods
podAnnotations: {}

securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true

init:
  resources: {}
  timeout: 900

resources: {}
# limits:
#   cpu: 500m
#   memory: 512Mi
# requests:
#   cpu: 100m
#   memory: 256Mi

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

affinity: {}

tolerations: []

extraLabels: {}

priorityClassName: ""

persistentVolume:
  enabled: true
  ## mongodb-replicaset data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations: {}

# Annotations to be added to the service
serviceAnnotations: {}

terminationGracePeriodSeconds: 30

tls:
  # Enable or disable MongoDB TLS support
  enabled: false
  # Please generate your own TLS CA by generating it via:
  # $ openssl genrsa -out ca.key 2048
  # $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
  # After that you can base64 encode it and paste it here:
  # $ cat ca.key | base64 -w0
  # cacert:
  # cakey:

# Entries for the MongoDB config file
configmap:
  security:
    authorization: enabled
    keyFile: /keydir/key.txt

# Javascript code to execute on each replica at initContainer time
# This is the recommended way to create indexes on replicasets.
# Below is an example that creates indexes in foreground on each replica in standalone mode.
# ref: https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/
# initMongodStandalone: |+
#   db = db.getSiblingDB("mydb")
#   db.my_users.createIndex({email: 1})

# Readiness probe
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

# Liveness probe
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
```

**installation command**:
`helm install --name dev stable/mongodb-replicaset -f mongodb/values.yml`

The chart installed successfully.

**Issue**
I run the command below to test the connection:
`kubectl exec -it dev-mongodb-replicaset-0 -- mongo mydb -u admin -p test123 --authenticationDatabase admin`

but I get the error below. I am running this command from inside the GKE cluster:
```
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27017/mydb?authSource=admin&gssapiServiceName=mongodb
2019-05-09T06:58:39.678+0000 E QUERY    [thread1] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:263:13
@(connect):1:6
exception: connect failed
command terminated with exit code 1
```
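One quick check before digging deeper (a debugging sketch; the secret name below is an assumption based on the chart's `<release>-mongodb-replicaset-admin` naming convention, so `dev-mongodb-replicaset-admin` for this release): read back the credentials the chart actually stored and compare them with the ones passed to `mongo`.

```
# Decode the admin credentials the chart stored for release "dev"
# (secret name assumed from the chart's naming convention)
kubectl get secret dev-mongodb-replicaset-admin -o jsonpath='{.data.user}' | base64 -d; echo
kubectl get secret dev-mongodb-replicaset-admin -o jsonpath='{.data.password}' | base64 -d; echo
```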


All 11 comments

I'm encountering the same issue after bumping the image version; I'm trying to roll back.

I'm getting this as well, following this walkthrough.

I get the replica set started, but I can't log in. I've tried with both the existing secret and the adminUser settings:

```
$ cat mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
stringData:
  user: a1c562c908718e548c45ca46c1e11c8de7b9
  password: 15e1527a2ffcdf5782f3e7331e6c41b2f804
```

```
$ helm install stable/mongodb-replicaset --name mongo --set auth.enabled=true,auth.existingKeySecret=mongo-root,auth.existingAdminSecret=mongo-secret,persistentVolume.size=2Gi
```

```
# The creds in the secret don't work
$ kubectl exec -i mongo-mongodb-replicaset-0 -- mongo --username a1c562c908718e548c45ca46c1e11c8de7b9 --password 15e1527a2ffcdf5782f3e7331e6c41b2f804
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-05-30T18:11:04.327+0000 E QUERY    [thread1] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:263:13
@(connect):1:6
exception: connect failed
command terminated with exit code 1
```

```
# Tried base64 of user/pass
$ kubectl exec -i mongo-mongodb-replicaset-0 -- mongo --username YTFjNTYyYzkwODcxOGU1NDhjNDVjYTQ2YzFlMTFjOGRlN2I5 --password MTVlMTUyN2EyZmZjZGY1NzgyZjNlNzMzMWU2YzQxYjJmODA0
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-05-30T18:21:52.031+0000 E QUERY    [thread1] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:263:13
@(connect):1:6
exception: connect failed
command terminated with exit code 1
```
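One note on the base64 attempt above: `stringData` is only base64-encoded at rest under `.data`; the decoded values are what mongod compares against. This can be confirmed by reading the secret back:

```
# Decode what Kubernetes actually stored for the mongo-secret created above
kubectl get secret mongo-secret -o jsonpath='{.data.user}' | base64 -d; echo
kubectl get secret mongo-secret -o jsonpath='{.data.password}' | base64 -d; echo
```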

Recreating with the adminUser and adminPassword settings doesn't work either:

```
$ helm install stable/mongodb-replicaset --name mongo --set auth.enabled=true,auth.existingKeySecret=mongo-root,auth.adminUser=a1c562c908718e548c45ca46c1e11c8de7b9,auth.adminPassword=15e1527a2ffcdf5782f3e7331e6c41b2f804,persistentVolume.size=2Gi
```

```
$ kubectl exec -i mongo-mongodb-replicaset-0 -- mongo --username a1c562c908718e548c45ca46c1e11c8de7b9 --password 15e1527a2ffcdf5782f3e7331e6c41b2f804 --authenticationDatabase admin
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-05-30T18:21:52.031+0000 E QUERY    [thread1] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:263:13
@(connect):1:6
exception: connect failed
command terminated with exit code 1
```

I found versions here: https://console.cloud.google.com/storage/browser/kubernetes-charts?prefix=mongodb-replicaset
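For what it's worth, chart versions can also be listed from the CLI instead of browsing the bucket (Helm v2 syntax; assumes the stable repo is configured locally):

```
# -l lists every available version of the chart, not just the latest
helm search stable/mongodb-replicaset -l
```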

And installed version 3.5.7:

```
helm install stable/mongodb-replicaset --version 3.5.7 --name mongo --set auth.enabled=true,auth.existingKeySecret=mongo-root,auth.adminUser=a1c562c908718e548c45ca46c1e11c8de7b9,auth.adminPassword=15e1527a2ffcdf5782f3e7331e6c41b2f804,persistentVolume.size=2Gi
```

then bash'd into the machine, and got this:

```
mongodb@mongo-mongodb-replicaset-0:/etc$ cat /proc/1/cmdline | sed -e "s/\x00/ /g"; echo
mongod --config=/data/configdb/mongod.conf --dbpath=/data/db --replSet=rs0 --port=27017 --bind_ip=0.0.0.0 --auth --keyFile=/data/configdb/key.txt

mongodb@mongo-mongodb-replicaset-0:/etc$ cat /data/configdb/mongod.conf
null
```

So, for whatever reason, mongod seems to be running without a config, which is weird.

Going back to 2.3.2:

```
helm install stable/mongodb-replicaset --version 2.3.2 --name mongo --set auth.enabled=true,auth.existingKeySecret=mongo-root,auth.adminUser=a1c562c908718e548c45ca46c1e11c8de7b9,auth.adminPassword=15e1527a2ffcdf5782f3e7331e6c41b2f804,persistentVolume.size=2Gi
```

I get this:

```
root@mongo-mongodb-replicaset-0:/# cat /proc/1/cmdline | sed -e "s/\x00/ /g"; echo
mongod --config=/config/mongod.conf
root@mongo-mongodb-replicaset-0:/# cat /config/mongod.conf
net:
  bindIpAll: true
  port: 27017
replication:
  replSetName: rs0
storage:
  dbPath: /data/db
```

@steven-sheehy Hey, what do you think of this? I feel like I'm missing something here...

The chart changed from providing a mongod.conf to using command-line args, I think in 3.0.0. This allowed separation between mandatory chart-provided config and user-provided config.

@steven-sheehy ok, great! It felt like too many versions for it to be missed. Any thoughts on why the user can't log in?

@codeman-crypto Your configmap should probably look like this:

```
configmap:
  security:
    authorization: enabled
    clusterAuthMode: keyFile
```

Specifying keyFile is pointless since it's already provided via command-line args.
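A quick way to confirm which settings actually reach mongod is the same `/proc/1/cmdline` trick used earlier in this thread:

```
# Print the arguments PID 1 (mongod) was started with; /proc stores them NUL-separated
kubectl exec mongo-mongodb-replicaset-0 -- sh -c 'tr "\0" " " < /proc/1/cmdline; echo'
```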

`helm delete mongo --purge` doesn't delete PersistentVolumeClaims, which is good: it won't delete your database.

In my case, I was rerunning the chart with new auth data each time.

Helm was reusing the previous volumes, which had already been initialized. So even though I was specifying new credentials in the helm command, the chart neither noticed nor failed when trying to configure the existing database.

It looked like this in the logs:

```
2019-05-31T00:50:09.104+0000 I ACCESS   [conn69] Unauthorized: not authorized on admin to execute command { endSessions: [ { id: UUID("4572d244-fe98-47f2-9dee-13e64c7c0e21") } ], $db: "admin" }
2019-05-31T00:50:09.105+0000 I NETWORK  [conn69] end connection 127.0.0.1:54546 (4 connections now open)
2019-05-31T00:50:11.884+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:54552 #70 (5 connections now open)
2019-05-31T00:50:11.884+0000 I NETWORK  [conn70] received client metadata from 127.0.0.1:54552 conn70: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.12" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
```

I got past this error by deleting these claims:

```
helm delete mongo --purge
kubectl delete persistentvolumeclaims -l app=mongodb-replicaset,release=mongo
```
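Assuming the same labels, it's worth confirming the claims are actually gone before reinstalling:

```
# Should return nothing once the old volumes are cleaned up
kubectl get persistentvolumeclaims -l app=mongodb-replicaset,release=mongo
```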

Not sure it will help, but in my case the issue was that mongodbRootPassword was commented out in my mongodb-values.yml and was re-generated on every upgrade (while the database kept the old/first one), hence the login issues. The quick solution was to set the root password manually and persist it in my mongodb-values.yml file so that it is no longer random.
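A minimal sketch of that fix for this chart (release name made up; `auth.adminPassword` is this chart's counterpart to the `mongodbRootPassword` value mentioned above):

```
# Pin the password explicitly so `helm upgrade` cannot regenerate it;
# the same value can equally be persisted under auth.adminPassword in values.yml
helm upgrade mongo stable/mongodb-replicaset --set auth.enabled=true,auth.adminPassword=my-fixed-password
```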

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
