Charts: [stable/mongodb] Authentication failed with root password

Created on 10 Dec 2019 · 11 comments · Source: helm/charts

I have installed MongoDB without a replica set, and set a root password as well as a custom user:

```yaml
## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
mongodbRootPassword: admin123

## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
mongodbUsername: my-user
mongodbPassword: my-password
mongodbDatabase: my-database
```

I tried authentication for the root user and it failed:

```
use admin
switched to db admin
db.auth('root','admin123')
Error: Authentication failed.
0
```

Authentication worked for the custom user (my-user):

```
db.auth('my-user','my-password')
1
```

Troubleshooting: the secret is created with the values set in values.yaml:

```
kubectl get secret --namespace default my-release-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode
admin123
```

The MongoDB pod is in Running state:

```
[mongodb]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
my-release-mongodb-6b7c6d989b-fdbvp   1/1     Running   0          79m
```

The MongoDB NodePort service is running:

```
[mongodb]# kubectl get svc | grep mongo
my-release-mongodb                  NodePort       10.233.5.204    <none>                                                27017:30000/TCP
```
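
For reference, reaching the database through that NodePort from outside the cluster might look like this (a sketch, not from the original report; `<node-ip>` is a placeholder for any worker node's address, and the credentials are the ones set in values.yaml above):

```bash
# Connect via the NodePort (30000 maps to MongoDB's 27017 per the service above)
mongo --host <node-ip> --port 30000 \
  --authenticationDatabase my-database -u my-user -p my-password my-database
```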

Most helpful comment

The issue is fixed. I was not cleaning up the persistent volume mount path on the worker node; that's why the credentials were not working.

All 11 comments

Hi, @modassarrana.

I'm not able to reproduce your issue. I'm using this values.yaml file:

```
$ cat values.yaml
## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
mongodbRootPassword: admin123

## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
mongodbUsername: user
mongodbPassword: password123
mongodbDatabase: database
```

Then I install the chart with the following command:

```
$ helm install stable/mongodb --namespace amoreno-mng --name mongo -f values.yaml
```

Run a client as instructed in the NOTES.txt file:

```
$ kubectl run --namespace amoreno-mng mongo-mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongo-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
If you don't see a command prompt, try pressing enter.
> use admin
switched to db admin
> db.auth('root', 'admin123')
1
```

Can you share the full command you are using to install the chart and the options you are changing from the defaults?

Regards,
Alejandro

Hello,

Do you have a persistent volume attached? If so, please remove it and recreate the cluster. I had the same issue: an old password remained in that storage, so when the DB started it loaded the password from storage instead of the new one I provided in the configuration file.
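
In practice, a full cleanup before reinstalling might look like this (a sketch; `my-release` is the release name used later in this thread, and the PVC name `my-release-mongodb` follows the chart's usual `<release>-mongodb` naming, which is an assumption):

```bash
# Helm 2 syntax, matching the commands used in this thread
helm delete --purge my-release
kubectl delete pvc my-release-mongodb   # assumed PVC name: <release>-mongodb
kubectl delete pv pv-vol                # the manually created PV shown below
```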

Yes, I have a persistent volume attached. I created a local storage class and a persistent volume before installing the MongoDB chart. As you mentioned, that could be the issue, so I uninstalled the chart, deleted the PV, re-created the PV, and installed again. It's the same issue. Please find below the persistent volume and storage class that I am using:

pv.yaml:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-vol
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/bitnami/mongodb"
```

sc.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Persistence is enabled in values.yaml:

```yaml
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi
```

I even set the storage class to local in values.yaml:

```yaml
  mountPath: /bitnami/mongodb
  storageClass: "local"
```
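
Since this is a hostPath volume, the data lives on whichever node the pod is scheduled to. Finding that node, so its /bitnami/mongodb path can be inspected or cleaned, might look like this (a sketch; the `app=mongodb` label is an assumption about the chart's pod labels):

```bash
# Identify the node running the MongoDB pod; the hostPath data lives there
kubectl get pod -l app=mongodb -o wide
```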

Hi @alemorcuq,

Command to install the MongoDB chart:

```
helm install --name my-release -f values.yaml stable/mongodb
```

The values I have changed from the defaults are:

```yaml
volumePermissions:
  enabled: true
usePassword: true
# existingSecret: name-of-existing-secret

## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
mongodbRootPassword: admin123

## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
mongodbUsername: my-user
mongodbPassword: my-password
mongodbDatabase: my-database

service:
  ## Specify an explicit service name.
  # name: svc-mongo
  ## Provide any additional annotations which may be required.
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  annotations: {}
  type: NodePort
  # clusterIP: None
  port: 27017

  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  nodePort: 30000

persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  storageClass: "local"
```

I have shared the persistent volume and storage class that I created in my earlier comment as well.

MongoDB root password:

```
[root@svpod mongodb]# export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
[root@svpod mongodb]# echo $MONGODB_ROOT_PASSWORD
admin123
```

Command to test connectivity with the root user:

```
[root@svpod mongodb]# kubectl run --namespace default my-release-mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host my-release-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
MongoDB shell version v4.0.13
connecting to: mongodb://my-release-mongodb:27017/admin?authSource=admin&gssapiServiceName=mongodb
2019-12-12T02:35:31.410+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:344:17
@(connect):2:6
exception: connect failed
pod "my-release-mongodb-client" deleted
pod default/my-release-mongodb-client terminated (Error)
```

As I have mentioned, it works fine with the custom user (my-user / my-password).
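
For comparison, the working check for the custom user would presumably be the same client command pointed at the custom credentials (a sketch; in the Bitnami chart the custom user authenticates against its own database, here my-database):

```bash
kubectl run --namespace default my-release-mongodb-client --rm --tty -i --restart='Never' \
  --image bitnami/mongodb --command -- \
  mongo my-database --host my-release-mongodb \
  --authenticationDatabase my-database -u my-user -p my-password
```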

Please find my configuration file attached below.

  1. Remove volumes
  2. Use my config values (values-production.yml)
  3. Log in with user root -> password dan -> authentication database admin
  4. Please provide feedback

```yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

image:
  ## Bitnami MongoDB registry
  ##
  registry: docker.io
  ## Bitnami MongoDB image name
  ##
  repository: bitnami/mongodb
  ## Bitnami MongoDB image tag
  ## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
  ##
  tag: 4.2.2-ol-7-r2
  ## Specify a imagePullPolicy
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## It turns on Bitnami debugging in minideb-extras-base
  ## ref:  https://github.com/bitnami/minideb-extras-base
  debug: false

## String to partially override mongodb.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override mongodb.fullname template
##
# fullnameOverride:

# Add custom extra environment variables to all the MongoDB containers
# extraEnvVars:

## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume mountpoint to runAsUser:fsGroup values from the securityContext section.
##
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: stretch
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  resources: {}

## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
#
usePassword: true
# existingSecret: name-of-existing-secret

## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
mongodbRootPassword: dan

## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
mongodbUsername: cristi
mongodbPassword: test
mongodbDatabase: test

## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
mongodbEnableIPv6: false

## Whether enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
mongodbDirectoryPerDB: true

## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false

## MongoDB additional command line flags
##
## Can be used to specify command line flags, for example:
##
## mongodbExtraFlags:
##  - "--wiredTigerCacheSizeGB=2"
mongodbExtraFlags: []

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Kubernetes Cluster Domain
clusterDomain: cluster.local

## Kubernetes service type
service:
  ## Specify an explicit service name.
  # name: svc-mongo
  ## Provide any additional annotations which may be required.
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  annotations: {}
  type: ClusterIP
  # clusterIP: None
  port: 27017

  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:
  ## Specify the externalIP value ClusterIP service type.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
  # externalIPs: []
  ## Specify the loadBalancerIP value for LoadBalancer service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
  ##
  # loadBalancerIP:
  ## Specify the loadBalancerSourceRanges value for LoadBalancer service types.
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges: []

## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
#
replicaSet:
  ## Whether to create a MongoDB replica set for high availability or not
  enabled: true
  useHostnames: true

  ## Name of the replica set
  ##
  name: rs0

  ## Key used for replica set authentication
  ##
  # key: key

  ## Number of replicas per each node type
  ##
  replicas:
    secondary: 1
    arbiter: 1

  ## Pod Disruption Budget
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
  pdb:
    enabled: true
    minAvailable:
      secondary: 1
      arbiter: 1
    # maxUnavailable:
    # secondary: 1
    # arbiter: 1

# Annotations to be added to the deployment or statefulsets
annotations: {}

# Additional labels to apply to the deployment or statefulsets
labels: {}

# Annotations to be added to MongoDB pods
podAnnotations: {}

# Additional pod labels to apply
podLabels: {}

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    cpu: 500m
    memory: 2048Mi
# requests:
#   cpu: 100m
#   memory: 256Mi

# Define separate resources per arbiter, which are less than primary or secondary
# used only when replica set is enabled
resourcesArbiter:
  limits:
    cpu: 500m
    memory: 512Mi
# requests:
#   cpu: 100m
#   memory: 256Mi

## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Define separate affinity for arbiter pod
affinityArbiter: {}

## Tolerations
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

## updateStrategy for MongoDB Primary, Secondary and Arbitrer statefulsets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ##
  # existingClaim:

  ## The path the volume will be mounted at, useful when using different
  ## MongoDB images.
  ##
  mountPath: /bitnami/mongodb

  ## The subdirectory of the volume to mount to, useful in dev environments
  ## and one PV for multiple services.
  ##
  subPath: ""

  ## mongodb data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

## Configure the ingress resource that allows you to access the
## MongoDB installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  annotations:
  #  kubernetes.io/ingress.class: nginx

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: mongodb.local
      path: /

  ## The tls configuration for the ingress
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  tls:
    - hosts:
        - mongodb.local
      secretName: mongodb.local-tls

  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  # - name: airflow.local-tls
  #   key:
  #   certificate:

## Configure the options for init containers to be run before the main app containers
## are started. All init containers are run sequentially and must exit without errors
## for the next one to be started.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
# extraInitContainers: |
#   - name: do-something
#     image: busybox
#     command: ['do', 'something']

## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

# Define custom config map with init scripts
initConfigMap: {}
#  name: "init-config-map"

## Entries for the MongoDB config file. For documentation of all options, see:
##   http://docs.mongodb.org/manual/reference/configuration-options/
##
configmap:
#  # where and how to store data.
#  storage:
#    dbPath: /bitnami/mongodb/data/db
#    journal:
#      enabled: true
#    directoryPerDB: false
#  # where to write logging data.
#  systemLog:
#    destination: file
#    quiet: false
#    logAppend: true
#    logRotate: reopen
#    path: /opt/bitnami/mongodb/logs/mongodb.log
#    verbosity: 0
#  # network interfaces
#  net:
#    port: 27017
#    unixDomainSocket:
#      enabled: true
#      pathPrefix: /opt/bitnami/mongodb/tmp
#    ipv6: false
#    bindIpAll: true
#  # replica set options
#  #replication:
#    #replSetName: replicaset
#    #enableMajorityReadConcern: true
#  # process management options
#  processManagement:
#     fork: false
#     pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
#  # set parameter options
#  setParameter:
#     enableLocalhostAuthBypass: true
#  # security options
#  security:
#    authorization: disabled
#    #keyFile: /opt/bitnami/mongodb/conf/keyfile

## Prometheus Exporter / Metrics
##
metrics:
  enabled: true

  image:
    registry: docker.io
    repository: bitnami/mongodb-exporter
    tag: 0.10.0-debian-9-r49
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName

  ## String with extra arguments to the metrics exporter
  ## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
  extraArgs: ""

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  # resources: {}

  ## Metrics exporter liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  livenessProbe:
    enabled: true
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  ## Metrics exporter pod Annotation
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9216"

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    enabled: false

    ## Specify a namespace if needed
    # namespace: monitoring

    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    additionalLabels: {}

    ## Specify Metric Relabellings to add to the scrape endpoint
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    # relabellings:

    alerting:
      ## Define individual alerting rules as required
      ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
      ##      https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
      rules: {}

      ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Prometheus Rules to work with
      ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
      additionalLabels: {}
```

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.

Was your issue fixed, @modassarrana?

The issue is fixed. I was not cleaning up the persistent volume mount path on the worker node; that's why the credentials were not working.
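
With the hostPath PV used in this thread, that cleanup would amount to wiping the data directory on the worker node before reinstalling (a sketch; the path is the hostPath from the pv-vol definition above):

```bash
# On the worker node backing the hostPath PV: remove the stale MongoDB data
# so the chart re-initializes users (and passwords) on the next install.
sudo rm -rf /bitnami/mongodb/*
```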

@modassarrana Yes, I had the same issue, but after deleting the PV it's working fine:

```
kubectl delete pvc --all
```
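
Note that deleting PVCs does not always remove the underlying data: a manually created PV usually has a Retain reclaim policy, so the PV and its hostPath contents may need to be cleaned up separately (a sketch, using the resource names from this thread):

```bash
kubectl delete pvc --all     # removes the claims in the current namespace
kubectl delete pv pv-vol     # a retained PV must be deleted explicitly
```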
