Charts: [stable/mysql] ERROR 1045 (28000): Access denied for user 'root'@'...' (using password: YES)

Created on 12 Dec 2018 · 11 comments · Source: helm/charts

Is this a BUG REPORT or FEATURE REQUEST?:

  • Bug report

Version of Helm and Kubernetes:

  • Helm version:

    Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

  • Kubeadm version:

    kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T09:56:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

  • Kubectl version:

    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

**Which chart**:
- [stable/mysql](https://github.com/helm/charts/tree/master/stable/mysql)

**What happened**:
Once the chart is installed we are unable to log in to the mysql server, despite following the steps in the NOTES, which are:

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
logd-chembl-mysql.labinf.svc.cluster.local

To get your root password run:

MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace labinf logd-chembl-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

  1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

  2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

  3. Connect using the mysql cli, then provide your password:
    $ mysql -h logd-chembl-mysql -p
We are not attempting to access the database from outside the k8s cluster. When running `mysql -h logd-chembl-mysql -p` (and then typing the `MYSQL_ROOT_PASSWORD`) we get the following error:

ERROR 1045 (28000): Access denied for user 'root'@'10.42.1.37' (using password: YES)

**What you expected to happen**:

  • To be able to run a pod as a client and connect to the database via mysql-client

**How to reproduce it (as minimally and precisely as possible)**:

  • Install the stable/mysql chart
  • Run an Ubuntu pod that you can use as a client within the same cluster and namespace
  • Install the mysql-client via apt
  • Connect using the mysql cli, then provide the automatically generated random password.

**Anything else we need to know**:

  • As described in the configuration section, we are enabling persistence and passing an existingClaim; the claim is correctly bound and mounted in the pod (see the sketch after this list).
  • We are not passing custom root and/or user passwords, just using the randomly generated ones.
  • Our default namespace is called labinf and we set it via the related kubectl config ... command.
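
For reference, a sketch of how we install the chart against the existing claim (the release name and namespace are the ones above; the claim name chembl-mysql-pvc is illustrative, and the flags assume the chart's persistence.* values):

    # Helm v2 install against a pre-existing PVC (claim name is illustrative)
    helm install stable/mysql \
      --name logd-chembl-mysql \
      --namespace labinf \
      --set persistence.enabled=true \
      --set persistence.existingClaim=chembl-mysql-pvc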


All 11 comments

Hello Folks! We may have figured out why we couldn't access the freshly installed mysql server with the root user. As described above, we are passing an existing claim (and thus an existing PV) to the chart. However, during testing and debugging we were only purging the chart, not the contents the chart had left in the PV (that is, the data and config files under /var/lib/mysql). Perhaps a bit of an oversight on our side: we assumed those files would be overwritten each time the existing claim was passed, but that doesn't seem to be the case. This might warrant some improvements and fine tuning at the chart level.

In conclusion, starting from a clean slate we are able to connect to the mysql server from a pod used as a mysql client. Concretely, that means (within the same cluster and namespace, sketched below):

  • create a PV
  • create a PVC bound to the previously created PV
  • install the stable/mysql chart, passing the already existing PVC (existingClaim) in the values.yaml file

Hope these thoughts could help both you and other users. Cheers!
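
For anyone following along, here is a minimal sketch of that clean-slate sequence (the PV/PVC manifests and names are illustrative only; adjust storage class, size and paths to your environment):

    # 1. Create a PV and a PVC explicitly bound to it (hostPath is for illustration only)
    kubectl apply -n labinf -f - <<EOF
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: chembl-mysql-pv
    spec:
      capacity:
        storage: 8Gi
      accessModes: ["ReadWriteOnce"]
      storageClassName: ""
      hostPath:
        path: /data/chembl-mysql
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: chembl-mysql-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ""
      volumeName: chembl-mysql-pv
      resources:
        requests:
          storage: 8Gi
    EOF

    # 2. Install the chart against the freshly created (empty) claim
    helm install stable/mysql \
      --name logd-chembl-mysql \
      --namespace labinf \
      --set persistence.existingClaim=chembl-mysql-pvc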

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

Same problem here. It only shows up when Persistent Volumes are used, and it happens even on a new namespace for me.

I have the same problem as in @alston-dmello's case. A colleague suggested using a StatefulSet instead of a Deployment for the mysql pod when using a persistent volume claim with the default ReadWriteOnce access mode (so each replica has its own claim).

I dug deeper and found that one problem might be the order of provisioning, since the PVC is usually provisioned more slowly than the pod. Although the pod stays in Pending until the PVC is Bound, I've noticed that the finally provisioned pod does NOT contain the db initialization log at all (plus lots of ERRORs suggesting missing system tables), so I suspect it initialized a db under /var/lib/mysql inside the container BEFORE the storage was mounted to this path.

For the original issue: since the PV is reused, the password contained in the secret may not be the same as the actual root password. I suspect this is the same as quite a few similar issues; see #5167. For production use I really recommend providing a password instead of using one generated by the helm chart. The follow-up post strengthens my suspicion.
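
A sketch of that recommendation, assuming the chart's mysqlRootPassword value (replace the value with a password you manage yourself):

    # Pin the root password at install time so the secret and the data in a
    # reused PV cannot drift apart.
    helm install stable/mysql \
      --name logd-chembl-mysql \
      --namespace labinf \
      --set mysqlRootPassword="$MY_MANAGED_ROOT_PASSWORD"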

The other issue mentioned here, with the pod starting before the PV is mounted, sounds unlikely, or like a bug in the Kubernetes setup. If this were an issue for people in general, I'd expect a lot more bug reports about it. Either way, I suspect the two problems are not related.

The problem for me was the slowness of a PV. The liveness probe fails before mysql can come up. This causes the pod to restart, and the second time it cannot come up because the data (or password) in the PV is not in a proper state.

To resolve this issue, I just increased the initialDelaySeconds of the livenessProbe to 120. This was enough for mysql to startup.
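
A sketch of that workaround, assuming the chart exposes the probe settings as livenessProbe.* values (120 is the delay mentioned above):

    # Give mysql more time to initialize on slow storage before the first
    # liveness check fires.
    helm upgrade logd-chembl-mysql stable/mysql \
      --set livenessProbe.initialDelaySeconds=120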

@alston-dmello Thanks for the explanation. That falls into the same category as the original post, I think: problems caused by auto-generating the password secret.

> To resolve this issue, I just increased the initialDelaySeconds of the livenessProbe to 120. This was enough for mysql to startup.

I tried this fix and it works for me (with longer initialDelaySeconds/periodSeconds). When following the pod logs, I found that kubelet restarted the pod after initialDelaySeconds but before the database initialization completed (which takes less than 30s locally but about 5 minutes when using a PVC). For example, the provisioned mysql pod is never ready in this case, and the first line of the mysql pod's log is NOT `Initializing database`.

> The other issue mentioned here, with the pod starting before the PV is mounted, sounds unlikely, or like a bug in the Kubernetes setup.

I've examined the PVC provisioning and pod container provisioning timeline via `kubectl get events -o yaml` and I can confirm this is not my case (the init container was created right AFTER the PVC was provisioned successfully).
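
For reference, a sketch of the two checks described in this thread; the commands assume the release and namespace from the original post:

    # Event timeline for the namespace, oldest first, to compare PVC binding
    # with container creation times
    kubectl get events -n labinf --sort-by=.metadata.creationTimestamp

    # First lines of the mysql pod log; a healthy first start should begin
    # with "Initializing database"
    kubectl logs -n labinf deploy/logd-chembl-mysql | head -n 20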

@alston-dmello @olemarkus Given that the IO performance of the PV can affect the db initialization result, do you think it would be helpful to add documentation to the mysql chart about choosing reasonable livenessProbe.initialDelaySeconds and/or livenessProbe.periodSeconds values? For instance:

Notice: when enabling persistence via a persistent volume, slow IO on the provisioned PVC may interrupt the mysql database initialization process, because the pod is restarted if initialization cannot complete within 60 seconds (livenessProbe.initialDelaySeconds + livenessProbe.periodSeconds × livenessProbe.failureThreshold). You may need to increase livenessProbe.initialDelaySeconds to make sure kubelet does not restart the pod during db initialization.
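
As a sanity check, the 60-second figure above is consistent with probe settings of initialDelaySeconds=30, periodSeconds=10 and failureThreshold=3 (30 + 10 × 3 = 60). You can inspect the values actually applied to the running deployment like this (release name and namespace are the ones from the original post):

    # Print the liveness probe the mysql container is actually running with
    kubectl get deploy -n labinf logd-chembl-mysql \
      -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'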

Great that we figured this one out. I think documenting this is very much a good idea. A PR would be awesome

Included the doc in the ☝️ PR, based on input from @alston-dmello and my experience.
