Charts: mariadb install fails and pod enters CrashLoopBackOff

Created on 23 Jan 2017 · 9 comments · Source: helm/charts

I tried to install the latest mariadb chart.

I am on minikube v0.14.0, which runs k8s v1.5.1.

The chart deploys, but the pod ends up in CrashLoopBackOff:

$ kubectl get pods
NAME                                 READY     STATUS             RESTARTS   AGE
juiced-dog-mariadb-700647841-174wc   0/1       CrashLoopBackOff   33         1h

The output of kubectl describe shows that the probes don't succeed and the pod keeps getting restarted:

 1h     2m      127 {kubelet minikube}  spec.containers{juiced-dog-mariadb} Warning     Unhealthy   Readiness probe failed: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/opt/bitnami/mariadb/tmp/mysql.sock' exists!

  1h    2m  36  {kubelet minikube}  spec.containers{juiced-dog-mariadb} Warning Unhealthy   Liveness probe failed: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/opt/bitnami/mariadb/tmp/mysql.sock' exists!

All 9 comments

Can you check if there's anything in the container logs?

kubectl logs -p juiced-dog-mariadb-700647841-174wc
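
If the container stays up long enough between restarts, it can also help to run the same check the probes appear to use by hand — a sketch based on the mysqladmin error above, assuming mysqladmin is on the container's PATH (the probe output suggests it is), with the pod name taken from kubectl get pods:

# Run the probe's check manually inside the container; if the socket is missing,
# this fails exactly like the probe output above
kubectl exec juiced-dog-mariadb-700647841-174wc -- mysqladmin ping

# Check whether mysqld ever created the socket the error complains about
kubectl exec juiced-dog-mariadb-700647841-174wc -- ls -l /opt/bitnami/mariadb/tmp/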

It looks like this issue is the same as https://github.com/kubernetes/charts/issues/228, which on Minikube strangely appears to be caused by a chown line in the entrypoint that has since been removed (https://github.com/bitnami/bitnami-docker-mariadb/commit/9c858dfd2f55b86896f297c6f68a3d5b285a2a46). https://github.com/kubernetes/charts/pull/427 includes the fix for Minikube at least, or in the meantime you can use helm install stable/mariadb --set image=bitnami/mariadb:10.1.21-r0.
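
After installing with the image override above, a quick way to confirm which image the pod actually ended up running (the jsonpath simply prints every container image in the namespace; nothing here is specific to this chart):

# List the images of all containers in the current namespace
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'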

OK, using the new image works.

Not sure how changes to the chart will trickle down when it is used as a dependency, e.g. joomla depends on mariadb, so joomla fails with the 0.5.6 mariadb chart.

thanks for confirming @sebgoa - you're right, once #427 is merged, the dependent charts will also need to be updated to pull in the new version.
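
In the meantime, the same image override can usually be pushed down into the subchart, since Helm nests a dependency's values under its chart name — a sketch, assuming joomla pulls in mariadb as a standard dependency and exposes its values under the mariadb key:

# Override the subchart's image from the parent chart's install
helm install stable/joomla --set mariadb.image=bitnami/mariadb:10.1.21-r0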

@prydonius #427 is merged. I made a PR to update one of the charts (#619), then saw your note, so I'm going to update that PR to bump them all.

@prydonius Hi, I'm running into the same connection error when running the JasperReports KubeApp chart on Google Container Engine. It's using the MariaDB 10.1.21-r0 image you suggested earlier, but I'm still getting the error when I do "kubectl describe <pod>":

mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/opt/bitnami/mariadb/tmp/mysql.sock' exists!

whereas the MariaDB server seems to be running fine when I do "kubectl logs <pod>":

2017-04-19 19:48:34 139756416919424 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.1.21-MariaDB' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution

Any ideas what I'm doing wrong? I'm not sure what extra details I should provide...

@rbrecheisen are the healthchecks passing for the MariaDB pod? Note that kubectl describe may show warnings about previous healthchecks failing (whilst MariaDB is starting up), even though eventually the healthchecks should pass.
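
A quick way to see whether the check is passing right now, rather than reading old events, is to look at the pod's READY column or its container status directly (the pod name here is illustrative):

kubectl get pods    # READY should reach 1/1 once MariaDB finishes starting
kubectl get pod <mariadb-pod> -o jsonpath='{.status.containerStatuses[0].ready}'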

@prydonius Hmmm, good question and thanks for the quick reply. I'll check and let you know ASAP. As far as I can remember from the logs, MariaDB does pass the health check. The DB server starts as it should; however, at some point it tries to connect to the server using mysqladmin and fails with the above-mentioned error. The JasperReports pod does _not_ pass the health check because it fails its readiness probe:

Get http://172.17.0.5:8080/jasperserver/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
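
For reference, that endpoint can also be checked by hand from a local machine by port-forwarding to the pod and hitting the same URL the probe uses (the pod name is illustrative; the port and path are taken from the probe error above):

# Forward local port 8080 to the pod, then request the probe's URL
kubectl port-forward <jasperreports-pod> 8080:8080
curl -v http://localhost:8080/jasperserver/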

Closing due to inactivity, feel free to re-open if this is still an issue.
