Charts: BUG: rabbitmq-ha chart fails to deploy on K8S 1.9.4 due to ConfigMaps now being mounted RO

Created on 15 Mar 2018 · 3 comments · Source: helm/charts

BUG REPORT
Version of Helm and Kubernetes:
Helm v2.8.1 / K8S v1.9.4-gke.1

Which chart:
rabbitmq-ha

What happened:
First pod fails to start.

➜ k logs jx-staging-vrs-mq-0                 
sed: can't create temp file '/etc/rabbitmq/rabbitmq.confXXXXXX': Read-only file system

What you expected to happen:
/etc/rabbitmq/rabbitmq.conf is expected to be mounted with file permissions 0644, as specified in the chart's YAML.

How to reproduce it (as minimally and precisely as possible):
helm install to any default K8S 1.9.4 cluster.

Anything else we need to know:
As of 1.9.4, ConfigMaps and Secrets are mounted read-only. See the following for details:

https://github.com/kubernetes/kubernetes/pull/58720

All 3 comments

I've worked around this issue by using a busybox initContainer on the StatefulSet with a command to copy the files from the ConfigMap to an emptyDir volume. I'm not sure if this is the right way to go, but I'd be happy to submit a PR.
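The workaround above might look roughly like the following StatefulSet fragment. This is a sketch, not the chart's actual template; the volume names, mount paths, and the ConfigMap name are illustrative assumptions:

```yaml
# Sketch: copy the read-only ConfigMap contents into a writable emptyDir
# before the main container starts, so rabbitmq can edit its own config.
# All names below are hypothetical, not the chart's real values.
initContainers:
  - name: copy-rabbitmq-config
    image: busybox
    command: ['sh', '-c', 'cp /configmap/* /etc/rabbitmq/']
    volumeMounts:
      - name: configmap          # the RO ConfigMap mount
        mountPath: /configmap
      - name: config             # the writable emptyDir
        mountPath: /etc/rabbitmq
containers:
  - name: rabbitmq
    volumeMounts:
      - name: config             # main container sees the writable copy
        mountPath: /etc/rabbitmq
volumes:
  - name: configmap
    configMap:
      name: rabbitmq-ha-config   # hypothetical ConfigMap name
  - name: config
    emptyDir: {}
```

The key idea is that the emptyDir volume is writable and pod-local, so sed can create its temp files there while the ConfigMap itself stays read-only.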

Think this is the same as #4261

For those interested, a PR by @svmaris is in progress, thanks to him! Here is the reference: #4169.

It worked for me by copying its changes. As mentioned by @etiennetremel, do not forget to run:

$ export ERLANGCOOKIE=$(kubectl get secrets -n <NAMESPACE> <HELM_RELEASE_NAME>-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
$ helm upgrade <HELM_RELEASE_NAME> \
    --set rabbitmqErlangCookie=$ERLANGCOOKIE \
    stable/rabbitmq-ha

Otherwise you will get this error:
** Connection attempt from disallowed node 'rabbitmqcli61@rabbitmq-ha-rabbitmq-ha-0.rabbitmq-ha-rabbitmq-ha.default.svc.cluster.local' **

If you have already done a helm delete <release> --purge to try to reset the RabbitMQ cluster and you are locked out, or you no longer have access to your previous ERLANGCOOKIE value, one solution is to run helm delete <release> --purge again and then delete all of the previous RabbitMQ PVCs, e.g. kubectl delete pvc data-broker-rabbitmq-ha-0 data-broker-rabbitmq-ha-1 .... The goal is to release the volumes so that new ones are provisioned; for this to work, the reclaim policy of your PVs must be set to "Delete". As a result, helm install will work again, since a new ERLANGCOOKIE will be generated and copied to the new volumes.

This solution is not appropriate if you are in production and have data in these volumes that you do not want to lose. In that case, another solution would be to mount the PV in a pod, read the cookie stored on it, and perform the upgrade as mentioned previously.
