Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
GKE Kubernetes
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.1-gke.1", GitCommit:"aba494e68a76583d2d7d1b9c97e4a97a19c3a920", GitTreeState:"clean", BuildDate:"2017-10-27T23:54:39Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/rabbitmq
What happened:
A simple test: this Helm chart works well on GKE without a PVC, but fails with one.
Here is the error message:
Readiness probe failed: Status of node rabbit@localhost
Error: unable to connect to node rabbit@localhost: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@localhost]

rabbit@localhost:
  * connected to epmd (port 4369) on localhost
  * epmd reports: node 'rabbit' not running at all
                  other nodes on localhost: ['rabbitmq-cli-86']
  * suggestion: start the node

current node details:
- node name: 'rabbitmq-cli-86@dev-rabbitmq-rabbitmq-689c6f57cb-5sjtb'
- home dir: /opt/bitnami/rabbitmq/.rabbitmq
- cookie hash: LczRqz4DmQrqdzWYMchcog==
What you expected to happen:
The chart should work with a PVC.
How to reproduce it (as minimally and precisely as possible):
I created a claim and a storage class before doing this.
Here are the steps to reproduce it:
helm install --name dev-rabbitmq -f rabbitmq-dev-values.yaml stable/rabbitmq

where rabbitmq-dev-values.yaml contains:

persistence:
  enabled: true
  existingClaim: pvc-rabbitmq
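For completeness, the referenced claim has to exist before the install. A minimal sketch of such a PVC follows; the storageClassName and size here are placeholders, not the reporter's actual values, so adjust them to your cluster:

```yaml
# Hypothetical PersistentVolumeClaim matching the existingClaim value above.
# storageClassName "standard" and the 10Gi request are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rabbitmq
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```

Apply it with kubectl apply -f before running the helm install command above.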
Anything else we need to know:
Same thing here. Any updates on this issue?
I have the same problem on AWS (cluster was created via kops).
helm version:
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
kubectl version:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Error message:
Readiness probe failed: Status of node rabbit@localhost
Error: unable to connect to node rabbit@localhost: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@localhost]

rabbit@localhost:
  * connected to epmd (port 4369) on localhost
  * epmd reports: node 'rabbit' not running at all
                  other nodes on localhost: ['rabbitmq-cli-95']
  * suggestion: start the node

current node details:
- node name: 'rabbitmq-cli-95@rabbitmq-server-rabbitmq-7b6cd75458-xl4qc'
- home dir: /opt/bitnami/rabbitmq/.rabbitmq
- cookie hash: REDACTED==
How was chart installed:
helm install --name rabbitmq-server --set rabbitmqUsername=admin,rabbitmqPassword=REDACTED stable/rabbitmq
Same for me.
Any suggestions?
Hi,
I hit this one today, and I think it is related to the rabbitmqDiskFreeLimit flag. By default it is set to 8G, the same size as the PVC created for persistence. In my case I'm using an iSCSI provisioner, so after the filesystem has been created in the LUN, the remaining space is less than 8G and RabbitMQ refuses to start.
Installing with the following command worked for me:
$ helm install --name rabbit --set persistence.size=10Gi stable/rabbitmq
Oddly enough, setting rabbitmqDiskFreeLimit="1GB", for example, does not work.
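One way to check whether the free-disk limit is the culprit is to compare RabbitMQ's configured limit with the space actually free on the volume. This is a sketch only: the pod name below is copied from the probe error earlier in this issue and is hypothetical for anyone else, so substitute your own pod name:

```shell
# rabbitmqctl status reports both disk_free_limit and disk_free;
# if disk_free is below disk_free_limit, the disk alarm trips and the node blocks.
kubectl exec dev-rabbitmq-rabbitmq-689c6f57cb-5sjtb -- rabbitmqctl status | grep disk_free

# Compare with the space actually available on the data volume:
kubectl exec dev-rabbitmq-rabbitmq-689c6f57cb-5sjtb -- df -h /opt/bitnami/rabbitmq
```

This is consistent with why persistence.size=10Gi helps: it leaves the default ~8G limit comfortably below the usable space on the freshly formatted volume.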
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Same issue +1
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close