There are problems with the current consul helm chart (not entirely the fault of the chart, but limitations due to the nature/bugs in consul) that result in failed clusters due to leader election failures. A few issues have already been reported and fixes proposed, but none of them resolve the underlying issue in a reliable manner. Therefore, I think it's misleading to have consul in the stable stream in its current condition.
See:
- https://github.com/kubernetes/charts/issues/1404
- https://github.com/kubernetes/charts/issues/1289
- https://github.com/kubernetes/charts/issues/1143
- https://github.com/kubernetes/charts/pull/1556
Update: the underlying issue related to this problem (https://github.com/hashicorp/consul/issues/1580) has been resolved. Consul version 0.9.3rc2 and up should not experience it. Maybe the chart needs to be updated to use the latest consul version.
@rajiteh maybe we can override the version in the values.yaml file.
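As a rough sketch of what that could look like (the key names here are assumptions; check the chart's own values.yaml for the actual ones):

```yaml
# Hypothetical values.yaml override -- the real key names depend on the chart
# version, so verify them against the chart's default values.yaml.
Image: "consul"
ImageTag: "0.9.3"   # or whichever release includes the fix for hashicorp/consul#1580
```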
Consul 1.0.0 has been released. This is something that we were very interested in. Using this chart with consul:1.0.0 has solved most of the issues that we were experiencing.
We run consul across a WAN and every time we did a kubernetes upgrade, we would have to delete both consul clusters & their persistent storage because when they came back up, they would have trouble electing a leader. We've just tested using the 1.0.0 release and we were able to take down a full consul cluster in k8s and bring it back up with the same persistent storage and everything worked fine across the WAN as well. Leader election took a little while, but it stabilized without any manual intervention.
I'd be interested in others testing this out as well, because it would be cool to update this chart to use the new 1.0.0 release. Moving to 1.0.0 should also help close out this issue.
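For anyone who wants to try this before the chart itself is bumped, overriding the image tag at install/upgrade time should be enough. This assumes the chart exposes an `ImageTag` value and that your release is named `my-consul`; adjust both to match your setup:

```console
# Hypothetical upgrade -- substitute your release name and the value key your
# chart version actually uses for the image tag.
helm upgrade my-consul stable/consul --set ImageTag=1.0.0
```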
This is awesome!