Charts: [stable/mariadb] stateful sets break with Helm v3

Created on 28 Nov 2019 · 14 comments · Source: helm/charts

Describe the bug
After using helm 2to3 to migrate a release to helm v3 we encountered an issue with the StatefulSets in MariaDB. The labels added to the volumeClaimTemplates include a key-value pair for heritage. In Helm v2 this resolved to "Tiller", but in Helm v3 it now resolves to "Helm". This blocks our ability to upgrade because these fields of a StatefulSet are meant to be immutable.
e.g.
https://github.com/helm/charts/blob/c5838636973a5546196db6e48ae46f99a55900c4/stable/mariadb/templates/master-statefulset.yaml#L259
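
For reference, the rendered value can be checked on a live cluster with something like this (the release name "my-release" here is only an example):

kubectl get statefulset my-release-mariadb-master -o jsonpath='{.spec.volumeClaimTemplates[0].metadata.labels.heritage}'

On a release installed with Helm v2 this prints "Tiller", while manifests rendered by Helm v3 carry "Helm", which is what makes the patch conflict.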

Version of Helm and Kubernetes:
Kubernetes 1.14.8
Helm 3.0.0

Which chart:
https://github.com/helm/charts/tree/master/stable/mariadb

What happened:
Upgrades now fail because of the heritage label in the volumeClaimTemplates.

What you expected to happen:
No release name changes occurred, so the Helm release containing the StatefulSets should upgrade without issue.

How to reproduce it (as minimally and precisely as possible):
See above. This may affect other charts that use StatefulSets.

Anything else we need to know:
I don't know of a work-around apart from migrating to a new DB.

All 14 comments

Hi @jkirkham-ratehub

After using helm 2to3 to migrate a release to helm v3

I guess you're talking about this tool: https://github.com/helm/helm-2to3

This blocks our ability to upgrade because some of these key fields in a StatefulSet are meant to be immutable.

You're 100% right! We need to find a way to support upgrading from releases that were moved from helm2 to helm3.

I guess we could use something like this as a workaround:

$ kubectl patch statefulset my-release-mariadb-master --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/heritage"}]'
$ kubectl patch statefulset my-release-mariadb-slave  --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/heritage"}]'
...
$ helm upgrade my-release ...

@carrodher @javsalgar @tompizmor what do you think?

I was about to submit another ticket when I saw this one. Here are concrete steps to reproduce:

helm2 install --name test stable/mariadb
helm3 2to3 convert test
helm3 upgrade test stable/mariadb

The last step gives the error:

Error: UPGRADE FAILED: cannot patch "test-mariadb-master" with kind StatefulSet: StatefulSet.apps "test-mariadb-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden && cannot patch "test-mariadb-slave" with kind StatefulSet: StatefulSet.apps "test-mariadb-slave" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

Thanks for sharing the exact steps to reproduce the issue @floretan
Did you try the workaround I shared before? (patching the k8s objects removing the label)

I tried the patch above, but the problem is actually the label on the statefulset's volumeClaimTemplates, not the label of the statefulset itself. I updated the patch command to match, but the operation is not permitted:

 kubectl patch statefulset test-mariadb-master --type=json -p='[{"op": "remove", "path": "/spec/volumeClaimTemplates/0/metadata/labels"}]'
The StatefulSet "test-mariadb-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

I see two options, neither of which is really ideal:

  • Delete and recreate the statefulset (see the sketch after this list). The underlying persistent volume claims are preserved, but it's still not a pleasant thing to do.
  • Trick helm 3 into setting .Release.Service to be "Tiller". I haven't found a way to do that though.
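
For the first option, a rough sketch, assuming the release from the reproduction steps above (named "test") and illustrative label selectors; I haven't verified this against the chart, and it does take the database down briefly:

# Delete only the StatefulSet objects; --cascade=false (--cascade=orphan on newer kubectl) orphans the pods and PVCs
kubectl delete statefulset test-mariadb-master test-mariadb-slave --cascade=false
# Remove the orphaned pods so the recreated StatefulSets start clean; the data stays in the PVCs, which are reused by name
kubectl delete pod -l app=mariadb,release=test
# Recreate the StatefulSets with the new heritage label
helm3 upgrade test stable/mariadb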

Hi @floretan @skaji

I tried the patch above, but the problem is actually the label on the statefulset's volumeClaimTemplates, not the label of the statefulset itself. I updated the patch command to match, but the operation is not permitted

Oh crap... You're right. You'd probably need to create new PVCs and clone the content of the old PVs (https://kubernetes.io/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/) so you don't lose the data.
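
If the cluster supports it (cloning needs a CSI driver with clone support and a recent enough Kubernetes; on 1.14 it is likely not available yet), the cloned PVC would look roughly like this, with placeholder name, size and storage class:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test-mariadb-master-0-clone
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi            # must be at least the size of the source PVC (placeholder value)
  storageClassName: standard  # must match the storage class of the source PVC (placeholder value)
  dataSource:
    kind: PersistentVolumeClaim
    name: data-test-mariadb-master-0   # existing PVC in the same namespace to clone from
EOF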

This issue also affects the redis chart.

This issue affects almost every chart in the stable repo (since they were meant to be installed with Helm 2 when they were created)

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

Do not stale

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

We have the same problem in all our clusters. Isn't there a less painful way (than cloning hundreds of PVCs) to solve this?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

Hi,

Given the stable deprecation timeline, this Bitnami-maintained Helm chart is now located at bitnami/charts. Please visit the bitnami/charts GitHub repository to create issues or PRs. In this case, if the problem persists with the latest version of bitnami/mariadb, please don't hesitate to report it in the bitnami/charts GitHub repo.

In this issue, we tried to explain the reasons and motivations behind this transition in more detail; please don't hesitate to add a comment there if you have any questions related to the migration itself.

Regards,

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
