Helm: no matches for kind "Deployment" in version "apps/v1beta1"

Created on 12 Dec 2019 · 53 Comments · Source: helm/helm

Output of helm version:

$ helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.2", GitCommit:"8dce272473e5f2a7bf58ce79bb5c3691db54c96b", GitTreeState:"dirty"}

Output of kubectl version:

$ ./kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
minikube


Over the past few months there have been numerous issues opened for this very problem[1] and in the end they were mostly closed as "working as designed" / "this is a k8s problem". When looking at the k8s issues linked, k8s also seems to be "working as designed".

[1]https://github.com/helm/helm/issues/6583
https://github.com/helm/helm/issues/6969


I've been digging through the helm code the past couple days to better understand the problem / try to hack a solution together. My understanding of the problem is (simplified):

  1. Install k8s 1.15
  2. helm install a chart w/ deployment @ apps/v1beta1
  3. helm stores these templates in a config map for later operations
  4. upgrade k8s to 1.16
  5. k8s automagically "upgrades" the deployment to apps/v1
  6. Update chart to use apps/v1
  7. helm upgrade

The helm upgrade fails as it attempts to create a diff patch from steps 2 to 4. The problem is that helm doesn't know that its source is no longer valid for the current version of k8s.

There have been a number of solutions proposed, but I wonder if there is a k8s API that helm could call to "upgrade" the API in the same manner that happens when moving from 1.15 -> 1.16? Perhaps there is a way to do that when calling BuildUnstructured ? I'm game for any and all suggestions.

Another thought... Perhaps have a new helm flag that would force update based on the current template values (PUT rather than PATCH). This assumes that the new template has a 'valid' api version specified.

A few other comments:

  • Right now I'm talking about deployments, but I'm fairly certain this is a common problem across any number of other apis.
  • Destructive operations are a no-go due to use of statefulsets.
  • This might be a crappy but workable workaround[1]?

[1] https://github.com/helm/helm/issues/6969#issuecomment-565043434


All 53 comments

Perhaps have a new helm flag that would force update based off the current template values (put rather than patch).

helm upgrade --force performs a PUT operation, so that's already been implemented. However, since the two schemas are different internally, a PUT operation will not work. The only operation we've discovered so far is a DELETE and a CREATE. #7082 demonstrated this option does not work in this case.

If you can find an alternative solution that allows one to upgrade apiVersions from Kubernetes' point of view, we'd love to hear about it.

helm upgrade --force performs a PUT operation, so that's already been implemented.

It looks like --force under the covers does a delete followed by a create?[1] Also, it first tries to build a patch, and that won't work in this case as we're unable to load the source due to the api version.

If you can find an alternative solution that allows one to upgrade apiVersions from Kubernetes' point of view, we'd love to hear about it.

I did a bit of hacking yesterday and sort of came up with a proof of concept[2] that seems to work?

PTAL to see if this approach has any merits. Obviously something like this would need to be gated by some sort of flag, as doing this has a number of repercussions (you can't roll back to a prior release, it will clobber any manifest changes that were made outside of helm, ... and I'm sure the list continues).

What I think is interesting: after doing an upgrade with my patched tiller, a helm get of the release where my deployment api was apps/v1beta1 now shows apps/v1.

[1] https://github.com/helm/helm/blob/release-2.15/pkg/kube/client.go#L738
[2] https://github.com/curtisr7/helm/commit/139544c1e3208f9e42114a43098985b1c29f274f

It looks --force under the covers does a delete followed by create?

For Helm 2, but not for Helm 3. See https://github.com/helm/helm/issues/7082#issuecomment-559558318

For Helm 2, but not for Helm 3. See #7082 (comment)

+1

I'd like to see about doing something in the 2.16.x branch?

We're no longer accepting features for Helm 2. Sorry.

I don't think this is a feature, it's a bug.

Putting the bug vs feature discussion aside, is there anything else that can be done? I have lots of customers in production that will run into this issue. Net is that any release that refers to an API that gets removed from k8s will not be able to upgrade (so long as k8s is upgraded to the release where the API is removed).

Delete/re-install the release isn't an answer, so I need to come up with something here. Is it reasonable to reach into the config map that stores the source manifest files and "upgrade" them in place?

is there anything else that can be done?

As mentioned before, if you can find a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it. We are currently unaware of an external endpoint available for third party tools like Helm to "convert" an object from one apiVersion to the next. The only option we're aware of from Kubernetes' API is to delete and create the object.

If you can find a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it

I guess I don't follow...? When k8s is upgraded from 1.15 --> 1.16, the existing apps/v1beta1 apis are somehow upgraded to apps/v1? With the PR I linked to above, I was able to update these auto-upgraded apis?

Yeah, I would say this is a Helm problem - it should be able to see that you're on Kubernetes 1.16, where e.g. apps/v1beta1 is removed, and upgrade the release information in the secret to use apps/v1 instead (which Kubernetes automatically has done to the actual Deployment). Helm is just choking because the Release in the secret still contains apps/v1beta1.

It's unfortunate that the secret isn't editable manually. It seems to be base-64 encoded protobuf data instead of something more user-friendly.

We are currently unaware of an external endpoint available for third party tools like Helm to "convert" an object from one apiVersion to the next

Not an api endpoint, but kubectl convert does what we need? (I assume you're already aware of this)

... though it looks like it's moving to a standalone binary - https://github.com/kubernetes/kubectl/issues/725

kubectl convert converts local files from one version to the next. It doesn't interact with resources live in the cluster.

https://github.com/kubernetes/kubernetes/blob/5cb1ec5fea6c4cafee6b8de3d09ca65361063451/pkg/kubectl/cmd/convert/convert.go#L41-L49

For anyone else stumbling upon this, we hit a similar situation where charts were not updated before a new version of k8s hit. We also did not want to delete the release (ie: actually remove running/functioning resources). Our solution was to:

  1. Remove all old configmaps for old revisions (effectively removing all history it ever existed from helm's perspective)
  2. Run helm install --replace --no-hooks ... once to get the release to a FAILED state (it fails because it sees resources that already exist)
  3. Run step 2 a second time to get a good deployment
  4. Remove all configmaps again (ie: revision 1) except the last good revision
  5. Run helm upgrade --install ... as usual

Alternatively, if you want to retain the same sequencing on the revision numbers do something to get a revision to fail, then start at step 3.

All of this of course assumes the chart(s) have actually been updated to use correct apiVersion etc and would legitimately install if starting from scratch.

kubectl convert converts files from one version to the next. It doesn't interact with resources live in the cluster.

Yep, I guess my point is that kubectl convert converts the apis to the new level.

The net of this problem is that helm stores release manifests and treats those as the "source" when upgrading to a new release. Normally this isn't a problem, except in cases where the stored release manifests are invalidated because underlying k8s has been upgraded and the running resources were automatically bumped to the newer level. This is the problem.

This morning I hacked some code together that reaches into the existing config maps and changes apps/v1beta2 --> apps/v1, and then I was able to run helm upgrade without jumping through other hoops. I'm going to give this approach some more thought as it seemed the least painful.
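A minimal sketch of the substitution itself, on sample text rather than real release data (the chart and file names are hypothetical, and the decode/re-encode steps around it are omitted):

```shell
# Hypothetical decoded release data containing a removed apiVersion.
# Real data would be the full decoded release payload from the configmap.
data='# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment'

# The rewrite: swap the removed apiVersion for its replacement.
fixed=$(printf '%s\n' "$data" | sed 's#apiVersion: apps/v1beta2#apiVersion: apps/v1#g')
printf '%s\n' "$fixed"
```

The `#` sed delimiter avoids having to escape the slashes inside the apiVersion strings.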

@travisghansen ha yes, this will also do it. The key is that you're getting rid of the "source" release that has the removed api.

@curtisr7 yeah, we hit some really bizarre behavior with this too honestly. For example, even without deleting any of the old revisions we were able to force through a helm install --replace ... such that the most recent revision was in a 'good' (deployed) state. I was shocked when a subsequent helm upgrade --install ... was then failing with the same error. I have no idea what logic is getting used to find the 'source' revision, but for some reason it was skipping those.

Updating all old revisions (configmaps) seems potentially error prone, but certainly doable if retaining old revisions is critical.

I know there are several scripts floating around to remove old configmaps and dump them to the filesystem. I can share what I used if anyone cares to have it.

Regarding the patching of helm deployment configmaps in helm v2 I created a shellscript for that (https://gist.github.com/Arabus/40ed5189e81fb10fc8a93f1f568aca65).

It's not perfect: you still need all kinds of tools installed, need to check out the protobuf and helm repos, and in the end it switches Helm's deployment information to an 'incorrect state', i.e. the configmap is no longer consistent because the chart version does not match the stored content.

It effectively solves the problem though...

thanks @Arabus.

@curtisr7 does the shell script provided above solve your issue? It appears that it will de-serialize the release object in-place, modify the rendered templates then save it. With some tweaking, that should work as a solution to "migrate" objects that were modified on the backend when you upgraded from 1.15 to 1.16.

@bacongobbler - I haven't tried the shell script. I ended up prototyping (and will go forward with) a go based command that calls helm via vendored dependency to essentially do the same thing.

@bacongobbler net of what I did is the following:

  • For a given release, find lastDeployed and lastReleased (using driver.ConfigMaps)
  • Parse manifests for lastDeployed. "Migrate" apiVersion for given kind/apiVersion to non-removed apiVersion
  • Mark lastDeployed as superseded
  • Create new (deployed) release with "migrated" manifests and version that is lastReleased+1

That sounds more reasonable than replacing the old manifest because it retains the deployment history. Also integrating this with helm or as a helm plugin sounds a little cleaner than my hacky bash script ;-)

Is this also the right place to resolve the similar issue when the upstream chart makes such a change, i.e. this which just bit me on k8s 1.15?

(I bounced here from https://github.com/helm/helm/issues/6646#issuecomment-547650430, which I suspect is incorrect for resources like this: where k8s exposes the same resource via multiple apiGroups, it must be keeping them compatible internally in some useful way.)

I suspect the general case is that Helm's data storage sees one GVK, and k8s responds to that, but it's consuming the same name slot in another GVK. I'm not really familiar with the k8s API, but is there a way to discover that programmatically, and hence have Helm at least be aware that the problem is not "resource exists unexpectedly" but "resource changed GVK", and handle that with a PUT against the new API version using --force, ignoring what Helm thinks the current resource is?

My understanding that PUT for the new GVK would work comes from https://github.com/helm/helm/issues/6583#issuecomment-539102989, which quotes that the APIVersions in such cases are round-trippable, and hence even if the object was originally created with v1beta1, accessing it via v1 should return v1-valid data.

@curtisr7 can you share your go commands? I'm stuck with 10++ deployments that use deprecated kubernetes APIs. So no continuous deployment at the moment :P

Is this also the right place to resolve the similar issue when the upstream chart makes such a change, i.e. this which just bit me on k8s 1.15?

Yes. This bug applies to all apis that were removed from 1.16

@curtisr7 can you share your go commands? I'm stuck with 10++ deployments that use deprecated kubernetes APIs. So no continuous deployment at the moment :P

I'm going to try to find some time to write a plugin that will do this migration.

To follow up on my earlier thought, K8s v1.14 introduced a field storageVersionHash into v1.meta.APIResource which will be identical for different APIResources that are aliases for each other.

It's used by the storage migrator, which is the feature that causes this issue, by doing no-op in-place updates to resources to the appropriate API version. The design doc suggests it was to go GA in 1.16, but it's still marked alpha in the 1.17 API doc.

Sadly, this feature is too new for Helm itself to rely on, but it might be useful for the plugin being developed, although it could just as easily work by fetching all the resources in the helm release and updating the data for those which come back with a different GVK than they were created with.

However, we'd still need to handle when a chart is upgraded and still refers to (some) resources by the old name, or we're back in this position again.

For example, while fixing apps/v1/*, we have to be careful not to break extensions/v1beta1/*, which has an overlapping but different deprecation period.

An alternative to using a plugin to migrate chart metadata is to teach Helm to recognise certain GVKs as equivalent. That information could be extracted from the StorageHashes information, albeit it'd probably have to be done statically, as the online values are only available from 1.14 onwards, and (for example) the apps.v1beta2 APIGroup values were removed (incorrectly?) in 1.16, even though you can re-enable apps/v1beta2 via the API server command line up until 1.18.

It might also help to have Helm record, when creating a resource, whether the resource that was created actually came back with a different GVK (as is the case for Deployments, for example), although that doesn't help if the Helm execution was before the time the new GVK was added, or (as is the case for Ingress) the "canonical" GVK is changed during the time they both exist, after the chart was deployed.

In the end, some static operation is needed as, e.g., in k8s 1.16, there's no way to know from online data that a Helm metadata record for an apps.v1beta2.Deployment resource now magically refers to an apps.v1.Deployment resource in the back-end, as there's no way to query information about an apps.v1beta2.Deployment that was created before k8s 1.8.

Thanks for looking further into this, @TBBle! Are there any concrete solutions that we may be able to work on today, or is this all exploratory/alpha work? Judging by your tone, it sounds like this is all work that may eventually become stable, but there are enough gotchas that might put user's workloads at risk.

If you happen to explore more with these ideas, let us know what you find out.

It's entirely theory at the moment. I don't expect to have much time to try this out in the near future, but it's one of my ongoing pain points so it's remaining in my line-of-sight.

So all hope, and no commitment. ^_^

Is it possible to add the ability to redeploy the same chart but with a new chart version?
For example, helm get values is able to retrieve the values for a specific release, so one could delete/create the same release with the new apiVersion and the new chart version, using the values from the previous release.
Sounds like a plugin for migration.

Can't wait for a fix, so here is a workaround for helm3 that worked for me:

  • get and backup the latest deployed release: kubectl get secret SECRET_NAME -o yaml > release.bak && cp release.bak release
  • decode the release: grep -oP '(?<=release: ).*' release | base64 -d | base64 -d | gzip -d > release.data.decoded
  • change the api versions in all templates.data (they also need to be base64 decoded and re-encoded after the changes) and in all manifests with a text editor
  • save the file
  • encode it back: cat release.data.decoded | gzip | base64 | base64
  • replace the data.release value in your first file release
  • apply the file to the namespace
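The decode/encode layers in these steps can be sanity-checked without a cluster by round-tripping sample data; a minimal sketch (the payload here is a stand-in for Helm 3's gzipped release record, and GNU `base64 -w0` is assumed for unwrapped output):

```shell
# Stand-in for a release payload; the real one is the release record
# Helm 3 stores in the secret's data.release field.
sample='apiVersion: apps/v1beta1
kind: Deployment'

# Encode the way the secret stores it: gzip, then base64 twice.
encoded=$(printf '%s' "$sample" | gzip -c | base64 -w0 | base64 -w0)

# Decode the way the workaround above does: base64 twice, then gunzip.
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -dc)
printf '%s\n' "$decoded"
```

If the round trip doesn't reproduce your input exactly, edit nothing and restore from the backup.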

cc @marckhouzam (I think you mentioned this on the Helm dev call)

#/bin/bash

# This script fixes helmcharts secrets that was deployed with API objects paths, that was deprecated.
# More info here https://github.com/helm/helm/issues/7219
# Use it own risks and only if your know what for is this!
# After exeecuting you need to perform redeploy

NamespaceName=$1
echo ${NamespaceName}

echo "Starting to fix k8s objects $(date -R)"
mkdir -p /tmp/${NamespaceName}
cd /tmp/${NamespaceName}
echo "Getting latest deployed helm release secret"
kubectl config set-context --current --namespace=${NamespaceName}
kubectl get secret -l owner=helm,status=deployed -o yaml > ${NamespaceName}.release.bak
cp ${NamespaceName}.release.bak ${NamespaceName}.release
cat ${NamespaceName}.release | grep -oP '(?<=release: ).*' | base64 -d | base64 -d | gzip -d - > ${NamespaceName}.release.data

# Change that sed replacement to you case
echo "Replacing wrong path in k8s API"
sed -i '' "s#web-deployment.yaml\\\napiVersion: apps/v1beta1#web-deployment.yaml\\\napiVersion: apps/v1#g" ${NamespaceName}.release.data

echo "Encoding release back"
gzip ${NamespaceName}.release.data --to-stdout | base64 | base64 > ${NamespaceName}.release.data.fixed
FIXED_DATA=$(cat ${NamespaceName}.release.data.fixed)

echo "Replacing release in ${NamespaceName}.release yaml file"
sed -i '' "s#release: .*#release: ${FIXED_DATA}#g" ${NamespaceName}.release

echo "Applying fixed release"
kubectl apply -f ${NamespaceName}.release

echo "Helm release fixed. You need to perform redeploy"

Here is a bash script for changing the helm release.

Not to nitpick but next time you might want to run your script through shellcheck https://www.shellcheck.net/

Fixed Version:

#!/bin/bash

# This script fixes helmcharts secrets that was deployed with API objects paths, that was deprecated.
# More info here https://github.com/helm/helm/issues/7219
# Use it own risks and only if your know what for is this!
# After exeecuting you need to perform redeploy

NamespaceName=$1
echo "${NamespaceName}"

echo "Starting to fix k8s objects $(date -R)"
mkdir -p /tmp/"${NamespaceName}"
cd /tmp/"${NamespaceName}" || exit
echo "Getting latest deployed helm release secret"
kubectl config set-context --current --namespace="${NamespaceName}"
kubectl get secret -l owner=helm,status=deployed -o yaml > "${NamespaceName}".release.bak
cp "${NamespaceName}".release.bak "${NamespaceName}".release
grep -oP '(?<=release: ).*' "${NamespaceName}".release | base64 -d | base64 -d | gzip -d - > "${NamespaceName}".release.data

# Change that sed replacement to you case
echo "Replacing wrong path in k8s API"
sed -i '' "s#web-deployment.yaml\\\napiVersion: apps/v1beta1#web-deployment.yaml\\\napiVersion: apps/v1#g" "${NamespaceName}".release.data

echo "Encoding release back"
gzip "${NamespaceName}".release.data --to-stdout | base64 | base64 > "${NamespaceName}".release.data.fixed
FIXED_DATA=$(cat "${NamespaceName}".release.data.fixed)

echo "Replacing release in ${NamespaceName}.release yaml file"
sed -i '' "s#release: .*#release: ${FIXED_DATA}#g" "${NamespaceName}".release

echo "Applying fixed release"
kubectl apply -f "${NamespaceName}".release

echo "Helm release fixed. You need to perform redeploy"

I've seen a lot of talk about this issue when doing an upgrade, but I want to pitch in that this is not only an issue during upgrade: I get this problem when trying to install jfrog/xray.

I agree that this is not a feature, it's a bug, and it would be good to get a fix for the 2.x version as well. We have a lot of problems with production environments for our customers :(
Any news on getting a fix into the next minor releases?

I spent a few days diving into this issue. When Kubernetes removes an API version, the Go libraries can't parse the object; this is true for kubectl as well. To avoid breaking versioning, Kubernetes automatically modified the resources. Because the Release data was not modified to match the new API version, it will not parse. Helm depends on the Kubernetes libraries with the removed versioning for parsing, so it cannot read the last Release when performing an upgrade.

Helm Releases would need to be updated to match what kubernetes did using old libraries.

To fix cleanly you would need to roll back your cluster to a version that supports the API, then update the API versions in the chart to a resource version supported by the Kubernetes version you are upgrading to.

Yuck.

If downgrading and re-upgrading isn't for you, the manifests will need to be manually changed outside of Helm.

I wish there was a better solution but this is the hand we were dealt.

Not to nitpick but next time you might want to run your script through shellcheck https://www.shellcheck.net/


This is for v3, right? Would the same work for v2, but then editing/patching the configmaps in kube-system?

Yes and no - the configmaps in helm v2 are encoded via protobuf in addition to being base64 encoded and gzipped. You need a little tooling to edit them properly.

I have provided a script to do the api migration from k8s 1.15 to 1.16 for helm v2 here: https://gist.github.com/Arabus/40ed5189e81fb10fc8a93f1f568aca65

Feel free to adapt it to your needs

@Vandersteen
@behoof4mind
@Arabus

We had helm 2.9.1 and upgraded our k8s cluster from 1.15.3 to 1.16.8. After that our Helm-managed projects broke (they failed during upgrade). We converted our releases to Helm 3 but still have the same error.

I rewrote some parts of this nice script to fix "extensions/v1beta1 -> apps/v1" in the deployments.

Example of running:

./helm_rewrite_history.sh "mynamespace" "helm-release-name" "deployment.yaml"

(deployment.yaml is the name of the helm template file we need to know)
I also use ggrep (brew install grep) on macOS.

#!/bin/bash

set -e
set -o pipefail

# This script fixes helmcharts secrets that was deployed with API objects paths, that was deprecated.
# More info here https://github.com/helm/helm/issues/7219
# Use it own risks and only if your know what for is this!
# After exeecuting you need to perform redeploy
# This is fixed by Victor Yagofarov version of:
# https://github.com/helm/helm/issues/7219#issuecomment-590122046

NamespaceName=$1
ReleaseName="$2"  
ResourceName="$3" # for example: deployment.yaml

echo "${NamespaceName}"

if [[ -z "${NamespaceName}" ]]; then
  echo wrong input "(namespace)"
  exit 2
fi
if [[ -z "${ReleaseName}" ]]; then
  echo wrong input "(release name)"
  exit 2
fi
if [[ -z "${ResourceName}" ]]; then
  echo wrong input "(resource name to modify)"
  exit 2
fi

echo "Starting to fix k8s objects $(date -R)"
mkdir -p /tmp/"${NamespaceName}"
cd /tmp/"${NamespaceName}" || exit
echo "Getting latest deployed helm release secret"
kubectl config set-context --current --namespace="${NamespaceName}"
kubectl --namespace="${NamespaceName}" get secret -l owner=helm,status=deployed,name="${ReleaseName}" -o yaml > "${NamespaceName}"."${ReleaseName}".release.bak
cp "${NamespaceName}"."${ReleaseName}".release.bak "${NamespaceName}"."${ReleaseName}".release
ggrep -oP '(?<=release: ).*' "${NamespaceName}"."${ReleaseName}".release | base64 -d | base64 -d | gzip -d - > "${NamespaceName}"."${ReleaseName}".release.data

# Change that sed replacement to you case
echo "Replacing wrong path in k8s API"
sed -i '' "s#${ResourceName}\\\napiVersion: extensions/v1beta1#${ResourceName}\\\napiVersion: apps/v1#g" "${NamespaceName}"."${ReleaseName}".release.data

echo "Encoding release back"
gzip "${NamespaceName}"."${ReleaseName}".release.data --to-stdout | base64 | base64 > "${NamespaceName}"."${ReleaseName}".release.data.fixed
FIXED_DATA=$(cat "${NamespaceName}"."${ReleaseName}".release.data.fixed)

echo "Replacing release in ${NamespaceName}.${ReleaseName}.release yaml file"
sed -i '' "s#release: .*#release: ${FIXED_DATA}#g" "${NamespaceName}"."${ReleaseName}".release

echo "Applying fixed release"
kubectl --namespace="${NamespaceName}" apply -f "${NamespaceName}"."${ReleaseName}".release

echo "Helm release fixed. You need to perform redeploy"

@curtisr7 @abrenneke @travisghansen @Arabus @TBBle @matfiz @alexandrsemak @behoof4mind @mr-yaky @Vandersteen @Nastradamus An update on where the issue currently stands.

A number of core maintainers have looked at this problem and have not found any good solution to solve it. This is in line with @adamreese's feedback in https://github.com/helm/helm/issues/7219#issuecomment-594824635. The issue will remain open for now and is open to all in the community to push a PR if you think you have a solution for it.

I pushed a doc PR to cover this topic for now: https://github.com/helm/helm-www/pull/559. Add your feedback to the PR if you want.

A number of tools to workaround the issue have already been floated. Not to overload the space but I have just done a Helm plugin called mapkubeapis which does something similar. It works for Helm 3 at the moment but I hope to extend it to Helm2 soon.

@hickeyma I'm 100% on-board with helm doing nothing here...trying to come up with a solution that covers all scenarios etc seems like it would be very brittle and potentially dangerous.

The only thing I can think of would be to document perhaps a series of 'no really, do what I tell you' commands for when people hit this (and/or provide if necessary some additional flags to helm that facilitate the 'no really, do what I tell you' goal).

Sounds reasonable, as this is a rather hard nut to crack. To alleviate the issue one could add extra warnings when deprecated APIs are used during helm deployments, improve the error messages so it is clear what the actual issue is, and have them link to a documented workaround. In addition, some kind of "helm edit" command that would allow editing an existing manifest without doing the download, decode, edit, encode, upload dance might be useful.

Thanks for all the effort you put in this tool, it still is very useful for my daily business and so far without equal in the k8s universe.

I understand that this is a fairly tricky issue to solve, and that it might be unfeasible for Helm to fix right now. But would it be possible to detect and give a warning?

I updated Nastradamus' script to loop through every namespace and fix the history on all releases. It also allows you to pass in just a single namespace and update all releases in that namespace.

resource="deployment.yaml" was hardcoded for our situation

example ./k8s-api-fix.sh kube-system
^ this will fix only helm deployments in kube-system

Additionally, jq and obviously helm are required

```bash
#!/bin/bash

set -e
set -o pipefail

# This script fixes Helm release secrets that were deployed with deprecated API versions.
# More info here: https://github.com/helm/helm/issues/7219
# Use at your own risk, and only if you know what this is for!
# After executing you need to perform a redeploy.
# This is a fixed version, by Victor Yagofarov, of:
# https://github.com/helm/helm/issues/7219#issuecomment-590122046

NamespaceName=($(kubectl get ns -o json | jq '.items[].metadata.name' | tr -d '"'))
if [ ! -z "$1" ]
then
    unset NamespaceName
    NamespaceName=( $1 )
fi
resource="deployment.yaml"

for namespace in "${NamespaceName[@]}"
do :
    unset ReleaseName
    ReleaseName=($(helm -n $namespace ls -o json | jq '.[].name' | tr -d '"'))
    for release in "${ReleaseName[@]}"
    do :
        echo "Starting to fix k8s objects $(date -R)"
        mkdir -p /tmp/"$namespace"
        cd /tmp/"$namespace" || exit
        echo "Getting latest deployed helm release secret"
        kubectl config set-context --current --namespace="$namespace"

        kubectl --namespace="$namespace" get secret -l owner=helm,status=deployed,name="$release" -o yaml > "$namespace"."$release".release.bak
        cp "$namespace"."$release".release.bak "$namespace"."$release".release
        ggrep -oP '(?<=release: ).*' "$namespace"."$release".release | base64 -d | base64 -d | gzip -d - > "$namespace"."$release".release.data

        # Adapt this sed replacement to your case
        echo "Replacing wrong API path"
        sed -i '' "s#$resource\\\napiVersion: extensions/v1beta1#$resource\\\napiVersion: apps/v1#g" "$namespace"."$release".release.data

        echo "Encoding release back"
        gzip "$namespace"."$release".release.data --to-stdout | base64 | base64 > "$namespace"."$release".release.data.fixed
        FIXED_DATA=$(cat "$namespace"."$release".release.data.fixed)

        echo "Replacing release in $namespace.$release.release yaml file"
        sed -i '' "s#release: .*#release: ${FIXED_DATA}#g" "$namespace"."$release".release

        echo "Applying fixed release"
        kubectl --namespace="$namespace" apply -f "$namespace"."$release".release

        echo "Helm release fixed. You need to perform a redeploy"
    done
done
```
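For context, the encoding this script decodes and re-encodes is Helm v2's release storage format: the release data is gzip-compressed and then base64-encoded twice (Helm's own encoding plus the Secret's base64 layer). A minimal round-trip sketch of that encoding, assuming GNU userland (`base64 -w0`):

```shell
# Round-trip of the release encoding the script above manipulates:
# gzip, then base64 twice; decoding reverses the steps.
manifest='apiVersion: extensions/v1beta1'
encoded=$(printf '%s' "$manifest" | gzip -c | base64 -w0 | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d)
echo "$decoded"
# prints: apiVersion: extensions/v1beta1
```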

@travisghansen @Arabus @lindhe Pushed PR #7925 to add an improved error message for the user.

The scripts given above didn't work for me (or I didn't have the proper tooling), so here is another version of the same.. ;) (used in Ubuntu under Windows Terminal)

```bash
#!/bin/bash

# This script fixes Helm release secrets that were deployed with deprecated API versions.
# More info here: https://github.com/helm/helm/issues/7219
# Use at your own risk, and only if you know what this is for!
# After executing you need to perform a redeploy.

NamespaceName=$1
echo "${NamespaceName}"

echo "Starting to fix k8s objects $(date -R)"
mkdir -p ./"${NamespaceName}"
cd ./"${NamespaceName}" || exit
echo "Getting latest deployed helm release secret"
kubectl config set-context --current --namespace="${NamespaceName}"

for SecretName in $(kubectl get secret -l owner=helm,status=deployed | grep / | awk '{ print $1 }') ; do

    kubectl get secret "${SecretName}" -o yaml > "${SecretName}".release.bak

    cp "${SecretName}".release.bak "${SecretName}".release
    grep -oP '(?<=release: ).*' "${SecretName}".release | base64 -d | base64 -d | gzip -d - > "${SecretName}".release.data

    # Adapt this sed replacement to your case
    echo "Replacing wrong API path"
    sed -i -e "s#deployment.yaml\\\napiVersion: extensions/v1beta1#deployment.yaml\\\napiVersion: apps/v1#g" "${SecretName}".release.data

    echo "Encoding release back"
    gzip "${SecretName}".release.data --to-stdout | base64 | base64 | tr -d "\n\r" > "${SecretName}".release.data.fixed
    FIXED_DATA=$(cat "${SecretName}".release.data.fixed)

    echo "Replacing release in ${SecretName}.release yaml file"
    sed -i -e "s#release: .*#release: ${FIXED_DATA}#g" "${SecretName}".release
    sed -i -e "s#release\":\"[^\"]*#release\":\"${FIXED_DATA}#g" "${SecretName}".release

    echo "Applying fixed release"
    kubectl apply -f "${SecretName}".release

    echo "Helm release fixed. You need to perform a redeploy"

done
```

To help people with this issue around Helm and removed Kubernetes APIs:

  • Doc PR https://github.com/helm/helm-www/pull/559 has been updated and is in review again. Please take a look; your feedback is welcome. This will hopefully provide context around Helm and the Kubernetes APIs.
  • PR #7925 was merged with an improved error message for the user (going to be in the Helm 3.2 release)
  • Helm mapkubeapis plugin updated to support Helm v2 releases and improved documentation

Hope this helps.

@razorsk8jz @RomanDvorsky The Helm mapkubeapis plugin is available as an alternative to scripting it.

@hickeyma
Thanks a lot for providing that documentation; it helped a lot in fixing the issue. We're running Helm 3.1.2 and recently upgraded the cluster to Kubernetes 1.16.

As I'm working on a Mac, I experienced two minor issues when applying the fix you described:

  1. The plugin didn't seem to find deprecated APIs even though they were visible when decoding the release manually. Not sure if this actually is the root cause, but at least within my Helm charts a \r was included in the apiVersion strings, e.g. "apiVersion: apps/v1beta1\r\nkind: Deployment".
  1. The manual steps worked perfectly, but I needed to adapt the decoding command a bit. I changed it to this (and used release.json instead of release.data.decoded in the subsequent steps):
    kubectl get secrets <release_version_secret_name> -n <release_version_namespace> -o jsonpath='{.data.release}' | base64 -D | base64 -D | gzip -d > release.json
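Regarding the first point above: if the decoded release really does contain stray carriage returns, one possible workaround (an assumption, not a confirmed fix for the plugin; verify against your own data) is to strip them before re-encoding, so strings like `apiVersion: apps/v1beta1\r` become matchable again:

```shell
# Assumed workaround: strip carriage returns from the decoded release data
# so apiVersion strings with a trailing \r can be matched and rewritten.
printf 'apiVersion: apps/v1beta1\r\nkind: Deployment\n' | tr -d '\r'
# prints:
# apiVersion: apps/v1beta1
# kind: Deployment
```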

Thanks for feedback @BenteleFlorian. I will take a look at the plugin and the issue with Mac when I get a chance.

I'm running into this issue on a clean kind cluster running v1.17 while installing a Helm chart that, as far as I can see, doesn't have any v1beta Deployments. It just crashes with this message.

Is there any way to debug this further? This is a completely clean-slate cluster created from scratch with kind, and it happens on the first Helm chart I install. It would be useful if Helm told me where the error message originates and which API object is causing it.

Edit:
After manually extracting the release I found out that it's actually a _dependency_ of the chart that is failing. It would be extremely useful if Helm told me _which_ manifest it failed to apply.
https://github.com/helm/helm/pull/7925 doesn't seem to address that at first glance.

This was really obscure to track down for the common use case of "oh, a dependency doesn't support the apiVersions that your cluster supports".

```
$ ag v1beta1

charts/redis-ephemeral/charts/redis/templates/deployment.yaml
1:apiVersion: extensions/v1beta1

charts/redis-ephemeral/charts/redis/templates/_helpers.tpl
23:{{- print "extensions/v1beta1" -}}
```

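One way to catch this before upgrading is to grep the *rendered* manifests rather than the chart sources, since `helm template` renders subcharts too. A sketch of the grep pattern, here fed from a heredoc standing in for real `helm template ./chart` output:

```shell
# Grep rendered manifests for deprecated v1beta apiVersions; the heredoc
# stands in for piped `helm template` output, which includes dependencies.
grep -n 'apiVersion: .*v1beta' <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
EOF
# prints: 1:apiVersion: extensions/v1beta1
```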
@arianvp Do you mind creating a new issue for https://github.com/helm/helm/issues/7219#issuecomment-620066026? It doesn't seem to be related to this issue.

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

Given the end result of the last few months of research into this topic, I'm closing this as something we cannot reasonably solve with the tools we have today. We've looked into this from multiple angles, and there does not appear to be a way where we can cleanly migrate resources from one API schema to the next without some form of manual intervention from the user.

The scripts and plugins shared above can hopefully alleviate some of these pain points, and I'd encourage others to see if there's something that can be done from the Kubernetes side of things.

Thank you all for your feedback and help getting to the bottom of this. If new information comes up, please feel free to share here and we can reconsider re-opening this topic for discussion.
