Bugs should be filed for issues encountered whilst operating cert-manager.
You should first attempt to resolve your issues through the community support
channels, e.g. Slack, in order to rule out individual configuration errors.
Please provide as much detail as possible.
Hi all, first off just wanted to say thanks for your hard work on cert-manager. It's a really nice tool! I would like to report on my rocky experience upgrading to v0.11, as I thought you and others may find it helpful.
Describe the bug:
I followed the upgrade instructions and had no issues; everything on cert-manager's side updated seamlessly. I even went through the installation validation and was able to create a self-signed certificate without issue. However, when I went to upgrade the Helm chart for my app (let's call it my-app) with the new cert-manager API versions in its cert-manager k8s resources, the upgrade failed with the following error:
Error: UPGRADE FAILED: failed decoding reader into objects: [unable to recognize "": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1", unable to recognize "": no matches for kind "ClusterIssuer" in version "certmanager.k8s.io/v1alpha1"]
I was able to get around this error by temporarily installing the old CRDs with the certmanager.k8s.io API, upgrading my-app's Helm release, then deleting the old CRDs afterwards. Subsequent upgrades to the Helm release succeeded without error.
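For anyone else hitting this, here's a rough sketch of that workaround as shell commands; the manifest URL and CRD names are examples based on upgrading from v0.10, so check them against the release you're actually coming from.

```sh
# Sketch of the workaround above (the URL and CRD names are examples, not exact instructions).

# 1. Temporarily re-create the legacy certmanager.k8s.io CRDs from the old release.
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.10/deploy/manifests/00-crds.yaml

# 2. Upgrade the app's Helm release so Helm can decode the previously deployed objects.
helm upgrade my-app ./charts/my-app

# 3. Once all charts reference cert-manager.io/v1alpha2, delete the legacy CRDs again.
kubectl get crd | grep certmanager.k8s.io
kubectl delete crd \
  certificates.certmanager.k8s.io clusterissuers.certmanager.k8s.io \
  issuers.certmanager.k8s.io orders.certmanager.k8s.io challenges.certmanager.k8s.io
```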
Expected behavior:
Upgrade cert-manager to v0.11, run helm upgrade on my-app's chart with the updated cert-manager API versions defined in my-app's k8s resources, and see my-app deploy successfully.
Steps to reproduce the bug:
Anything else we need to know?:
Not entirely sure if this behavior is by design or if it's a bug, but it wasn't immediately clear to me that the old CRD/APIs still had to be installed in the cluster in order to update my-app's Helm chart. As such I spent a lot more time than I would've liked on the cert-manager upgrade.
Environment details:
/kind bug
Hi @trenslow, sorry this caused you trouble.
It looks like the resources you were creating with your app are using the old API version and group from pre-v0.11, certmanager.k8s.io/v1alpha1. These have now moved to cert-manager.io/v1alpha2. This can be quite painful, mainly because of the group change.
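To illustrate, each cert-manager resource in your chart templates needs its apiVersion bumped to the new group, roughly like the snippet below (the resource names and fields are placeholders rather than anything from your chart):

```yaml
# Before (cert-manager < v0.11)
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: my-app-tls               # placeholder name
spec:
  secretName: my-app-tls
  dnsNames:
    - my-app.example.com         # placeholder domain
  issuerRef:
    name: my-cluster-issuer      # placeholder ClusterIssuer
    kind: ClusterIssuer
---
# After (cert-manager v0.11+): for a simple resource like this,
# only the apiVersion group/version changes.
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-app-tls
spec:
  secretName: my-app-tls
  dnsNames:
    - my-app.example.com
  issuerRef:
    name: my-cluster-issuer
    kind: ClusterIssuer
```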
Going forward we will be implementing a conversion webhook on the new group so that the upgrade process will be much more seamless.
There is some explanation of why this change was needed here, and you can see the current progress on implementing the conversion webhook for future API versions here.
Hi @JoshVanL,
Thanks for the prompt response.
It's clear that the problem was due to the old API version and group, and I understand the reasoning for changing them. After all, the error occurred while I was simply changing the API version in the templates of the ClusterIssuer and Certificate resources that are part of my app's Helm chart.
I don't totally understand the inner workings of Helm, but it seemed that Helm needed to contact the old API first before it could update the Issuer and Certificate resources that belong to my app. This is a bit of a pain, seeing as the instructions for upgrading cert-manager require removing the old CRDs, which in turn removes the old API that Helm needs to contact in order to update a chart's templates to the new API version and group.
It would've been nice to have a little heads-up in the documentation, at least until the conversion webhook is up and running.
Thanks for the feedback. This wasn't something we were aware of at the time of writing, so a note in our upgrade guide telling Helm users to keep (or temporarily re-create) the old CRDs until they've finished upgrading any other charts that define cert-manager resources would be good.
Out of curiosity, will the conversion webhooks require a minimum cert-manager version e.g. if we upgrade from 0.9 to 0.12?
I just ran into this issue and easily fixed it by making sure I didn't delete the v0.9.0 CRDs until after I was done with the full upgrade, before restoring my backups.
@migueloller Can you please explain a bit how you achieved that?
What we did was change our manifests to v1alpha2, but helm says v1alpha1 is not available. Thanks very much
Update
I think I figured it out: just re-apply the old CRDs, upgrade all your Helm charts, then remove the old CRDs. Done.
@munnerz and @JoshVanL
I think this is a potential issue for a lot of people. Maybe it would be possible to add this as a gotcha in the upgrade documentation for version 0.10.0 to 0.11.0, around point 3:
Ensure the old cert-manager CRD resources have also been deleted: kubectl get crd | grep certmanager.k8s.io
https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/
@yra-wtag, your update has it right. When doing the first upgrade, I didn't remove the CRDs. I removed them after as you pointed out. It might be a good idea to document this gotcha.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale