Cert-manager: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists

Created on 6 Sep 2018  Β·  38 comments  Β·  Source: jetstack/cert-manager

Bugs should be filed for issues encountered whilst operating cert-manager.
You should first attempt to resolve your issues through the community support
channels, e.g. Slack, in order to rule out individual configuration errors.
Please provide as much detail as possible.

Describe the bug:
I've installed cert-manager twice, into two different custom namespaces (stage, demo).
The first installation, into stage, works flawlessly:

helm upgrade --namespace stage --install --wait --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=Issuer --set rbac.create=false --set serviceAccount.create=false stage-cert-manager stable/cert-manager

The second installation, into demo, fails:

helm upgrade --namespace demo --install --wait --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=Issuer --set rbac.create=false --set serviceAccount.create=false demo-cert-manager stable/cert-manager
> Error: release demo-cert-manager failed: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists

These are the existing CustomResourceDefinitions in my cluster:

kubectl get customresourcedefinitions --all-namespaces=true
NAME                                AGE
apprepositories.kubeapps.com        2d
certificates.certmanager.k8s.io     19m
clusterissuers.certmanager.k8s.io   19m
issuers.certmanager.k8s.io          19m

And this is the definition of certificates.certmanager.k8s.io:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: 2018-09-06T09:22:39Z
  generation: 1
  labels:
    app: cert-manager
    chart: cert-manager-v0.4.1
    heritage: Tiller
    release: demo-cert-manager
  name: certificates.certmanager.k8s.io
  resourceVersion: "7379719"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/certificates.certmanager.k8s.io
  uid: 668f36bf-b1b6-11e8-a174-ee96761aa8f6
spec:
  additionalPrinterColumns:
  - JSONPath: .metadata.creationTimestamp
    description: |-
      CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.

      Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    name: Age
    type: date
  group: certmanager.k8s.io
  names:
    kind: Certificate
    listKind: CertificateList
    plural: certificates
    shortNames:
    - cert
    - certs
    singular: certificate
  scope: Namespaced
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true
status:
  acceptedNames:
    kind: Certificate
    listKind: CertificateList
    plural: certificates
    shortNames:
    - cert
    - certs
    singular: certificate
  conditions:
  - lastTransitionTime: 2018-09-06T09:22:39Z
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha1

I guess the CRD is now owned by the demo release, right? (release: demo-cert-manager)

Expected behaviour:
cert-manager should be installable in two namespaces without issues.

Steps to reproduce the bug:
see above.

Anything else we need to know?:

Environment details::

  • Kubernetes version (e.g. v1.10.2): 1.11.2
  • Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): azure
  • cert-manager version (e.g. v0.4.0): 0.4.1
  • Install method (e.g. helm or static manifests): helm

/kind bug


Most helpful comment

I worked around this with:

kubectl get customresourcedefinition
kubectl delete customresourcedefinition xxxxxxxx

But that may well do horrible things if used in production, I don't know.

All 38 comments

I could disable creation of the custom resources with "createCustomResource=false", but I can't find any documentation on what the impact of that would be. Could you help me out?

The issue with this is that the Issuer type is then not recognized:
error: unable to recognize "/home/vsts/work/r1/a/_HelmConfigs/shared/issuer.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1" /opt/hostedtoolcache/kubectl/1.11.2/x64/kubectl failed with return code: 1
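When CRD creation is disabled, it can help to verify that the certmanager.k8s.io API group is actually served before applying any Issuer manifests. A minimal sketch (the function name issuer_kind_available is my own, not part of any tool):

```shell
#!/usr/bin/env sh
# Check whether the certmanager.k8s.io/v1alpha1 API group/version is served;
# it will be missing when createCustomResource=false and nothing else has
# installed the CRDs. (Function name is hypothetical.)
issuer_kind_available() {
  kubectl api-versions 2>/dev/null | grep -qx "certmanager.k8s.io/v1alpha1"
}

# Example gate before applying manifests:
# issuer_kind_available && kubectl apply -f issuer.yaml
```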

I found the reason why it fails with helm but it needs some adjustments on cert-manager side:

https://github.com/helm/helm/blob/master/docs/charts_hooks.md#automatically-delete-hook-from-previous-release

Automatically delete hook from previous release

When a Helm release is updated, it is possible that the hook resource already exists in the cluster. By default, Helm will try to create the resource and fail with an "... already exists" error.

One might choose "helm.sh/hook-delete-policy": "before-hook-creation" over "helm.sh/hook-delete-policy": "hook-succeeded,hook-failed" because:

  • It is convenient to keep failed hook job resource in kubernetes for example for manual debug.
  • It may be necessary to keep succeeded hook resource in kubernetes for some reason.
  • At the same time it is not desirable to do manual resource deletion before helm release upgrade.

"helm.sh/hook-delete-policy": "before-hook-creation" annotation on hook causes tiller to remove the hook from previous release if there is one before the new hook is launched and can be used with another policy.
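For what it's worth, one can check whether the leftover CRD actually carries Helm hook annotations like the one described above. A small sketch (the function name is mine; the CRD name is taken from the listing earlier in this issue):

```shell
#!/usr/bin/env sh
# Print the annotations on the orphaned CRD; if helm.sh/hook annotations are
# present, the hook-delete-policy discussion above applies to it.
show_crd_annotations() {
  kubectl get crd certificates.certmanager.k8s.io \
    -o jsonpath='{.metadata.annotations}'
}
```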

@SeriousM for what it's worth, I had this issue when running cert-manager chart version 0.4.1. Once I updated my helm repo and deployed 0.5.0, the issue disappeared.

I am getting this issue even on 0.5.0

Same issue.

Same issue on v0.5.0, is there a workaround?

Same here on v0.5.0. This is creating some headaches for us when re-deploying and upgrading our umbrella helm chart which contains cert-manager.

Same issue on v0.5.0

After upgrading 0.4.1 to 0.5.0, all CRDs were gone from our setup, with all certificates included. Only helm delete --purge and a reinstall helped.

I worked around this with:

kubectl get customresourcedefinition
kubectl delete customresourcedefinition xxxxxxxx

But that may well do horrible things if used in production, I don't know.

> I worked around this with:
>
> kubectl get customresourcedefinition
> kubectl delete customresourcedefinition xxxxxxxx

This works. Thanks for sharing!

Is there any way this can be tackled by the cert-manager team? In practice it's really painful to have to delete things by hand.

There seems to be a beginning of solution in this comment: https://github.com/jetstack/cert-manager/issues/870#issuecomment-419092047

@munnerz ? :)

There's an issue tracking this upstream: https://github.com/helm/helm/issues/4259

The only way to mitigate the issue currently is to delete the CRDs, which will delete all of their data.

The 'proper' workaround for the time being is something like:

$ kubectl get issuer,clusterissuer,certificate -o yaml --all-namespaces > cert-manager-resources-backup.yaml
$ kubectl delete crd certificates.certmanager.k8s.io issuers.certmanager.k8s.io clusterissuers.certmanager.k8s.io
$ {run cert-manager install via Helm}
$ kubectl create -f cert-manager-resources-backup.yaml
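The four steps above could be collected into one function, roughly like this. The release name "cert-manager" and chart "stable/cert-manager" are assumptions; adjust for your setup. Note that deleting the CRDs deletes every Certificate/Issuer object with them, which is why the backup comes first:

```shell
#!/usr/bin/env sh
# Sketch of the backup -> delete CRDs -> reinstall -> restore cycle.
# Release name "cert-manager" and chart "stable/cert-manager" are assumptions.
migrate_cert_manager_crds() {
  # 1. Back up all cert-manager resources before touching the CRDs.
  kubectl get issuer,clusterissuer,certificate --all-namespaces -o yaml \
    > cert-manager-resources-backup.yaml &&
  # 2. Deleting the CRDs also deletes the objects backed up above!
  kubectl delete crd certificates.certmanager.k8s.io \
    issuers.certmanager.k8s.io clusterissuers.certmanager.k8s.io &&
  # 3. Reinstall via Helm so the chart owns the CRDs again.
  helm upgrade --install cert-manager stable/cert-manager &&
  # 4. Restore the backed-up resources.
  kubectl create -f cert-manager-resources-backup.yaml
}
```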

Workaround.

I have added a simple check before making the Helm release for our deployments.

set +e
kubectl api-resources -o name | grep -q certificates.certmanager.k8s.io
create_custom_resource=$( if [ $? -eq 0 ]; then echo false; else echo true; fi )
set -e
helm upgrade --install \
...
 --set cert-manager.createCustomResource=$create_custom_resource
...

I had this problem because I ran the initial command and forgot to set rbac=false - helm managed to install these custom resources, and purge did not clean up after it. πŸ€”

I tried with --set cert-manager.createCustomResource=false but got the same error πŸ€”

So I followed the tip kubectl get issuer,clusterissuer,certificate -o yaml --all-namespaces > cert-manager-resources-backup.yaml and the YAML was empty, so nothing important there πŸ˜„

I just deleted the custom resources and ran the helm package again with rbac=false, and it installed properly.

Who is responsible for cleaning up custom resources after delete / purge?

We've changed the way we install the CRDs in the helm chart for the next release due to issues with Helm. You can see a bit more info here: https://github.com/jetstack/cert-manager/pull/1138


Ran into these symptoms deploying on a new/clean AKS cluster today, and the above workarounds didn't work. I was able to work around it by first doing a helm install with --set createCustomResource=false, then following that up with a helm upgrade without setting that variable.

I had the same problem with a fresh install on an AWS cluster. Like @michaelsteven, I solved it with:
helm install --name cert-manager --namespace yournamespace stable/cert-manager --set createCustomResource=false
helm upgrade --install --namespace yournamespace cert-manager stable/cert-manager --set createCustomResource=true

Same issue with a fresh cluster in GKE, this works perfectly!

Yep, works for me as well. But what about setting ingressShim.defaultIssuerName and ingressShim.defaultIssuerKind?

This is a bug introduced in Helm v2.12.0 and corrected in v2.12.1. Just upgrade Helm and it will work.

Is this still required after the 2.12.1 revert, or should #1138 be removed? https://github.com/helm/charts/pull/10255 open to remove the existing kubectl apply requirement

Also having this issue. I'm on Windows, and it doesn't look like 2.12.1 is out in the Windows binary release yet - it states it's there, but when you download, unzip and upgrade, it's still 2.12.0.

2.12.1 still has this issue.

I used it and it worked flawlessly here... on 2.12.0 I had the issue.

2.12.1 worked for me as well.

Upgrading to 2.12.1 (both Helm and Tiller) didn't work for me.

Did you already have cert-manager installed? If so, try to remove it completely with the --purge option and then install again.


Same issue with fresh cluster & Helm v2.12.1 :(

Same issue here with an up-to-date Helm and a cluster that already contains cert-manager. I don't really want to have to remove cert-manager every time I add a new Certificate to my cluster. πŸ˜•

My shortcut to purge:

kubectl delete customresourcedefinitions clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io certificates.certmanager.k8s.io
helm delete --purge cert-manager
kubectl delete namespaces cert-manager

OK, I found how to fix my problem.

I'm deploying cert-manager via Helm in my "infrastructure as code" repo's deployment process (which creates the k8s instances, etc. via Terraform).

I deploy each app with its own Helm chart.
In these app charts, I used to declare cert-manager as a dependency.
If I remove this dependency, the deployment of my apps works.

helm version: v2.12.1

clean setup:

Update helm REPO

    helm repo update

Create nginx ingress

    helm install \
    --name nginx-ingress stable/nginx-ingress  \
    --version=1.0.2 \
    --set rbac.create=true \
    --namespace nginx-ingress

Install cert-manager

    helm install \
    --name cert-manager stable/cert-manager \
    --version 0.5.2 \
    --set ingressShim.defaultIssuerName=letsencrypt-prod \
    --set ingressShim.defaultIssuerKind=ClusterIssuer \
    --set createCustomResource=true \
    --namespace kube-system

Create Cluster Issuer

    kubectl apply -f cert-manager/cert-issuer.yaml 

Any ideas when this can be fixed?

We've extended our upgrading documentation and installation documentation to include troubleshooting tips.

Please carefully read these, including the troubleshooting page, if you are running into issues πŸ˜„

The #cert-manager slack channel on slack.k8s.io is also a great place to get support if you're still having trouble πŸ˜„

> My shortcut to purge:
>
> kubectl delete customresourcedefinitions clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io certificates.certmanager.k8s.io
> helm delete --purge cert-manager
> kubectl delete namespaces cert-manager

A minute after I delete those CustomResourceDefinitions (CRDs), something re-creates all of them. Every time. Any idea what it is?

I'm trying to install cert-manager release-0.11, which uses the *.cert-manager.io CRDs, but when I then create an apiVersion: cert-manager.io/v1alpha2, kind: ClusterIssuer, the instance isn't created because of the old clusterissuers.certmanager.k8s.io CRD.

How can I stop the *.certmanager.k8s.io CRDs from being re-created?
Thanks
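Not an authoritative answer, but whatever re-creates the legacy CRDs is usually an old cert-manager deployment or a Tiller-managed release that is still running. A sketch for hunting down the leftover (the function name is mine):

```shell
#!/usr/bin/env sh
# Look for leftovers that might be re-applying the old *.certmanager.k8s.io
# CRDs: old Helm/Tiller releases and old cert-manager workloads.
find_crd_recreator() {
  helm ls --all 2>/dev/null | grep -i cert-manager
  kubectl get deploy,statefulset --all-namespaces 2>/dev/null | grep -i cert-manager
}
```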
