Cert-manager: ClusterIssuer not found

Created on 27 Dec 2019 · 18 comments · Source: jetstack/cert-manager

Describe the bug:
When attempting to request a certificate the operation fails.
kubectl describe certificaterequest xx

Name:         xx-staging-3078285176
Namespace:    default
Labels:       cattle.io/creator=norman
Annotations:  cert-manager.io/certificate-name:xx-staging
              cert-manager.io/private-key-secret-name: xx-staging
API Version:  cert-manager.io/v1alpha2
Kind:         CertificateRequest
Metadata:
  Creation Timestamp:  2019-12-26T23:40:35Z
  Generation:          1
  Owner References:
    API Version:           cert-manager.io/v1alpha2
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  xx-staging
    UID:                   1719a084-ad5d-4a8c-a89e-0e66906103fc
  Resource Version:        663190
  Self Link:               /apis/cert-manager.io/v1alpha2/namespaces/default/certificaterequests/xx-staging-3078285176
  UID:                     b00bfa66-592c-4255-b7e2-e182af064449
Spec:
  Csr:  --
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-staging
Status:
  Conditions:
    Last Transition Time:  2019-12-26T23:40:35Z
    Message:               Referenced issuer does not have a Ready status condition
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason          Age                From          Message
  ----    ------          ----               ----          -------
  Normal  IssuerNotFound  19m (x5 over 19m)  cert-manager  Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-staging" not found

kubectl describe clusterissuers letsencrypt-

Name:         letsencrypt-staging
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1alpha2
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-12-26T23:43:38Z
  Generation:          1
  Resource Version:    663214
  Self Link:           /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-staging
  UID:                 ad0fb84d-cf45-4b47-87ba-c44539d54acb
Spec:
  Acme:
    Email: xx
    Private Key Secret Ref:
      Name:  letsencrypt-staging-account-key
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
    Solvers:
      Http 01:
        Ingress:
          Class:  nginx
Status:
  Acme:
  Conditions:
    Last Transition Time:  2019-12-26T23:45:47Z
    Message:               Failed to verify ACME account: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout
    Reason:                ErrRegisterACMEAccount
    Status:                False
    Type:                  Ready
Events:
  Type     Reason                Age                  From          Message
  ----     ------                ----                 ----          -------
  Warning  ErrVerifyACMEAccount  4m28s (x9 over 23m)  cert-manager  Failed to verify ACME account: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout
  Warning  ErrInitIssuer         4m28s (x9 over 23m)  cert-manager  Error initializing issuer: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout

Anything else we need to know?:
When checking the logs for the cert-manager pod I get the following:
ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration

Environment details:

  • Kubernetes version (e.g. v1.10.2): 1.16.3
  • cert-manager version (e.g. v0.4.0): v0.13.0-alpha.0
  • Install method (e.g. helm or static manifests): helm

/kind bug

/triage support

All 18 comments

I have the same problem.
Kubernetes version: v1.17.0 (same with v1.16.3)
Cert-manager version: v0.12.0
Install method: static manifests, then Helm after uninstalling the static install

It works with a selfSigned issuer but fails with ACME Let's Encrypt staging & production.

I have the same problem.
Kubernetes version: v1.17.0 (same with v1.16.3)
Cert-manager version: v0.12.0

Same problem with 1.17, installed with Helm.

Solved!
It was a DNS problem. Change the cert-manager install settings and add a pod DNS option of ndots: "2".
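A sketch of what that could look like as Helm values for the cert-manager chart. The podDnsPolicy/podDnsConfig values are an assumption here, so check your chart version's values.yaml; the same options can also be set directly on the Deployment's pod spec as dnsConfig.

# values.yaml (sketch -- podDnsPolicy/podDnsConfig are assumed to exist in your chart version)
podDnsPolicy: "ClusterFirst"
podDnsConfig:
  options:
    - name: ndots
      value: "2"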

"ClusterIssuer not found" can also occur from a problem in a subtle upgrade detail where Helm charts are involved. When the annotations on the Helm chart (or certificate indication of issuer) don't match with the latest versions of cert-manager.

https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/

In particular, the ingress annotations in your Helm chart's ingress definitions need to change from the "certmanager.k8s.io" group to "cert-manager.io".

e.g.
certmanager.k8s.io/cluster-issuer:
to
cert-manager.io/cluster-issuer:
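For example, an Ingress carrying the updated annotation might look like this (a sketch with placeholder names; the nginx class, host, and backend service are assumptions):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging   # was certmanager.k8s.io/cluster-issuer
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: example-service
              servicePort: 80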

I'm adding this because the issue title is confusing and the problem can crop up for multiple reasons. The OP's case looks like a timeout reaching the host.

My Kubernetes and I were similarly confused by CRD remains from an earlier cert-manager install.
Deleting these old CRDs fixed my problems.
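A way to spot and remove the leftovers from the old API group (a sketch; note that deleting a CRD also deletes all of its resources, so back up any certificates or issuers you still need first):

$ kubectl get crd | grep certmanager.k8s.io
$ kubectl delete crd certificates.certmanager.k8s.io clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io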

Notes from a serial upgrader

So here's a bizarre twist.

$ kubectl get clusterissuers
No resources found in default namespace.

BUT

I can list my certificates and they all come back as ready. This is after an upgrade from v0.9 to v0.12 directly.

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:52:32Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.7-gke.23", GitCommit:"06e05fd0390a51ea009245a90363f9161b6f2389", GitTreeState:"clean", BuildDate:"2020-01-17T23:10:45Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

In general it seems that the cert-manager pod has to come online before kubectl will find your new/old resource version.

You'll see:

$ kubectl get clusterissuers
Error from server (NotFound): Unable to list "certmanager.k8s.io/v1alpha1, Resource=clusterissuers": the server could not find the requested resource (get clusterissuers.certmanager.k8s.io)

Until the upgraded version is running.


Generally it seems you need to restart the cert-manager pod in order for this to work. Not sure why that is. Not sure how many times is necessary.
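One way to do that restart (a sketch; the deployment name and namespace assume a default Helm install, and kubectl rollout restart needs kubectl v1.15+):

$ kubectl rollout restart deployment cert-manager -n cert-manager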


Another fun thing: kubectl caches the CRDs I think? So client-go looking for specific resources and versions works great, but kubectl gets:

$ kubectl get certificates --all-namespaces
Error from server (NotFound): Unable to list "certmanager.k8s.io/v1alpha1, Resource=certificates": the server could not find the requested resource (get certificates.certmanager.k8s.io)

Running kubectl api-resources will get it to resync the resources.

Having the same issue, with any version of cert-manager, using RKE v1.1.0 and Kubernetes v1.17.4 on Ubuntu 18.04, installed with Helm 3.

Same on Kubernetes v1.18.1.

Having the same issue, with any version of cert-manager, using RKE v1.1.0 and Kubernetes v1.17.4 on Ubuntu 18.04, installed with Helm 3.

In my case it was a network issue in the Kubernetes cluster.

Same issue in k8s v1.17.3.
So, is it related to a network issue?

Actually there was a problem in my cluster that prevented services on different nodes from communicating over the internal 10.0.0.0 network. It was caused by a bug in Canal when a network interface has two IP addresses.

Note: if your nodes reach the outside world through an HTTP proxy, do not forget to specify it in the Helm chart options.
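For example, as Helm values (a sketch; the http_proxy/https_proxy/no_proxy values and the proxy URL are assumptions -- check your chart version's values.yaml):

http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
no_proxy: "127.0.0.1,localhost,.svc,.cluster.local,10.0.0.0/8"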

The issue here can be seen in the original post:

    Message:               Failed to verify ACME account: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout

It seems like there's a networking issue preventing your cert-manager instance from connecting to the Let's Encrypt staging environment. You should double-check that other applications in your cluster are also working okay, and also run the Sonobuoy conformance test suite to ensure your cluster is configured correctly!
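One quick way to test that from inside the cluster (a sketch; assumes the curlimages/curl image can be pulled):

$ kubectl run acme-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sv https://acme-staging-v02.api.letsencrypt.org/directory

If this also times out, the problem is cluster networking rather than cert-manager.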

I'm going to close this as there's not really anything actionable on our part.

I am receiving this error: 1 controller.go:158] cert-manager/controller/orders "msg"="re-queuing item due to error processing" "error"="error reading (cluster)issuer "letsencrypt": issuer.cert-manager.io "letsencrypt" not found" "key"="

Can anyone help?

@munnerz I'm still running into this same failure.

  Normal  IssuerNotFound  114s (x5 over 114s)  cert-manager  Referenced "Issuer" not found: issuer.cert-manager.io "letsencrypt-staging" not found

Me as well. I feel like I have to install every combination of cert-manager and nginx-ingress until one works.

Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt" not found

As for me, I checked all of the certificate-related resources and reconfigured the issuerRef correctly.
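A quick way to check that the issuerRef in your Certificate (or ingress annotation) points at something that actually exists (commands are a sketch):

$ kubectl get clusterissuers
$ kubectl get issuers --all-namespaces

Then make sure spec.issuerRef.name and spec.issuerRef.kind on the Certificate exactly match one of the resources listed.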

I've noticed that following the cert-manager documentation for your exact version is critical, as different versions require you either to install the CRDs manually or not. It's super confusing. I have gone through numerous tutorials and crafted this version (Link 1) of a basic ingress + cert-manager deployment, which you can sort of find on Azure (where they demo two endpoints and two different apps). I've hooked this up to a Flask app serving "hello world" on port 80, though with Services you can have the app serve on any port and then connect to the Service's port 80. This demo (Link 2) basically works as well and is one of my main sources. The main difference between the two is that in the first I create a static IP and connect the ingress to it; you need to supply it to the ingress via IP address. The latter (second link) hooks the ingress up to the IP provided by the AKS cluster. These are different things, so pay attention to the little details. Hope this helps a little.

Just came across this issue, and the problem was specifying the right kind of issuer in your Certificate:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: <name>
spec:
  secretName: <secret>
  issuerRef:
    name: <my-ref>
    kind: <ClusterIssuer | Issuer>
  dnsNames:
  ...