Describe the bug:
I've created the Let's Encrypt staging and production ClusterIssuers. Getting certificates works great against the staging endpoint, but against production it gets stuck after OrderCreated. No Challenges are created.
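(As a quick sketch of how the missing Challenges can be confirmed — these are the cert-manager 0.8.x resource kinds, and the Order name is the one shown further down:)

kubectl get orders
# the Order lx-dev-backend-api-certificate-1000487569 exists but never progresses
kubectl get challenges --all-namespaces
# returns nothing: the Order never produced a Challenge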
kubectl describe clusterissuer letsencrypt-prod

Name:         letsencrypt-prod
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-06-05T10:14:35Z
  Generation:          2
  Resource Version:    15506938
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-prod
  UID:                 b7f915ba-877a-11e9-b573-c63045eb03ad
Spec:
  Acme:
    Email:  [email protected]
    Http 01:
    Private Key Secret Ref:
      Name:  letsencrypt-prod
    Server:  https://acme-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:  https://acme-v02.api.letsencrypt.org/acme/acct/58592957
  Conditions:
    Last Transition Time:  2019-06-05T10:14:37Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
kubectl describe ingress

Name:             lx-dev-backend-api-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  lx-dev-backend-api-certificate terminates api.luxon.dev.varunit.com
Rules:
  Host                       Path  Backends
  ----                       ----  --------
  api.luxon.dev.varunit.com
                             /     lx-dev-backend-api:80 (<none>)
Annotations:
  certmanager.k8s.io/cluster-issuer:            letsencrypt-prod
  kubernetes.io/ingress.class:                  nginx
  nginx.ingress.kubernetes.io/proxy-body-size:  8m
  certmanager.k8s.io/acme-challenge-type:       http01
Events:
  Type    Reason             Age  From                      Message
  ----    ------             ---  ----                      -------
  Normal  CREATE             19s  nginx-ingress-controller  Ingress default/lx-dev-backend-api-ingress
  Normal  CreateCertificate  19s  cert-manager              Successfully created Certificate "lx-dev-backend-api-certificate"
kubectl describe certificate

Name:         lx-dev-backend-api-certificate
Namespace:    default
Labels:       app.kubernetes.io/instance=lx-dev-backend-api
              app.kubernetes.io/managed-by=Tiller
              app.kubernetes.io/name=AppChart
              helm.sh/chart=AppChart-20190605.3
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-06-05T10:15:29Z
  Generation:          3
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  lx-dev-backend-api-ingress
    UID:                   d8512020-877a-11e9-b573-c63045eb03ad
  Resource Version:  15507075
  Self Link:         /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/lx-dev-backend-api-certificate
  UID:               d86222c7-877a-11e9-b573-c63045eb03ad
Spec:
  Acme:
    Config:
      Domains:
        api.luxon.dev.varunit.com
      Http 01:
        Ingress Class:  nginx
  Dns Names:
    api.luxon.dev.varunit.com
  Issuer Ref:
    Kind:  ClusterIssuer
    Name:  letsencrypt-prod
  Secret Name:  lx-dev-backend-api-certificate
Status:
  Conditions:
    Last Transition Time:  2019-06-05T10:15:29Z
    Message:               Certificate issuance in progress. Temporary certificate issued.
    Reason:                TemporaryCertificate
    Status:                False
    Type:                  Ready
Events:
  Type    Reason              Age  From          Message
  ----    ------              ---  ----          -------
  Normal  Generated           23s  cert-manager  Generated new private key
  Normal  GenerateSelfSigned  23s  cert-manager  Generated temporary self signed certificate
  Normal  OrderCreated        23s  cert-manager  Created Order resource "lx-dev-backend-api-certificate-1000487569"
kubectl describe order

Name:         lx-dev-backend-api-certificate-1000487569
Namespace:    default
Labels:       acme.cert-manager.io/certificate-name=lx-dev-backend-api-certificate
              app.kubernetes.io/instance=lx-dev-backend-api
              app.kubernetes.io/managed-by=Tiller
              app.kubernetes.io/name=AppChart
              helm.sh/chart=AppChart-20190605.3
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Order
Metadata:
  Creation Timestamp:  2019-06-05T10:15:30Z
  Generation:          1
  Owner References:
    API Version:           certmanager.k8s.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  lx-dev-backend-api-certificate
    UID:                   d86222c7-877a-11e9-b573-c63045eb03ad
  Resource Version:  15507073
  Self Link:         /apis/certmanager.k8s.io/v1alpha1/namespaces/default/orders/lx-dev-backend-api-certificate-1000487569
  UID:               d8b92e92-877a-11e9-b573-c63045eb03ad
Spec:
  Config:
    Domains:
      api.luxon.dev.varunit.com
    Http 01:
      Ingress Class:  nginx
Csr: MIICtzCCAZ8CAQAwOzEVMBMGA1UEChMMY2VydC1tYW5hZ2VyMSIwIAYDVQQDExlhcGkubHV4b24uZGV2LnZhcnVuaXQuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxnckWzfWGREPJNteagF5YiSNpR5WDJqWLUjfdjJZa9ElPTlgPHTCidWB1Ye7zZ6Bp7ACEmNTJ9m/l66qPUa1zbh/GzUTdcWAwqD7yCRgwWzVlZ0TmuBoPNqfiKD0eJQ5nMDB3nd6eDP1s4YhJQWo5P8GxNE6iQiaCcZXHqg7pgnArgjkXzCqMXDs6RNU0GP8Gufdm4BxFtUgq2cEXVhwCnJjpzIZBSmz2kf5VfFErtX0pW6ycMu6Nn3SN1CWJ9uxlZZHbD5K7pH4qWfY1tRW/m3z+sMJsz4qO+hj/rn1s/jQv2GI4k6wlmf3zXPy9XOHIj3PkCeIJJLPFF/e3Wf8VwIDAQABoDcwNQYJKoZIhvcNAQkOMSgwJjAkBgNVHREEHTAbghlhcGkubHV4b24uZGV2LnZhcnVuaXQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCYuDHOEajpFdfBaVehPA4m+Kx8g7GxlF/fjGYJCj/PSOERiNAZIgDtCOIamdFZzZ1ctFJ8yWb5b/re+lGxkToAyqLnTD9I2fZmJrV91eTS5AfuKsI8z2RUMTMAOxkOup6PFFwxatkhJp5dvhr7szTEViW8oXo+jZmYNEvAUNdhPu1TrISvR25NF7URb4q+j11vnDQ76t01anCvD3gx3CBWU9WQdNeltuxKTyuBbwkpRyG0nYVZ3Pc9+S4WLX6xR7W8T9A9lgWbjNDTplpLGjxg4JLENs8SRxHiD09NL08QTQg/AlWG2qAnxKd2EeQRRdvpNt+D8iIFfaoG3n2PmXR7
  Dns Names:
    api.luxon.dev.varunit.com
  Issuer Ref:
    Kind:  ClusterIssuer
    Name:  letsencrypt-prod
Status:
Events:  <none>
Expected behaviour:
According to the documentation, I would expect a Challenge to be created and a certificate to be issued.
Environment details:
/kind bug
I am experiencing the exact same issue.
Experiencing this on GKE. A previously working deployment is now failing to create Challenges from the Order.
Me too. Could it be the production rate limiting, maybe? Or does that throw a specific event?
OK, the solution to this was to NOT create the Certificate resource myself and to rely on ingress-shim to manage it automatically.
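For anyone trying that: a minimal sketch of what the ingress-shim approach looks like, assuming an nginx ingress controller. The names, host, and secret below are placeholders, not from this thread; the annotations are the same ones shown in the Ingress above, and no separate Certificate manifest is created.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress                    # placeholder
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - my-app.example.com                  # placeholder host
    secretName: my-app-tls                # ingress-shim creates a Certificate for this secret
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app             # placeholder Service
          servicePort: 80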
Hmm, I tried the ingress-shim approach but encountered the same issue.
Just had the same problem.
I followed @turowicz's approach, which ended at the same problem again.
Then I ran kubectl get challenges --all-namespaces, which prints all Challenges.
I then used kubectl describe challenge <name>, which showed me that there was a misconfiguration in the secret for my solver. Fixing that solved the issue for me.
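In case the exact commands help, roughly (a sketch; substitute the Challenge name and namespace that the first command prints):

kubectl get challenges --all-namespaces
kubectl describe challenge <challenge-name> -n <namespace>
# the Status and Events sections say why the challenge is stuck, e.g. a bad solver secret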
Still the same for me. I've tried both @turowicz's and @tobiasbeck's approaches. I have no Challenges when I do kubectl get challenges --all-namespaces and no events when I do kubectl describe certificate tls-secret.
The type is 'Ready', the status is 'False', and the reason is 'TemporaryCertificate', with the message 'Certificate issuance in progress. Temporary certificate issued.'
If I delete the Certificate, the shim instantly creates another one, but with exactly the same result.
I ended up downgrading to v0.7.2, which "fixed" the issue for me. Given @tobiasbeck's comment, I'm curious which solver providers you are all using; maybe there's a trend there? I'm using clouddns.
For reference, here's the ClusterIssuer config I _attempted_ to use with v0.8.0:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    # old way, for certs with ``certificate.spec.acme`` stanza.
    dns01:
      providers:
      - name: prod-dns
        clouddns:
          project: PROJECT_NAME
          serviceAccountSecretRef:
            name: clouddns-cert-manager-service-account
            key: service-account.json
    # new way, for certs without the ``certificate.spec.acme`` stanza.
    solver:
    - dns01:
        clouddns:
          project: PROJECT_NAME
          serviceAccountSecretRef:
            name: clouddns-cert-manager-service-account
            key: service-account.json
And here's my working config with v0.7.2:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    dns01:
      providers:
      - name: prod-dns
        clouddns:
          project: PROJECT_NAME
          serviceAccountSecretRef:
            name: clouddns-cert-manager-service-account
            key: service-account.json
What helped me with the problem was opening kubectl logs <pod> on the cert-manager pod responsible for certificate acquisition. In my case the log said I had exceeded some limits and needed to wait for 7 days.
If anyone here wonders what I was doing wrong: spec.solver should be spec.solvers (plural) :man_facepalming:
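For completeness, the corrected acme stanza with the plural key — just a sketch of my config above with that one fix applied:

spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        clouddns:
          project: PROJECT_NAME
          serviceAccountSecretRef:
            name: clouddns-cert-manager-service-account
            key: service-account.json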
Per @turowicz, checking the cert-manager log is the first thing I would do. It usually tells you why it cannot acquire a certificate:
kubectl logs -n cert-manager deploy/cert-manager -f
In my case, I was facing the same issue. After looking into the logs from cert-manager, I saw "msg"="propagation check failed" "error"="failed to perform self check GET request ...
The issue was caused by a subdomain that was included in the certificate but did not exist; I had not created the subdomain.
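If you hit the same "failed to perform self check" error, a quick way to test from outside the cluster is roughly this (a sketch: the hostname is a placeholder and the token path is made up; cert-manager serves the real token under /.well-known/acme-challenge/):

nslookup api.example.com
curl -v http://api.example.com/.well-known/acme-challenge/some-token
# if the name does not resolve, or the challenge path is unreachable, the self check keeps failing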