Cert-manager: Force cert-manager to renew the certificate

Created on 3 Mar 2020 · 14 comments · Source: jetstack/cert-manager

There has been an issue with Let's Encrypt, and as a result some certificates need to be renewed.

This needs to be done by tomorrow, because the affected certificates will be revoked and listed in the CRLs.

Unfortunately, this means we need to revoke the certificates that were affected
by this bug, which includes one or more of your certificates. To avoid
disruption, you'll need to renew and replace your affected certificate(s) by
Wednesday, March 4, 2020. We sincerely apologize for the issue.

I've read through the issues on GitHub, but did not find a clear path to renewing the certificates.
Is there a way to mark a certificate as expired if you use an Ingress with a cluster issuer?

I tried to delete the secret, but as the certificate is not expired yet, no new certificate was issued.
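For reference, the Certificate resources that ingress-shim creates can be inspected directly to see why cert-manager is not re-issuing; the namespace and certificate names below are placeholders:

# Placeholder names: adjust namespace and certificate name to your setup.
kubectl get certificates -n my-namespace
# The events in the describe output show why cert-manager did or did not re-issue.
kubectl describe certificate my-cert -n my-namespace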

Environment details (if applicable):

  • Kubernetes version (e.g. v1.10.2): v1.14.10
  • Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): GKE
  • cert-manager version (e.g. v0.4.0): cert-manager-v0.11.0
  • Install method (e.g. helm or static manifests): helm

/kind feature

area/api kind/feature priority/important-soon

All 14 comments

This issue also affects me in a self-managed GitLab Kubernetes cluster on AWS.

I just reissued my cert by setting renewBefore: 1440h on the cert object, then removing it once the new cert was issued.

I deleted the secret first and then the certificate. When cert-manager recreated the certificate, it issued a new one.
The problem with this solution is that you will be without a valid certificate for a few seconds.
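A minimal sketch of that delete-and-recreate approach, assuming an ingress-shim-managed Certificate and Secret both named example-com-tls in the current namespace (substitute your own names):

# Placeholder resource names; the Ingress annotation causes cert-manager to recreate the Certificate.
kubectl delete secret example-com-tls
kubectl delete certificate example-com-tls
# Watch the replacement certificate being issued (expect a brief window without a valid cert).
kubectl get certificate example-com-tls -w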

I did not configure the certificate resource, but cert-manager created this automatically from the ingress controller. I was able to renew the certificate by changing renewBefore to 1440h. Thanks for that @whs

I have quite a few certificates so for each namespace of concern I'm running

kubectl get certs --no-headers=true | awk '{print $1}' | xargs -n 1 kubectl patch certificate --patch '
- op: replace
  path: /spec/renewBefore
  value: 1440h
' --type=json

Waiting until the certs renew and then:

kubectl get certs --no-headers=true | awk '{print $1}' | xargs -n 1 kubectl patch certificate --patch '
- op: remove
  path: /spec/renewBefore
' --type=json

Note that this means I'm removing the renewBefore parameter altogether as I don't typically have it set. Depending on your setup you may want to use a different number of hours - 2112h for example.

I've attempted the renewBefore option, but it doesn't appear to be working. We're using v0.11.0 as well. Our cert doesn't expire for another 73 days, so I'm guessing something is detecting that it's more than 30 days out and that's taking precedence over the renewBefore option.

The log message I'm seeing says "certificate does not require re-issuance. certificate renewal scheduled near expiry time", which appears to be located here (wrapped in this conditional): https://github.com/jetstack/cert-manager/blob/master/pkg/controller/certificates/sync.go#L290

And that variable is being set by c.certificateRequiresIssuance on line 270 of that same file. I don't know the language well enough to dig much deeper, or to be confident in what I've found so far, to be honest. But my assumption is that I can't force-renew this cert, and I'll need to delete the TLS secret and cert, and deal with downtime while I wait for the issuance of a new cert.
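If it helps, that log message can be confirmed directly in the controller logs; the namespace and deployment name below assume a default Helm install of cert-manager:

# Assumes the chart's default namespace and deployment names.
kubectl logs -n cert-manager deploy/cert-manager | grep "does not require re-issuance"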

If anyone has any other solutions, please let me know. I don't have much time to get this resolved because the notice from Let's Encrypt gave me until tomorrow (with no time or even timezone indicated), so I'm hoping for a no-downtime solution, but I'm preparing for the worst given the timeframe.

Here's the more detailed notice about the reissuing of the certificates:
https://community.letsencrypt.org/t/revoking-certain-certificates-on-march-4/114864

Q: What timezone will the revocations start in on 04 March 2020?
A: UTC. We have not fully set a start time for revocations, but the earliest it will occur is 00:00 UTC.

so this is really quite short notice...

So in case you are using GitLab Auto DevOps and cert-manager, you can just delete the secret in the respective namespace to have cert-manager generate a new cert. Each of your deployed apps should have a secret called staging-auto-deploy-tls (staging is my environment in this case). Once it is deleted, the cert-manager pods will pick this up and request a new cert. Be aware that the sites will throw a certificate warning during the window in which cert-manager creates the new certificate. For me it took roughly 15-20 seconds.
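In concrete terms that amounts to something like the following, where the namespace is a placeholder and the secret name follows the <environment>-auto-deploy-tls convention:

# Placeholder namespace; the secret name is <environment>-auto-deploy-tls per the Auto DevOps convention.
kubectl delete secret staging-auto-deploy-tls -n my-app-staging
# cert-manager should notice the missing secret and request a replacement certificate.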

@jceresini: the renewBefore value (1440h above) needs to be bigger than the number of hours remaining until the certificate expires.

So when editing the cert, if you see renewal is in 1800 hours, make it 1900. That worked for me.
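One way to sanity-check how many hours are actually left before picking a renewBefore value (the secret name is a placeholder; requires openssl in your shell):

# Print the certificate's expiry date straight from the TLS secret.
kubectl get secret example-com-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
# Choose a renewBefore larger than the number of hours until that date,
# otherwise cert-manager still treats the certificate as not yet due for renewal.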

Just deleting the secret worked for me; a new certificate is immediately requested and the secret is recreated shortly afterward.

For those for whom renewBefore doesn't seem to work, make sure that you have only one set of CRDs defined.

# Check for new CRD names.
kubectl get crd | grep cert-manager
certificaterequests.cert-manager.io            2020-02-06T21:31:35Z
certificates.cert-manager.io                   2020-02-06T21:31:35Z
challenges.acme.cert-manager.io                2020-02-06T21:31:35Z
clusterissuers.cert-manager.io                 2020-02-06T21:31:36Z
issuers.cert-manager.io                        2020-02-06T21:31:36Z
orders.acme.cert-manager.io                    2020-02-06T21:31:36Z
# Check for old CRD names
kubectl get crd | grep certmanager
# This command should return nothing.

If you still have the old certificate CRDs (certificates.certmanager.k8s.io) then when you issue kubectl commands they might be working on those resources and not the new ones.
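One way to make sure your patches are landing on the new resources is to use the fully-qualified resource name instead of the cert/certs short name:

# Explicitly target the new API group.
kubectl get certificates.cert-manager.io --all-namespaces
# If the old API group still returns resources, the legacy CRDs are still installed
# and the short name may be resolving to them.
kubectl get certificates.certmanager.k8s.io --all-namespaces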

The new 0.13 version should automatically remove old CRDs if new ones are already present or at least warn the user somehow.

We have now created this tool to make it easier to handle the specific case of Let's Encrypt revoking ~2% of issued certificates 😅.

It will analyse your currently installed certificates, and automatically trigger renewals for those affected.

https://github.com/jetstack/letsencrypt-caa-bug-checker

Thanks, that's very helpful, just checked 3 clusters.

People should still check for old CRDs messing with kubectl.

This has been implemented as part of https://github.com/jetstack/cert-manager/pull/2753

We will also have a new CLI tool with a renew subcommand as part of the v0.15 release 😄 https://github.com/jetstack/cert-manager/pull/2803
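Once that ships, forcing a renewal should look roughly like the following via the kubectl plugin; the exact invocation may change before release, so treat this as a sketch:

# Hypothetical invocation of the upcoming renew subcommand (names are placeholders).
kubectl cert-manager renew my-cert -n my-namespace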

This requires the 'experimental' certificates controller feature gate to be enabled, which will hopefully become the default in v0.16. Please give the recently released alpha.1 a try if you're keen to test this!
