Describe the bug:
We've been seeing a lot of errors in our logs like this:
cert-manager/secret-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.certmanager.k8s.io \"example-tls4\" not found" "certificate"={"Namespace":"default","Name":"example-tls4"} "secret"={"Namespace":"default","Name":"example-tls4"}
I believe cert-manager is picking up the secret and trying to reconcile it, but the Certificate that owned the secret has been deleted.
In this case only the secret remained; nothing shows up under kubectl get certificates.
Would it not make sense for cert-manager to automatically clean up these secrets once the resource/ingress has been deleted?
Expected behavior:
Orphaned secrets would be garbage collected/deleted
Steps to reproduce the bug:
Delete an ingress resource, the secret will remain.
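For example (illustrative only; the manifest name, annotation, and secret name below are placeholders for whatever your cluster uses, and this assumes ingress-shim is enabled):

kubectl apply -f example-ingress.yaml      # Ingress annotated for cert-manager (e.g. kubernetes.io/tls-acme: "true")
kubectl get certificate,secret             # cert-manager creates a Certificate and its TLS secret
kubectl delete -f example-ingress.yaml     # the Ingress and the Certificate are removed...
kubectl get secret example-tls4            # ...but the TLS secret is still there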
Anything else we need to know?:
Environment details:
/kind bug
This is impacting me as well. We strive for the cluster to be in the same state pre-apply and post-delete. Right now the secrets are the only artifact left over.
This is supported; however, it is not the default (as it has the downside of taking down your production services if you accidentally delete any CRDs).
To enable it, add the --enable-certificate-owner-ref flag to your controller: https://github.com/jetstack/cert-manager/blob/f1d591a5317fda693a8df755e4c9ceaece998dbb/cmd/controller/app/options/options.go#L275-L277
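If you deployed with the Helm chart, something along these lines should pass the flag through (a sketch, assuming your chart version exposes an extraArgs value and that the release and namespace are both named cert-manager):

helm upgrade cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --reuse-values \
  --set 'extraArgs={--enable-certificate-owner-ref=true}'

Otherwise you can append the flag to the controller Deployment's args directly (Deployment/namespace names are assumptions, adjust to your install):

kubectl -n cert-manager patch deployment cert-manager --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-certificate-owner-ref=true"}]'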
Hope that helps! 😄
Script to clean these up if you don't have that flag set:
https://github.com/richstokes/k8s-scripts/tree/master/clean-orphaned-secrets-cert-manager
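For reference, a minimal sketch of the same idea with kubectl and jq (the annotation key is an assumption: newer releases annotate secrets with cert-manager.io/certificate-name, older ones with certmanager.k8s.io/certificate-name; the delete is commented out so you can review the output first):

kubectl get secrets --all-namespaces -o json \
  | jq -r '.items[]
      | .metadata.annotations["cert-manager.io/certificate-name"] as $cert
      | select($cert != null)
      | "\(.metadata.namespace) \(.metadata.name) \($cert)"' \
  | while read -r ns secret cert; do
      # if the Certificate that owned this secret is gone, the secret is orphaned
      if ! kubectl get certificate "$cert" -n "$ns" >/dev/null 2>&1; then
        echo "orphaned: $ns/$secret (certificate $cert not found)"
        # kubectl delete secret "$secret" -n "$ns"   # uncomment to actually delete
      fi
    done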
@richstokes you just saved my ass! I found so little information searching for the errors that cert-manager was throwing, and finally came upon this using the logs from the injector pod.
Ha. Glad you found it helpful!