Cert-manager: Add advice on how to sync a secret between namespaces to documentation

Created on 19 Apr 2018 · 58 comments · Source: jetstack/cert-manager

I gave cert-manager v0.3 a try to create wildcard certificates and it's working, thanks for this awesome project :)

I'd like to use the wildcard certificate in multiple Ingresses and namespaces, but I don't know how to keep the TLS secret in sync using cert-manager.

Of course I can do this using a cronjob, dumping the secret and copying it to another namespace, like: kubectl get secrets -o json --namespace kube-system tls-cert | jq '.metadata.namespace = "new"' | kubectl create -f - but this is not the ideal scenario.

What about having the ClusterIssuer or Certificate CRD support an extra field to sync secrets between namespaces?

- apiVersion: certmanager.k8s.io/v1alpha1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt.domain.com
    namespace: ""
  spec:
    acme:
      dns01:
        providers:
        - name: route53
          route53:
            accessKeyID: AAAAA
            hostedZoneID: Z2AFCYVIUTNMRA
            region: eu-west-1
            secretAccessKeySecretRef:
              key: secret-access-key
              name: letsencrypt-route53-clusterissuer
      email: [email protected]
      privateKeySecretRef:
        key: ""
        name: letsencrypt.domain.com
      server: https://acme-v02.api.letsencrypt.org/directory
  syncNamespacesTLS:  # <= THIS (proposed field)
    - logging
    - kube-system
    - monitoring
    - prod
    - apps

Related to https://github.com/kubernetes/ingress-nginx/issues/2170 and https://github.com/kubernetes/ingress-nginx/issues/2371

good first issue · help wanted · kind/documentation · lifecycle/rotten · priority/backlog


All 58 comments

As an alternative, can you try creating multiple Certificate resources (in each namespace) that reference the same issuer?

The ACME server itself should deduplicate the orders, and so not waste quotas etc. It does mean a bit more management (you now need to create a Certificate in each namespace), but it should all work fine 😄
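For illustration only, a minimal sketch of that suggestion - the same wildcard Certificate repeated per namespace, all referencing one ClusterIssuer. All names below are placeholders, and as the replies below show, this can still end up issuing separate certificates and eating into rate limits:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: monitoring        # repeat this resource in every namespace that needs the cert
spec:
  secretName: wildcard-example-com-tls
  issuerRef:
    kind: ClusterIssuer        # shared, cluster-scoped issuer
    name: letsencrypt-prod
  dnsNames:
  - '*.example.com'
  acme:
    config:
    - dns01:
        provider: route53
      domains:
      - '*.example.com'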

@munnerz didn't work, I hit 2 issues.

The first is a race condition between the certificates trying to change the same DNS record; after 5 minutes all the certificates passed the challenge and were created by cert-manager.

I0419 10:39:05.275653       1 sync.go:241] Error preparing issuer for certificate ingress/wildcard.apps.dta.a.domain.com: Failed to change Route 53 record set: InvalidChangeBatch: Tried to delete resource record set [name='_acme-challenge.domain.com.', type='TXT'] but it was not found
    status code: 400, request id: e1d87819-43bd-11e8-9c70-19a06b7c29d8
E0419 10:39:05.286260       1 sync.go:168] [ingress/wildcard.apps.dta.a.domain.com] Error getting certificate 'wildcard.apps.dta.a.domain.com': secret "wildcard.apps.dta.a.domain.com" not found
E0419 10:39:05.286303       1 controller.go:186] certificates controller: Re-queuing item "ingress/wildcard.apps.dta.a.domain.com" due to error processing: Failed to change Route 53 record set: InvalidChangeBatch: Tried to delete resource record set [name='_acme-challenge.domain.com.', type='TXT'] but it was not found
    status code: 400, request id: e1d87819-43bd-11e8-9c70-19a06b7c29d8

The second issue is that in each namespace Let's Encrypt returned a new certificate, and most probably the previous one will be revoked, right?

My certificates:

Giancarlos-MBPro:~ grubio$ cat a
apiVersion: v1
items:
- apiVersion: certmanager.k8s.io/v1alpha1
  kind: Certificate
  metadata:
    name: wildcard.apps.dta.a.domain.com
    namespace: logging
  spec:
    acme:
      config:
      - dns01:
          provider: route53
        domains:
        - "domain.com"
    dnsNames:
    - "*.domain.com"
    issuerRef:
      kind: ClusterIssuer
      name: letsencrypt.domain.com
    secretName: wildcard.apps.dta.a.domain.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Giancarlos-MBPro:~ grubio$ cat b
apiVersion: v1
items:
- apiVersion: certmanager.k8s.io/v1alpha1
  kind: Certificate
  metadata:
    name: starmonitoring.apps.dta.a.domain.com
    namespace: monitoring
  spec:
    acme:
      config:
      - dns01:
          provider: route53
        domains:
        - "domain.com"
    dnsNames:
    - "*.domain.com"
    issuerRef:
      kind: ClusterIssuer
      name: letsencrypt.domain.com
    secretName: starmonitoring.apps.dta.a.domain.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Generated TLS certificate in namespace logging:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            fa:d3:02:ac:25:a0:7c:4d:7d:9e:ab:08:83:1a:67:04:ef:97
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=Fake LE Intermediate X1
        Validity
            Not Before: Apr 19 09:39:45 2018 GMT
            Not After : Jul 18 09:39:45 2018 GMT
        Subject: CN=*.domain.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
...

Generated TLS certificate in namespace monitoring:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            03:a1:32:a8:52:e2:43:d3:6b:fd:58:ee:b5:27:68:b1:c0:04
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Validity
            Not Before: Apr 19 09:31:32 2018 GMT
            Not After : Jul 18 09:31:32 2018 GMT
        Subject: CN=*.domain.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
...

I don't think the previous one will be revoked.

Due to a limitation in the underlying acme library we use, we cannot currently request the same certificate. The authorizations will be reused however.

What happens right now when you create the same certificate twice:

  • Both Certificates get processed, they both create new orders
  • Because the ACME server performs deduplication of orders, it will return the same order URL for both Certificates
  • Both certificates will present challenges. Because of deduplication of authorizations performed by the ACME server, they will both be setting/presenting the same values
  • They will both be validated/succeed authorization at around the same time (depending on internal rate limits/workqueue backoffs)
  • They will both call FinalizeOrder
  • Because of this check: https://github.com/jetstack/cert-manager/blob/master/third_party/crypto/acme/acme.go#L231 - only one of the orders will succeed finalizing
  • The order that finalized first will succeed and the first cert will be issued.
  • The other certificate will then recreate a new order as the old one is classed as failed
  • This new order will have 0 pending authorizations (as authorizations are associated with the ACME account, and not the order), so it will immediately proceed to FinalizeOrder, which will succeed and a second certificate will be issued.

At no point do we call revoke on the previous certificate, and as far as I'm aware creating a new order for the same DNS names does not revoke previous certificates.

Isn't there a rate limit on the number of certs you can issue for the same domain? I have a need to keep a cert synced between 30+ namespaces. It would be nice if it only issued one cert but then made it available in all or a subset of namespaces. Otherwise, I will need to do something like @gianrubio says and issue the certs in one namespace and have a cron job copy them to other namespaces.

@from-nibly There's a good discussion on the nginx ingress repo about allowing the controller to read secrets from other namespaces; I'd like to invite you to read and contribute there.

https://github.com/kubernetes/ingress-nginx/issues/2371

My current solution is:

1. Generate one certificate in the kube-system namespace, covering all required domains, using this Certificate:

apiVersion: v1
items:
- apiVersion: certmanager.k8s.io/v1alpha1
  kind: Certificate
  metadata:
    name: wildcard.apps.my.domain.com
    namespace: kube-system
  spec:
    acme:
      config:
      - dns01:
          provider: apps-my-domain-com
        domains:
        - apps.my.domain.com
        - monitoring.apps.my.domain.com
        - logging.apps.my.domain.com
        - kube-system.apps.my.domain.com
    commonName: ""
    dnsNames:
    - '*.apps.my.domain.com'
    - '*.monitoring.apps.my.domain.com'
    - '*.logging.apps.my.domain.com'
    - '*.kube-system.apps.my.domain.com'
    issuerRef:
      kind: ClusterIssuer
      name: letsencrypt.apps.my.domain.com
    secretName: wildcard.apps.my.domain.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

2. Keep the TLS secret in sync across all namespaces using this CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cert-manager-cronjob
spec:
  schedule: "* */4 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: kube-system
            release: kube-system
        spec:
          restartPolicy: OnFailure
          serviceAccountName: cert-manager-cronjob
          containers:
            - name: hyperkube
              command: ["/bin/bash"]
              image: "quay.io/coreos/hyperkube:v1.7.6_coreos.0"
              args: ["-c", "for i in $(./kubectl get ns -o json |jq -r \".items[].metadata.name\" |grep  -v kube-system); do ./kubectl get secret -o json --namespace kube-system wildcard.apps.my.domain.com --export |jq 'del(.metadata.namespace)' |./kubectl apply -n ${i}-f -;  done"]

and have fun :)

Looking forward to nginx-ingress supporting cross-namespace secrets.

Maybe one can set up kubed to copy certificates as well.
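If anyone tries this: kubed's config-sync works by putting a sync annotation on the source Secret and (optionally) labels on the namespaces that should receive a copy. A minimal sketch, assuming kubed is installed with config syncing enabled. The annotation key below is the one from kubed's documentation - double-check it against your kubed version - and note that at this point cert-manager gives you no way to declare the annotation on the generated secret, so in practice you annotate it after cert-manager has created it:

apiVersion: v1
kind: Secret
metadata:
  name: wildcard-example-com-tls       # the secret cert-manager created (placeholder name)
  namespace: kube-system
  annotations:
    kubed.appscode.com/sync: "cert-sync=enabled"   # copy into namespaces carrying this label
type: kubernetes.io/tls
data: {}                               # certificate data managed by cert-manager
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    cert-sync: "enabled"               # this namespace receives a copy of the annotated secret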

@gianrubio Thanks for your paste. Note that you need to change ${i}-f to ${i} -f, change OnFailure to Never, and first add a ServiceAccount like so:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-manager-cronjob

And of course use a hyperkube image that matches your Kubernetes version.
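One more gap in the snippet above: the ServiceAccount by itself has no API permissions, so the CronJob also needs RBAC that can list namespaces, read the secret in kube-system and write it into the other namespaces. A minimal sketch, assuming the names used above:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cert-manager-cronjob
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cert-manager-cronjob
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager-cronjob
subjects:
- kind: ServiceAccount
  name: cert-manager-cronjob
  namespace: kube-system      # the namespace the CronJob runs in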

I don't think this is something we should support directly in cert-manager.

We can improve how we handle this in the ACME case, to make it less problematic wrt quotas, but there are numerous ways to do this in the Kubernetes API already, and any solution cert-manager implements will be in some way race-y.

We could link to/include some recommended options/workarounds for this in our docs perhaps? If someone can put something together 😄

Adding my own full solution in case it's useful :)

  • It looks for certs to sync in the current namespace.
  • It expects a kernelpay.com/sync-to-namespaces annotation with a comma-separated list of destination namespaces. If the annotation isn't there, it'll do nothing.
  • It expects the secret to be named tls- + the cert name.

Dockerfile

FROM ubuntu:16.04

ADD https://storage.googleapis.com/kubernetes-release/release/v1.10.4/bin/linux/amd64/kubectl /
RUN chmod +x /kubectl

FROM ubuntu:16.04

RUN apt-get -y update && \
    apt-get -y install jq && \
    rm -rf /var/lib/apt/lists/*

COPY --from=0 /kubectl /usr/local/bin/kubectl

ENTRYPOINT ["/kubectl"]

Manifests

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-sync
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cert-sync
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates"]
    verbs:
    - get
    - list
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cert-sync
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-sync
subjects:
  - name: cert-sync
    namespace: certs   # YOUR NAMESPACE HERE
    kind: ServiceAccount
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cert-sync
spec:
  schedule: "0 */4 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsUser: 1000
          restartPolicy: Never
          serviceAccountName: cert-sync
          containers:
          - name: sync
            image: "your-registry-here/kubectl:v1.10.4"  # YOUR REGISTRY HERE
            command: ["/bin/bash"]
            args:
            - -c
            - |
              for cert in $(kubectl -n $POD_NAMESPACE get certificate -o json | jq -r '.items[].metadata.name'); do
                  namespaces=$(kubectl -n $POD_NAMESPACE get certificate -o json $cert | jq -r '.metadata.annotations["kernelpay.com/sync-to-namespaces"]')
                  if [ "$namespaces" != "null" ]; then
                      for ns in ${namespaces//,/ }; do
                          echo Syncing cert $cert to namespace $ns
                          kubectl -n $POD_NAMESPACE get secret tls-$cert -o json --export | kubectl -n $ns apply -f -
                      done
                  fi
              done
            env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

Example Certificate

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: kernelpay-com
  namespace: certs
  annotations:
    kernelpay.com/sync-to-namespaces: kernel,website
spec:
  secretName: tls-kernelpay-com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  commonName: '*.kernelpay.com'
  dnsNames:
    - 'kernelpay.com'
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - '*.kernelpay.com'
      - 'kernelpay.com'

As I've just (privately) discussed on Slack: when you run into this problem, you're probably resource sharing across critical boundaries. Find ways to split up your namespaces into different subdomains. This is not a hard-and-fast rule, just my gut feeling. The workaround provided by @Dirbaio probably works but it's clunky and a little bit of a hack (which I'm no stranger to, but I do sometimes readjust my designs in order to prevent these kinds of hacks).

As an alternative, can you try creating multiple Certificate resources (in each namespace) that reference the same issuer?

The ACME server itself should deduplicate the orders, and so not waste quotas etc. It does mean a bit more management (you now need to create a Certificate in each namespace), but it should all work fine

If anyone's paying as little attention to the detail in the other comments as I was and is thinking of doing this: don't! I just hit the rate limit; I should've read the rest of the thread more carefully 😢.

It seems like it works like this _sometimes_ (it certainly did when I tested it initially!) but most of the time it just requests a new certificate. It took about 3 days of running this on two branches with automatic deployments on commit to hit the limit.

EDIT: Ended up getting around it with https://github.com/kubernetes/ingress-nginx/issues/2170#issuecomment-392855039

I see a couple workarounds in this thread that use CronJobs to sync TLS Secrets, which is good for syncing on an interval. I found that using a Deployment with kubectl watch can lead to nearly real-time syncing of TLS Secrets.

The Deployment runs 2 kubectl containers - one to watch for new namespaces and copy the TLS Secret, and one to watch the TLS Secret for changes and apply to all namespaces. The code is here: ingress-cert-reflector.yml and I also wrote a corresponding blog post with detailed instructions.
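The linked manifest isn't reproduced here, but the shape of the idea is roughly the following: a single-replica Deployment whose container re-applies the secret whenever kubectl's watch reports a change. This is an untested sketch; the image, secret name and namespace are placeholders, and it assumes an image with bash, kubectl and jq (like the Dockerfile shown earlier in the thread) plus RBAC similar to the CronJob examples above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-reflector
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-reflector
  template:
    metadata:
      labels:
        app: cert-reflector
    spec:
      serviceAccountName: cert-reflector         # needs cluster-wide read/write on secrets
      containers:
      - name: watch-secret
        image: your-registry/kubectl-jq:latest   # placeholder image providing kubectl + jq
        command: ["/bin/bash", "-c"]
        args:
        - |
          # Each time the secret changes, copy it into every other namespace.
          kubectl -n kube-system get secret wildcard-tls --watch -o name | while read -r _; do
            for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
              [ "$ns" = "kube-system" ] && continue
              kubectl -n kube-system get secret wildcard-tls -o json \
                | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
                | kubectl -n "$ns" apply -f -
            done
          done

A second container watching kubectl get ns --watch -o name can handle newly created namespaces in the same way.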

I published a blog post on how to use wildcard certs in Kubernetes and sync them across namespaces with kubed: https://rimusz.net/lets-encrypt-wildcard-certs-in-kubernetes/

Since there is already a distinction between Issuer and ClusterIssuer, there should be either a subsection in Certificate to add namespaces, or an additional ClusterCertificate entity that allows a list of namespaces. In the case of a wildcard shared across namespaces, a ClusterIssuer seems like half the job, and syncing/duplicating the certificate in any way seems like an unnecessary increase in risk. The most logical advice seems to be @pieterlange's, but the whole idea of a wildcard in the first place is to avoid per-domain Let's Encrypt limits.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tlswild
# instead of single namespace suggestion??
  namespaces: 
    - demo1
    - demo2
    - ui-test
spec:
  secretName: tlswild
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: '*.{{ settings.provision.domain }}'
  dnsNames:
  - {{ settings.provision.domain }}
  acme:
    config:
    - dns01:
        provider: route53
      domains:
      - '*.{{ settings.provision.domain }}'
      - {{ settings.provision.domain }}

EDIT: @gianrubio +1 for sync if sharing a single secret (sort of a symlink) cannot be accomplished. The entire chain of permissions would most likely start at the Ingress level. It seems unnatural to be doing this on the Certificate, but the cert itself should also be granted namespace permissions (for access... not replication).

This is going around in circles somewhat. Whenever I try to follow these threads I get bounced between three different actors:

  • cert-manager - doesn't believe cert-manager should be managing them and refers (see above) to ingress-nginx
  • ingress-nginx - things like a 'default namespace' for certs have been shot down a number of times. ingress-nginx doesn't want to do anything that is ingress-specific and often points at changes to the Ingress definition spec in the main Kubernetes repo to fix this and allow namespace referencing
  • ingress spec in core kubernetes - this would be pretty glacial to get implemented and then there will be questions about ringfencing.

This leaves us in a circular 'not my problem' loop, which either forces people to do nasty hacks to copy secrets, define the certificate in multiple places, or not use namespaces and chuck all their workloads in default.

To me, the easiest place right now to implement this would be in cert-manager. However, the example above (copied in part below) seems to break the received wisdom of kubernetes resources by having multiple namespaces in the metadata block of the certificate resource.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tlswild
# instead of single namespace suggestion??
  namespaces: 
    - demo1
    - demo2
    - ui-test

For my 2 cents, I would modify spec.secretName to be either a list or a string. If it were a string it would default to the namespace in metadata.namespace. However, if it were a list, it would be a list of certificate locations like so...

spec:
  secretName:
    - name: mysecret1
      namespace: namespace1
    - name: mysecret2
      namespace: namespace2

@withnale: @rimusz wrote an article describing how to use appscode/kubed to sync certs amongst namespaces.

See the original https://github.com/jetstack/cert-manager/issues/494#issuecomment-404146795

I think the best bet here is to integrate the recent blog post by @rimusz into our own documentation so we can close this issue off.

Assuming @rimusz is okay with it, would someone be able to do the work to get this converted into rst format? 😄

When I replied on the 18th Jul I had read the earlier parts of the thread.

I have read @rimusz's contribution, but while it's useful it's in no way a clean fix, since it introduces yet another moving part into the whole process. Having another out-of-band process copying secrets just doesn't seem very strategic, which I have alluded to in my post.

cert-manager already has the cert - I don't see why it would be contentious to make it write this to multiple secrets and make this native functionality.

I do not want to overload the Certificate resource with fields that are not particularly relevant to specifically Certificates.

The Kubernetes API is intentionally a layered design, and as such these sorts of problems are supposed to be solved by other components. For example, the Kubernetes Job resource could have added a schedule field to its spec instead of creating a CronJob. However, that would overload the Job type. Instead, CronJob was created which embeds a Job resource, adding additional fields related to timing/scheduling.

So for the time being, something like kubed can sync secret resources between namespaces.
There have been various issues raised around 'alternate delivery mechanisms' for Certificates.

For example:

  • Instead of creating a secret, setting a caBundle field (or similar) on a particular resource type (e.g. APIService)
  • Altering the 'keys' used in the stored secret (i.e. using ca.crt instead of tls.crt)
  • Storing the secret in multiple namespaces.
  • 'Pushing' certificates directly to a service (e.g. to the pod)

IMO - all of these use-cases could be solved by some form of 'certificate projection' concept within cert-manager. This could define how we want to 'project' secrets into some desired form (e.g. into a namespace).

This is something I think we can layer on top of our existing API, and has quite far-reaching potential for users.

At some point, I'd love to see a proposal document drawn up along these lines 😄

My key point here however - is that I do not want to pollute the Certificate's resource spec with fields that are not directly related to Certificates, and I feel that a list of namespaces to store the Certificate in is actually a part of a larger problem around certificate delivery mechanisms (and so I'd like to see all avenues explored before we commit to something!)
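Purely to make the 'projection' idea concrete - none of this is an actual cert-manager API, and every field name below is hypothetical:

# Hypothetical resource, for discussion only - not implemented by cert-manager.
apiVersion: certmanager.k8s.io/v1alpha1
kind: CertificateProjection
metadata:
  name: wildcard-to-app-namespaces
  namespace: kube-system
spec:
  certificateRef:
    name: wildcard.apps.my.domain.com   # existing Certificate whose secret gets projected
  targets:
  - secret:
      namespaces: [logging, monitoring, prod]   # copy the secret into these namespaces
  - secret:
      keyMapping:
        tls.crt: ca.crt                 # remap keys in the projected secret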

I've submitted a proposal for a new resource that would allow simply using a ClusterSecret instead of a Secret to share a wildcard certificate among different namespaces: https://github.com/kubernetes/kubernetes/issues/70147

Please upvote if you like it :)

@sheerun Hey Adam. A light in a dark tunnel, my friend. Having the ability to share with the cluster or selected namespaces is a good path to a better place. On my way to upvote... Thank you!!!

Just for reference, until this is resolved via the ClusterSecret proposed above, I also use a Helm hook that copies certificates after installation/upgrade (I install cert-manager into the kube-system namespace):

{{- $ := . -}}
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ $.Values.namespace }}
  name: {{ $.Values.namespace }}-cert-copy
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: {{ $.Values.namespace }}
      containers:
        - name: hyperkube
          command: ["/bin/bash"]
          image: "quay.io/coreos/hyperkube:v1.10.3_coreos.0"
          args: ["-c", "for name in $(kubectl get secrets -o json --namespace kube-system | jq -r '.items[].metadata.name' | grep wildcard); do ./kubectl get secret -o json --namespace kube-system $name --export | jq 'del(.metadata.namespace)' | ./kubectl apply -n {{ $.Values.namespace }} -f -; done"]
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: {{ $.Values.namespace }}
  name: {{ $.Values.namespace }}-cert-copy
spec:
  schedule: "20 0 * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: {{ $.Values.namespace }}
          containers:
            - name: hyperkube
              command: ["/bin/bash"]
              image: "quay.io/coreos/hyperkube:v1.10.3_coreos.0"
              args: ["-c", "for name in $(kubectl get secrets -o json --namespace kube-system | jq -r '.items[].metadata.name' | grep wildcard); do ./kubectl get secret -o json --namespace kube-system $name --export | jq 'del(.metadata.namespace)' | ./kubectl apply -n {{ $.Values.namespace }} -f -; done"]

Would someone be able to make a pull request against our docs with some advice on this? It'll make it a lot more visible for anyone else who's trying to do the same thing in future 😄

I am using this as a temporary solution to synchronize my certificates: https://github.com/mittwald/kubernetes-replicator

But for the replicator you still have to create the replication secrets in all the other namespaces. That's just awful. Imagine a cluster with 30 namespaces and 5 certificates: you'd have to create 150 Secret resources... The job sounds like a more maintainable solution. (Or maybe I didn't understand the replicator.)

@AndresPineros you are absolutely right. In our case we have a limited number of namespaces that have ingresses, so it works fine. For a large number of namespaces you do not want to do that :)

Today I had to share a wildcard certificate with another namespace and found this issue. If I copy-paste the secret with its annotations and labels, will cert-manager try to renew the copied certificates too? If yes, can I prevent cert-manager from triggering renewals on the copied certs by omitting these annotations and labels?

Yes, @shinebayar-g - I think I just found the same problem.

I installed kubed, as suggested in previous comments, but when a TLS secret was synced to another namespace, the annotations obviously got copied too - and a new certificate was requested.

Not what I was hoping would happen, but it ties in with the original problem (perhaps I did something wrong with kubed).

@jimmythedog, actually I got an answer to my question in the #cert-manager Kubernetes Slack channel. Someone told me it's safe to copy-paste the TLS secret with its labels and annotations. I think that unless you also copy the Certificate resource, the duplicated certificates won't be triggered for renewal.
So I think it's generally safe to use kubed or some other tool to automate syncing TLS secrets across namespaces. But I haven't verified that it works as I described, or gotten an answer from the official developers.

the annotations obviously got copied too - and a new certificate was requested

Are you sure new certificates were requested? How did you check?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale

Does anyone have an opinion on the integration with nginx-ingress where you can pass --default-ssl-certificate=<namespace>/<tls-secret> to the controller and not have to worry about copying the secret to namespaces? Are there any additional risks with this option vs using something like kubed to copy secrets and keep them in sync on updates?

Does anyone have an opinion on the integration with nginx-ingress where you can pass --default-ssl-certificate=<namespace>/<tls-secret> to the controller and not have to worry about copying the secret to namespaces? Are there any additional risks with this option vs using something like kubed to copy secrets and keep them in sync on updates?

My understanding of that feature is that it only affects default routes in nginx (incoming traffic which doesn't match any other rule).

@afirth it is usable for any Ingress, as long as you configure TLS for it (i.e. a hostname) but no secret. I use this, with a wildcard certificate, as a number of Ingress definitions follow the same domain pattern.
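For anyone skimming: the pattern here is to pass --default-ssl-certificate=<namespace>/<tls-secret> to the ingress-nginx controller and then list the TLS host in each Ingress without a secretName, so the controller falls back to the default certificate. A sketch with placeholder names:

apiVersion: extensions/v1beta1          # or networking.k8s.io/v1beta1 on newer clusters
kind: Ingress
metadata:
  name: webapp
  namespace: frontend-develop
spec:
  tls:
  - hosts:
    - webapp.apps.example.com           # no secretName, so the controller's
                                        # --default-ssl-certificate is served
  rules:
  - host: webapp.apps.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp
          servicePort: 80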

Another workaround would be to keep your certs and ingresses in a single namespace, and use a cross-namespace ExternalName Service so the Ingress can reach your service, without having to shuffle certs around or risk hitting rate limits.

For example, say you have a service called webapp in a namespace called frontend-develop; you can make the service also available in the cert-manager namespace, say as webapp-cross-ns:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webapp
  name: webapp-cross-ns
  namespace: cert-manager
spec:
  ports:
  - name: webui
    port: 4000
    targetPort: 4000
  type: ExternalName
  externalName: webapp.frontend-develop.svc.cluster.local

Then, if you have your certs in the cert-manager namespace, you can create an ingress in that namespace and use the webapp-cross-ns service.

This has been working well, though I'm thinking of migrating to copying the cert secret on the fly (since our ns creation is event driven), just because it's cleaner to have the ingresses within the namespaces for deletion.
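To complete the picture, the Ingress then lives in the same namespace as the wildcard secret and points at the ExternalName service; a sketch reusing the names above (the secret name is a placeholder):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp
  namespace: cert-manager                  # same namespace as the wildcard TLS secret
spec:
  tls:
  - hosts:
    - webapp.example.com
    secretName: wildcard-example-com-tls   # the secret created by cert-manager
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-cross-ns     # the ExternalName service defined above
          servicePort: 4000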

@afirth it is usable for any Ingress, as long as you configure TLS for it (i.e. a hostname) but no secret. I use this, with a wildcard certificate, as a number of Ingress definitions follow the same domain pattern.

this works great, thanks again.

e.g. with Helm values.yaml:

controller:
  extraArgs:
    default-ssl-certificate: "cert-manager/<name>-wildcard-tls"

I believe the issue is that cert-manager lets you define the name of the generated secret, but we also need to be able to define labels and annotations on it, to enable smooth operation with external tools like kubed or similar things. All these issues would be magically solved in a very Kubernetes-ish way if metadata for the secret could be defined along with secretName in the Certificate definition. It doesn't look like a lot of work. What do you think about it, community?

P.S. Hard-coding namespaces as described in https://github.com/jetstack/cert-manager/issues/494#issuecomment-405862281 doesn't solve the issue when namespaces are created and deleted dynamically, e.g. new namespace-scoped installations which share several domain names are created on demand and deleted after some time. It could be refactored to select namespaces by labels rather than by name, but a newly created namespace won't get its own copy of the certificate in either case.
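For what it's worth, cert-manager eventually added a field along exactly these lines in releases newer than the ones discussed here: spec.secretTemplate, which copies labels and annotations onto the generated Secret. A rough sketch of that shape (check the docs for the version you run; the annotation value is just an example for kubed-style syncing):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: cert-manager
spec:
  secretName: wildcard-example-com-tls
  secretTemplate:                          # metadata copied onto the generated Secret
    annotations:
      kubed.appscode.com/sync: "cert-sync=enabled"
    labels:
      sync: "yes"
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  dnsNames:
  - '*.example.com'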

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale

Will be adding an entry in the new FAQ documentation about using kubed to sync secrets.
Consolidating issues

/close

@JoshVanL: Closing this issue.

In response to this:

Will be adding an entry in the new FAQ documentation about using kubed to sync secrets.
Consolidating issues

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

As pointed out by @estambakio-sc, if you really want to put this issue to bed, please let users define labels and annotations for the generated secrets, so that tools like kubed can work consistently in a 100% declarative, Kubernetes-native way.

Agreed with @tlvenn. This is especially problematic with a GitOps approach; you have to manually patch annotations.

I feel I should be able to specify to which namespaces the wildcard certificate should be cloned, and cert-manager should do it for me, instead of my employing other tools and complicating the setup...

Maybe reopen?

By the way, here's my newest version of the synchronizing script. It copies all certificates from kube-system to the given namespace on install/upgrade and then every week on Monday:

{{- $ := . -}}
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ $.Values.namespace }}
  name: {{ $.Values.namespace }}-cert-copy
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: {{ $.Values.namespace }}
      containers:
        - name: hyperkube
          command: ["/bin/bash"]
          image: "k8s.gcr.io/hyperkube:v1.14.9"
          args: ["-c", "for name in $(kubectl get secrets -n kube-system --field-selector type=kubernetes.io/tls -o custom-columns=:metadata.name --no-headers); do kubectl get secret --namespace=kube-system $name --export -o yaml | grep -v namespace | kubectl apply --namespace={{ $.Values.namespace }} -f -; done"]
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: {{ $.Values.namespace }}
  name: {{ $.Values.namespace }}-cert-update
spec:
  schedule: "0 12 * * 1"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: {{ $.Values.namespace }}
          containers:
            - name: hyperkube
              command: ["/bin/bash"]
              image: "k8s.gcr.io/hyperkube:v1.14.9"
              args: ["-c", "for name in $(kubectl get secrets -n kube-system --field-selector type=kubernetes.io/tls -o custom-columns=:metadata.name --no-headers); do kubectl get secret --namespace=kube-system $name --export -o yaml | grep -v namespace | kubectl apply --namespace={{ $.Values.namespace }} -f -; done"]

I'm also finding a need for a solution to managing certificates across namespaces. I run a development cluster with a wildcard certificate where environments can launch under different subdomains many times a day. Issuing individual certificates for each of them is unproductive, and yet it would be really bad practice to run them all in the same namespace.

A definitive solution for sharing the wildcard cert is needed!

Hi @martaver, this is laid out in the documentation.

https://cert-manager.io/docs/faq/kubed/

Hi @JoshVanL, okay, I missed that in the docs! Thanks for that... :)

After a few hours of fiddling, though... this isn't exactly a fun solution. I can install kubed via the Helm chart easily enough, but then it regenerates its certificates every time I run a plan with Terraform/Pulumi, so I never have a stable state. There's no way to disable the API server either and remove the need for the certificates. The only way to get around this now is to pre-generate certificates and supply them to kubed.

So now I'm generating certificates, so that I can run a service whose sole job is to copy secrets for certificates from one namespace to another.

Rabbit hole anyone?

Is it really that hard to get cert-manager to use a cert from another namespace?

FWIW @martaver, having multiple applications using the same keys and certificates is an anti-pattern, and not something we would actively support.

What Issuer are you using? What is the blocker for each of these applications to have a separate key/certificate as per their subdomain?

What is the blocker for each of these applications to have a separate key/certificate as per their subdomain?

wildcard certificate.

@JoshVanL I totally get that... Our prod environment doesn't take this approach.

I'm using a wildcard certificate in our development and pre-prod environments, where we regularly spin up multiple stacks side-by-side under different subdomains for testing or demonstration purposes. Issuing certificates for all these services would be a waste of our quota and it's just slow.

In case you missed it, both Traefik and ingress-nginx support a default SSL certificate, which you can populate into their namespace with cert-manager and then front anything with. There's a comment further up with more info; it saved my bacon last year. But I agree, it would be great if this was in tree. Hide it behind --unsafe-multi-namespace-secrets, or only allow it for Certificate resources created in some flagged namespace, if required for everyone to sleep at night.

Actually that might be exactly what we need. Thanks very much @afirth!!

Shout out to @afirth and @davidkarlsen for the --default-ssl-certificate approach. This is much more convenient for environments where wildcard certificates are appropriate! Thanks very much!

To @JoshVanL I'd suggest that this option should be mentioned in the documentation!

Shout out to @afirth and @davidkarlsen for the --default-ssl-certificate approach. This is much more convenient for environments where wildcard certificates are appropriate! Thanks very much!

To @JoshVanL I'd suggest that this option should be mentioned in the documentation!

PRs are welcome! 🙂

Doc PR: cert-manager/website#313

Have any of you actually made kubed work properly?
In my last attempt it copied the initial secret, which does not yet include the certificate data from Let's Encrypt, and when cert-manager does its thing and writes the new secret, kubed does nothing. Furthermore it also copies the secret to the default namespace, even though I am using a special label as the selector for specific namespaces... Do you have better experiences?

@sheerun which permissions do you need in order to run the script? I assume it does not work out of the box, considering that you are copying a resource from one namespace to another?

Here's an alternative command to @sheerun's, if you want to sync all secrets to the namespaces that match a label, in this example app=sync. If we added a label to the secrets too, we could of course sync only those secrets - basically what kubed is supposed to do, but running in a CronJob instead:

for name in $(kubectl get secrets -n cert-manager --field-selector type=kubernetes.io/tls -o custom-columns=:metadata.name --no-headers); do
  for namespace in $(kubectl get namespaces -l app=sync -o custom-columns=:metadata.name --no-headers); do
    kubectl get secret --namespace=cert-manager $name --export -o yaml | grep -v namespace | kubectl apply --namespace=$namespace -f -
  done
done
