Cert-manager: Proposal: Add LE ClusterIssuers for prod/staging to Helm chart

Created on 17 Jan 2018 · 18 comments · Source: jetstack/cert-manager

/kind feature

Assuming most people will use cert-manager with Let's Encrypt ACME endpoints, I think the helm chart should make it easy to have these ClusterIssuers.

I'm thinking --set issuers.letsencrypt=true --set [email protected] would install ClusterIssuers letsencrypt-prod and letsencrypt-staging automatically so I can get on with my business.
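
For illustration, here's a rough sketch of the two ClusterIssuers such a flag could template out, based on the v1alpha1 ACME format from the cluster-issuers guide; the values keys and the exact ACME directory URLs below are assumptions, not what the chart actually does:

{{- if .Values.issuers.letsencrypt }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint (assumed ACME v1 URL)
    server: https://acme-staging.api.letsencrypt.org/directory
    email: {{ .Values.issuers.letsencryptEmail }}
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint (assumed ACME v1 URL)
    server: https://acme-v01.api.letsencrypt.org/directory
    email: {{ .Values.issuers.letsencryptEmail }}
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
{{- end }}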

I think right now the Quickstart experience is not great.

I actually sat down and created a ClusterIssuer myself from the example by copying the YAML from https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/cluster-issuers.md. I spent 40 minutes (mostly because GCLB changes take time to take effect) only to realize I forgot to change the LE endpoint from staging to prod. 😩

I think the Quickstart can actually be quick by adding this.

area/acme kind/feature lifecycle/rotten


All 18 comments

I've played with this recently. I think you can't really do this in one go in a Helm chart. It will fail because Helm tries to create the CRD-backed resources before the controller is ready. I think there are two options:

  • Create one chart that can either install the controller or one or multiple CRDs
  • Create separate charts for controller and CRDs

Either way you'd have to do multiple Helm installs. But in the end, I agree that the installation experience must be improved.
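
As a sketch of what that two-install flow could look like (the chart paths, names, and email value here are hypothetical):

# 1. Install the controller and its CRD handling first
helm install --name cert-manager --namespace kube-system ./contrib/charts/cert-manager

# 2. Once the controller is up, install a second (hypothetical) chart that only
#    contains the ClusterIssuer templates
helm install --name letsencrypt-issuers ./charts/cert-manager-issuers \
  --set email=me@example.com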

@munnerz I was gonna discuss this topic with you anyways now that my PR (#229) got merged. I'm willing to help with this if you like.

A separate Helm chart could be a good solution.

I'm trying to put together a "cert-manager for GKE" guide repo. In one of the steps I'll probably do a kubectl create -f http:// with the ClusterIssuers for prod/staging, so it's not that big of a deal for me. But I think most users are going to need this.

Users should be specifying their own email addresses on ClusterIssuer/Issuer resources @ahmetb - so you may not be able to make it as simple as kubectl create -f http://...

Crazy thought: Helm templating as an HTTP service would be awesome.

curl https://template.service/cert-manager-default-issuers/[email protected]

and/or

$ curl -XPOST https://template.service/cert-manager-default-issuers/generate -d '{
  "email": "[email protected]"
}'
{
  "apiVersion": "certmanager.k8s.io/v1alpha1",
  "kind": "ClusterIssuer",
  ...
}

It could just return the YAML/JSON that should be applied (to make it easy to use directly with kubectl)

Yeah I'll probably do a curl | sed in my tutorial.
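
Something along these lines, where the URL is just a placeholder for wherever the example ClusterIssuers end up, and the sed expression assumes the example ships with a dummy email:

# substitute your own email into the example manifest before applying it
curl -sL https://example.com/letsencrypt-cluster-issuers.yaml \
  | sed 's/user@example.com/me@my-real-domain.com/' \
  | kubectl apply -f -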

That said, what's the significance of the email in the case of ACME/LE (other than being able to get expiry notifications)?

The Let's Encrypt ToS, basically, I think.

I've definitely gotten expiration emails from staging or prod before, so there's that. But good to know the ToS is another reason.

I second this as I've had similar questions trying to install cert-manager without a clear understanding of when or how Issuers should be created during the installation cycle.

Is the proposed workaround just to issue an additional helm create ... for the Issuer manifest after the cert-manager chart is installed?

@ahmetb Looking forward to your cert-manager GKE tutorial if you get around to it!

@unguiculus as I understand it, the problem boils down to Helm not being able to validate a chart that defines and uses CRDs together (discussion). They discuss that this could be implemented in helm 2, but until then we are out of luck here: either two charts, or just add a YAML file with the LE ClusterIssuers to the chart directory as an example of how users can go about installing their own.

Just submitted #394 would be happy if you could test!

Copying my comment from https://github.com/jetstack/cert-manager/pull/394#issuecomment-399929552


Hi, since this PR was created, Helm added support for CRDs with a hook. I created a feature request #651 to start using the hook to make default issuers easier to set up.
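
For context, the hook being referred to is the crd-install hook annotation; a minimal sketch of how a CRD template can opt into it (the real CRD in the chart has more fields than this):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
  annotations:
    # tells Helm to install this CRD before the rest of the release
    "helm.sh/hook": crd-install
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: ClusterIssuer
    plural: clusterissuers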

I am making use of this in the validation PR (#478).

As a result of the validation webhook, I think we will still be unable to create default Issuer/ClusterIssuer types as part of our Helm chart.

This is because, after validation support is merged, we will need to wait up to 2 minutes for the PKI/serving certificate for the webhook to be provisioned. During this time, Helm/Tiller will be unable to create any cert-manager resources (as the validating webhook will not be running, consequently causing the validation of those resources to fail).
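
Outside of the chart itself, the manual workaround would be to wait for the webhook to become available before creating any Issuers; a sketch, assuming a kubectl recent enough to have the wait subcommand and that the webhook Deployment is named cert-manager-webhook in the cert-manager namespace:

# wait out the window described above before touching cert-manager resources
kubectl -n cert-manager wait --for=condition=Available \
  deployment/cert-manager-webhook --timeout=180s

# only then create the default ClusterIssuers
kubectl apply -f letsencrypt-cluster-issuers.yaml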

I can't really see a clean way around this, aside from temporarily disabling validation, which opens up a whole can of worms (and makes validation itself less useful, as a user may still submit a broken resource which could potentially break everything). This same webhook model will in future be used for policy control too, which will make it even more critical that we don't allow any way to bypass it in a non-admin-controlled way.

For the time being, I think we are best to not accept a change such as this into the main Helm chart.

In order to make it easier for users to get started with cert-manager, I'd love to see a cert-manager-letsencrypt-pack Helm chart (or something similar) which would define the Issuer/ClusterIssuer resources. Users will then need to helm install the second chart after cert-manager is installed.

Perhaps in future we will have more flexibility in both Helm (with v3 being worked on) and also validation, but in the meantime I just don't think there is a viable solution that does not impede other important project developments (such as validation and policy control).

I'm going to close this PR, as I don't think we can accept it in the current form. I'd really like to see the relevant parts transplanted out of this PR though and into a standalone chart. We can then get that synced across to the main Helm chart repo too 😄

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale

This is a good example of a good install doc... the helm thing is an afterthought... doing the chart straight through is much simpler. https://kubernetes.github.io/ingress-nginx/deploy/

Could we add a job that gets kicked off 2 minutes after install and use that to create the Issuers and/or ClusterIssuers?
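
A rough sketch of that idea as a Helm post-install hook Job; the image, ServiceAccount, and ConfigMap names here are hypothetical, and the ServiceAccount would still need RBAC permission to create ClusterIssuers:

apiVersion: batch/v1
kind: Job
metadata:
  name: create-default-issuers
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: cert-manager        # must be allowed to create ClusterIssuers
      restartPolicy: OnFailure
      containers:
      - name: create-issuers
        image: lachlanevenson/k8s-kubectl     # any image with kubectl would do
        command:
        - sh
        - -c
        # crude: give the webhook a couple of minutes to come up, then apply the issuers
        - sleep 120 && kubectl apply -f /issuers/
        volumeMounts:
        - name: issuers
          mountPath: /issuers
      volumes:
      - name: issuers
        configMap:
          name: default-issuers               # ConfigMap holding the ClusterIssuer YAML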

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

@retest-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
