Cert-manager: GitOps support

Created on 11 Oct 2019 · 12 comments · Source: jetstack/cert-manager

It would be very nice if cert-manager could be installed via fluxcd. However, because
cert-manager appears to require a special flag (to turn off validation), this appears not to
be possible. Is there a way to install cert-manager without that flag?

Labels: area/deploy, kind/feature, priority/important-longterm

All 12 comments

kubernetes/kubernetes#83774 has been opened upstream to see if kubectl can be adapted not to hard-fail on this. In the meantime, I'd advise opening an issue in the flux repo and linking the flux developers to this issue (and the upstream Kubernetes one!) too.

If you don't mind a bit of hacking at resources, you should be able to install them still if you remove the following field:
https://github.com/jetstack/cert-manager/blob/4ec682db20d2f030c5bbaac67c4f2f5301a7b081/deploy/manifests/00-crds.yaml#L373

and all other occurrences of this field in our CRDs. I think it features 2 or 3 times, and it is what is causing your error 😄
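For context, the field being removed looks roughly like this in the CRD schema (a trimmed sketch; the exact nesting is in 00-crds.yaml):

```
# Trimmed sketch of the webhook solver schema in 00-crds.yaml:
# the config block accepts arbitrary keys, which is what this marker declares.
webhook:
  properties:
    config:
      # Deleting this line is the workaround described above:
      x-kubernetes-preserve-unknown-fields: true
```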

I tried what you suggested. It did not work. To be honest, I did not spend much time after that initial attempt.

There is an outstanding issue with flux on providing additional configuration options. https://github.com/fluxcd/flux/issues/2113

I appreciate your tool and the work that you put into it. However, I must say that the installation and removal requirements make it difficult to use, especially in a GitOps environment. I would appreciate any work you can do to enable easy installation and upgrade.

@munnerz have you considered not including the validation or entire CRD in the manifests directory and have the controller create or update the CRDs after startup?

I'm not entirely sure what x-kubernetes-preserve-unknown-fields: true does, but if simply removing it is a viable workaround why is it even in the manifest?

If anyone is still on a Kubernetes version < 1.15 and is using flux with kustomize, here is a strategic merge patch that worked for me:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificaterequests.cert-manager.io
spec:
  preserveUnknownFields: null
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.cert-manager.io
spec:
  preserveUnknownFields: null
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: challenges.acme.cert-manager.io
spec:
  preserveUnknownFields: null
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            solver:
              properties:
                dns01:
                  properties:
                    webhook:
                      properties:
                        config:
                          x-kubernetes-preserve-unknown-fields: null
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.cert-manager.io
spec:
  preserveUnknownFields: null
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            acme:
              properties:
                solvers:
                  items:
                    properties:
                      dns01:
                        properties:
                          webhook:
                            properties:
                              config:
                                x-kubernetes-preserve-unknown-fields: null
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: issuers.cert-manager.io
spec:
  preserveUnknownFields: null
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            acme:
              properties:
                solvers:
                  items:
                    properties:
                      dns01:
                        properties:
                          webhook:
                            properties:
                              config:
                                x-kubernetes-preserve-unknown-fields: null
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: orders.acme.cert-manager.io
spec:
  preserveUnknownFields: null
```
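If you go this route, the patch above can be wired in with a kustomization next to the upstream manifest; a minimal sketch (the file names `00-crds.yaml` and `remove-validation-patch.yaml` are assumptions for illustration):

```
# kustomization.yaml (sketch): apply the strategic merge patch above
# on top of the upstream cert-manager CRD manifest checked into the repo.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 00-crds.yaml                    # the upstream cert-manager CRD manifest
patchesStrategicMerge:
  - remove-validation-patch.yaml    # the patch shown above
```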

Would it perhaps be possible to provide multiple CRD definition files for different (still supported) versions of kubernetes?

There are more projects that provide a 1.14+ and a 1.14- yaml file for user-friendliness, and it saves you from many support questions in the near future :)

We recently also merged in some changes to improve support for OpenShift 3.11, which is based on Kubernetes 1.11: https://github.com/jetstack/cert-manager/pull/2609

It may be possible for us to rework this to be a set of generic 'compatibility mode' manifests instead of OpenShift specific, as I think the actual differences between the two are fairly minimal if any.

Is there any progress on this issue?

To sound dramatic:

  • My (lab) cluster's accessibility is failing hard because cert-manager has not been running for more than 3 months, and I'm not able to install or upgrade to a more recent version while we stick to using GitOps.
  • On my production clusters manual installation is not possible due to security restrictions, so in the coming months I'm going to have a big issue if our production cert-manager fails and needs upgrading...

I'd love to have a non-manual, non-interactive, non-hacky way to reinstall cert-manager on my clusters... I've tried the suggestion in this thread to change some fields in the CRD file, but all my attempts resulted in kubectl failing to apply with an enormous stack trace...

So I tried the manual installation, to see if it would save our behind in case something goes very wrong in production; that gives some errors as well:

kubectl apply manually without --validate=false gives errors:

error: error validating "00-crds.yaml": error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "preserveUnknownFields" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionSpec; if you choose to ignore these errors, turn validation off with --validate=false

kubectl apply manually with --validate=false gives errors too:

Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "certificaterequests.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "certificates.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "challenges.acme.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "clusterissuers.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "issuers.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "00-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "orders.acme.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled

I'm a bit stuck at the moment and my (lab) cluster remains down (and I try not to think about my production cluster)... I'd love to have a solution that leaves my git repository as the single source of truth for our configuration, as it saves us hours during a failover and is required for our security audits...

Is there anything I can do to help speed the search for a solution for this validation issue?
We'd love to keep using cert-manager as it served us well for the last years...

We've updated our install manifests to be split into two variants now, 'legacy' (for Kubernetes 1.14 and below), and 'normal' for 1.15+.

There's more info on this in the install guide: https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests

This should remove the need for --validate=false unless you are running on Kubernetes v1.11.

EDIT: s/require/remove
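With the split manifests, a GitOps setup can reference the published file directly instead of patching it; a sketch (the release URL is illustrative — take the real one for your Kubernetes version from the install guide linked above):

```
# kustomization.yaml (sketch): pull the published manifest as a remote resource.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Use cert-manager-legacy.yaml for Kubernetes 1.14 and below,
  # cert-manager.yaml for 1.15+; vX.Y.Z is a placeholder.
  - https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml
```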

I _think_ this issue can be closed since these changes were merged (in v0.14). If anyone can confirm either way that the newer legacy manifests are working in your own environment, that'd be great 😄

Almost!

I can confirm I can use helm now to install cert-manager on k8s. That is excellent! Thanks!

However, the upgrade path is still manual. That is a GitOps blocker.

We've also extended the Helm chart to include an installCRDs option in v0.15.0-alpha.0 onwards. This should finally resolve the GitOps issues.
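With the flux Helm operator, that option can then be set declaratively; a minimal sketch, assuming the flux v1 `HelmRelease` API and the jetstack chart repository (the namespace and version pin are illustrative):

```
# HelmRelease (sketch): install cert-manager including its CRDs via GitOps.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  releaseName: cert-manager
  chart:
    repository: https://charts.jetstack.io
    name: cert-manager
    version: v0.15.0-alpha.0
  values:
    installCRDs: true   # the new chart option mentioned above
```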

The only remaining issue you may run into now, is if your GitOps controller resets the caBundle field on Mutating, Validating or Converting (CRD) webhooks back to "" - our own cainjector controller is responsible for setting and updating this field, and so ideally GitOps controllers would ignore this.

Hopefully Kubernetes server side apply and 'managed fields' will help resolve this if it _is_ an issue right now with GitOps controllers (I imagine there will be varying levels of support right now between different solutions).

For now though, I think we can close this issue with the addition of installCRDs 😄
