Cert-manager: ClusterIssuer not created with k3s

Created on 3 Apr 2019 · 8 comments · Source: jetstack/cert-manager

Describe the bug:
When creating a ClusterIssuer for an ACME server, the validation webhook reports that there is no resource of this type.
I'm using k3s for the Kubernetes cluster (super easy to install).

Expected behaviour:

The ClusterIssuer should be created.

Steps to reproduce the bug:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

kubectl apply -f issuer.yaml

Response:

Error from server (InternalError): error when creating "issuer.yaml": Internal error occurred: failed calling webhook "clusterissuers.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/clusterissuers\": the server could not find the requested resource") has prevented the request from succeeding

kubectl logs:

    GOROOT/src/net/http/server.go:1964 +0x44
    github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.(timeoutHandler).ServeHTTP.func1(0xc0007e31a0, 0xc0004a9c80, 0x196a2a0, 0xc00000ca98, 0xc00079ef00)
        vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
    created by github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.(timeoutHandler).ServeHTTP
        vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

    logging error output: "Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1?timeout=32s\": the server could not find the requested resource\n"
    [kubectl/v1.13.5 (linux/amd64) kubernetes/256ea73 10.42.0.1:54570]
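
The failing path in that log, /apis/admission.certmanager.k8s.io/v1beta1, is served by an aggregated APIService that the cert-manager webhook registers with the Kubernetes API server. A quick way to check whether that registration is actually available; the APIService name below follows the usual version.group convention and is an assumption for the v0.7 chart, as is the pod label selector:

# Check that the webhook's aggregated API is registered and reporting Available
# (APIService name is an assumption: <version>.<group> taken from the error path).
kubectl get apiservice v1beta1.admission.certmanager.k8s.io -o wide

# Inspect the webhook pod itself for startup errors (label selector is an assumption).
kubectl -n cert-manager get pods -l app=webhook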

Anything else we need to know?:

Changing ClusterIssuer to Issuer works.
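
For reference, the namespaced workaround mentioned above is the same manifest with only the kind changed; this is just a sketch of that workaround, not a fix for the ClusterIssuer problem:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer            # namespaced variant; ClusterIssuer still fails the webhook call
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}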

Environment details:

  • Kubernetes version (e.g. v1.10.2): 1.13.5
  • Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): None
  • cert-manager version (e.g. v0.4.0): v0.7.0
  • Install method (e.g. helm or static manifests):

Using lightweight https://k3s.io, add a Helm installation by placing a manifest at /var/lib/rancher/k3s/server/manifests/certmanager.yaml with the following contents:

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  version: v0.7.0
  targetNamespace: cert-manager
  repo: https://charts.jetstack.io
  set:
    ingressShim.defaultIssuerName: letsencrypt-prod
    ingressShim.defaultIssuerKind: ClusterIssuer
    webhook.enabled: "false"

Restart: systemctl restart k3s
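
If webhook.enabled: "false" had taken effect, no webhook registration should be left in the cluster. A hedged sanity check (the grep patterns are assumptions based on the v0.7 chart's resource names):

# These should return nothing cert-manager related when the webhook is disabled.
kubectl get validatingwebhookconfigurations | grep certmanager
kubectl get apiservices | grep admission.certmanager.k8s.io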

/kind bug

Labels: area/deploy, kind/bug, lifecycle/rotten

All 8 comments

Update: the Issuer works because I removed the webhook validation from the namespace; the ClusterIssuer is still being validated.
I did try to disable the webhook in the Helm installation, but that did not actually disable it :-). Will check that later.
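
For anyone trying the same per-namespace workaround: in cert-manager v0.7 this was done with a namespace label, if I recall the docs correctly (treat the label key as an assumption and verify it against your version's documentation). Note it only helps namespaced resources such as Issuer, which matches the behaviour above:

# Tell cert-manager's webhook to skip validation for resources in this namespace
# (label key assumed from the v0.7 documentation).
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true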

We don't run e2e tests against k3s, and I'm not too sure what features are removed from it compared to normal k8s, so this is particularly hard to debug.

Because k3s is ultimately a fork of Kubernetes, I'm not really able to provide much extra help here - if anyone else has experience with it, please do comment 😄

Looks like webhook validation on k3s is not working, and neither is the Helm option to disable it. There is a discussion in https://github.com/rancher/k3s/issues/117.

I'm unable to reproduce this when installing cert-manager the "normal" way with Helm, i.e. as you would in any other k8s cluster.

Maybe I'm missing something, but shouldn't the custom resource definitions be applied separately before the chart is installed? This is what the documentation states. I had no problems following those steps with k3s. Webhook working and ClusterIssuer created.
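
For comparison, the documented install at the time went roughly as follows (the CRD manifest URL and the namespace label are quoted from the v0.7 docs as best I recall, so treat them as assumptions and double-check against the official documentation):

# 1. Apply the CustomResourceDefinitions separately, before installing the chart.
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml

# 2. Create the namespace and disable resource validation on it.
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# 3. Install the chart with Helm (Helm 2 syntax, current at the time).
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install --name cert-manager --namespace cert-manager --version v0.7.0 jetstack/cert-manager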

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

@retest-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
