What I ran:
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml --validate=false -n kube-system
helm dep update charts/cert-manager/cert-manager/
helm install cert-manager charts/cert-manager/cert-manager --set ingressShim.extraArgs='{--default-issuer-name=ca-issuer,--default-issuer-kind=ClusterIssuer}' --set ingressShim.enabled=false --namespace kube-system
kubectl apply -f charts/cert-manager/pre-reqs/issuer.yaml -n kube-system
Describe the bug:
helm install cert-manager charts/cert-manager/cert-manager --set ingressShim.extraArgs='{--default-issuer-name=ca-issuer,--default-issuer-kind=ClusterIssuer}' --set ingressShim.enabled=false --namespace kube-system
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: webhook.cert-manager.io/v1beta1: the server is currently unable to handle the request
Expected behaviour:
I would expect the resources to install correctly from the chart.
Steps to reproduce the bug:
See above.
Anything else we need to know?:
Environment details:
/kind bug
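A quick way to narrow this error down is to check the APIService it names and the webhook pod behind it (the namespace below assumes the kube-system install from the commands above):
```
# Check whether the webhook APIService the error refers to is Available
kubectl get apiservice v1beta1.webhook.cert-manager.io
kubectl describe apiservice v1beta1.webhook.cert-manager.io
# Check the backing webhook pod in the namespace the chart was installed into
kubectl get pods -n kube-system | grep cert-manager-webhook
```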
Also happens with Helm 2.15.1 & Kubernetes v1.14.1 (minikube)
This issue seems to be fixed by downgrading to Helm v2.14.0
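If you go with the downgrade workaround, a rough sketch (the URL is the standard Helm release download location, linux/amd64 assumed; --force-upgrade is what lets Tiller move back to an older version):
```
# Replace the Helm 2 client with v2.14.0 (linux/amd64 assumed; pick your platform)
curl -fsSL https://get.helm.sh/helm-v2.14.0-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/helm
# Roll Tiller back to match the client (Helm 2 only)
helm init --upgrade --force-upgrade
helm version
```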
I "fixed" this by upgrading the CRDs
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
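After re-applying, it is worth confirming the new-group CRDs are actually present before retrying the Helm install:
```
# The 0.11 CRDs live under the cert-manager.io API group
kubectl get crd | grep cert-manager.io
```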
I'm seeing this on a brand-new cluster setup (no previously installed CRDs):
helm version:
Client: &version.Version{SemVer:"v2.15.2", GitCommit:"8dce272473e5f2a7bf58ce79bb5c3691db54c96b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.2", GitCommit:"8dce272473e5f2a7bf58ce79bb5c3691db54c96b", GitTreeState:"clean"}
+1 on helm v2.15.2 and new install of cert-manager. Workaround to downgrade to helm v2.14.0 is confirmed valid.
I "fixed" this by upgrading the CRDs
- Removed existing CRDs
- Installed new CRDs
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
- Update all issuers with new domain.
Can you please give complete commands for each step? Thanks!
@imkane :
$ kubectl delete -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
$ kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
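Optionally, wait for the re-created CRDs to report Established before re-running the install; the CRD names below are the ones from the 0.11 manifest, so treat this as a sketch:
```
for crd in certificates certificaterequests issuers clusterissuers; do
  kubectl wait --for condition=established --timeout=60s crd/${crd}.cert-manager.io
done
```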
Same issue while installing cert-manager 0.11.0 with helm v3.0.0-rc.3:
$ kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
$ helm install cert-manager jetstack/cert-manager --version 0.11.0 --namespace ingress
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: webhook.cert-manager.io/v1beta1: the server is currently unable to handle the request
I am having the same issue as @pavdmyt with helm v3.0.0-rc.3. Anyone got any ideas?
Same issue while installing cert-manager 0.11.0 with helm v3.0.0-rc.4:
We're seeing this with any helm/tiller version above 2.14.3 & cert-manager 0.10.1
Pretty much all Helm operations started to fail with 2.15.x & 2.16.x (for us the failing API has been admission.certmanager.k8s.io/v1beta1, though), even "non-cert-manager-related" ones.
Also, removing & re-installing cert-manager 0.10.1 with 2.16.x wasn't working on our clusters.
Downgrade to Helm 2.14.3 was the only way to fix things.
@hexa2k9 Is there a related issue at helm bugtracker? Probably this one should be linked there.
Downgrading helm to versions < 2.15.2 is not a very good option, taking into account this security vulnerability: https://helm.sh/blog/2019-10-30-helm-symlink-security-notice/
To be honest I haven't checked the Helm bug tracker. By "non-cert-manager-related" operations I meant that even though the charts are not directly related to cert-manager, they still use ClusterIssuer resources. The error hits us nevertheless and is reproducible across all our environments (all currently on cert-manager 0.10.1).
About the security notice: yes, we are very aware of it; that's the reason we went with 2.16.1 in the first place. However, as this is a client-side issue, the environments are still considered "secure", since we don't allow running Helm tasks from anywhere except automation.
Yesterday I tried installing cert-manager release 0.11 without Helm 3.0 rc4, on Kubernetes 1.14.8. I only ran kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml, then ran kubectl api-resources, and the error "unable to retrieve the complete list of server APIs: webhook.cert-manager.io/v1beta1: the server is currently unable to handle the request" appeared again!
Same problem with Kubernetes 1.15.5, Helm/Tiller 2.16.1, cert-manager release 0.11.0 on AKS.
My problem was that I was applying the full cert-manager.yaml before running the helm chart, not the CRDs-only version. Most of you appear not to have made this mistake, but I'm posting this here because this is the first result I hit when trying to find answers.
"Good" one for helm: https://raw.githubusercontent.com/jetstack/cert-manager/v0.11.0/deploy/manifests/00-crds.yaml
What worked for me:
1) helm delete --purge cert-manager (if there's any form of it there, FAILED or otherwise)
2) kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml
3) kubectl get crd to make sure there aren't any cert-manager.io or certmanager.k8s.io Custom Resource Definitions hanging around
4) kubectl get pods -A to make sure there aren't any cert-manager-* pods (to make sure you didn't accidentally install them in a different namespace)
5) kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.11.0/deploy/manifests/00-crds.yaml --validate=false
6) Create a namespace.yaml with these contents:
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  labels:
    cert-manager.io/disable-validation: "true"
7) kubectl apply -f namespace.yaml
8) helm upgrade cert-manager jetstack/cert-manager --install --namespace cert-manager --version v0.11.0 --wait. This took nearly 3 minutes, because the cert-manager-webhook pod takes a long time to be created.
9) Add your desired ClusterIssuers with kubectl apply. I'm using the acme issuer with Let's Encrypt and chose the http01 solver.
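For step 9, something along these lines should do (ACME with Let's Encrypt and the http01 solver, as mentioned above; the issuer name, email and ingress class are placeholders, and the apiVersion is the cert-manager.io/v1alpha2 group used by 0.11):
```
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
```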
Hi! I am trying to install cert-manager 0.11.0 on k3s and am having this exact issue. I've tried installing as suggested by @amandadebler but no change. k3s' Kubernetes version is v1.16.3-k3s.2 and helm is v2.16.1. Any update on this? Thanks!
Same issue here:
$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:02:12Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
I was able to solve the problem by deleting the resource:
kubectl delete apiservices v1beta1.webhook.cert-manager.io
Afterwards I also performed a full clean-up of the previously installed cert-manager by looping through all API resources and looking for objects with my release name.
RELEASE_NAME="myrelease-"
kubectl api-resources |
  awk '{ print $1 }' |
  while read a
  do
    echo "############ $a"
    kubectl get $a --all-namespaces | grep $RELEASE_NAME
  done
After that everything seems to work fine.
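A tighter variant of the same sweep, using only standard kubectl flags (set RELEASE_NAME to your own release name):
```
RELEASE_NAME="myrelease-"
kubectl api-resources --verbs=list -o name \
  | xargs -n1 kubectl get -A --show-kind --ignore-not-found 2>/dev/null \
  | grep "$RELEASE_NAME"
```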
I also had to remove other global objects before the uninstall was complete, and I could reinstall:
```
kubectl get apiservice
kubectl delete apiservice v1beta1.webhook.cert-manager.io  # find the one that is not Available
kubectl delete psp cert-manager-cainjector
kubectl delete psp cert-manager-webhook
kubectl delete psp cert-manager
kubectl get clusterrole | grep cert-manager | cut -f 1 -d ' ' | xargs kubectl delete clusterrole
kubectl get clusterrolebinding | grep cert-manager | cut -f 1 -d ' ' | xargs kubectl delete clusterrolebinding
kubectl delete role -n kube-system cert-manager-cainjector:leaderelection cert-manager:leaderelection
kubectl delete rolebinding -n kube-system cert-manager-cainjector:leaderelection cert-manager:leaderelection cert-manager-webhook:webhook-authentication-reader
kubectl delete MutatingWebhookConfiguration cert-manager-webhook
kubectl delete ValidatingWebhookConfiguration cert-manager-webhook
```
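To spot which APIService is the broken one before deleting it, list them and look for entries whose AVAILABLE column is not True:
```
# Anything showing False (e.g. ServiceNotFound) is a candidate for deletion
kubectl get apiservice | grep -v True
```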
It's important you properly uninstall cert-manager before attempting to reinstall it if you had it installed before.
You can see uninstallation instructions here: https://cert-manager.io/docs/installation/uninstall/kubernetes/
Notably, deleting the cert-manager namespace is not sufficient, as there are global resources (e.g. ClusterRole, ClusterRoleBinding, CRD, ValidatingWebhook, MutatingWebhook, etc.).
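A rough sweep over those cluster-scoped kinds (a convenience check, not a replacement for the documented uninstall steps):
```
kubectl get crd,clusterrole,clusterrolebinding,validatingwebhookconfiguration,mutatingwebhookconfiguration,apiservice 2>/dev/null \
  | grep -i cert-manager
```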
All of the above cases sound to me like you've previously attempted to install CM and ran into some issue, and then not 'cleaned up' the previous installation properly. I'm going to close this issue as I think it's becoming a catch-all for people to '+1' in, and there's already meaningful suggestions/solutions that have been posted.
Regarding Helm versioning, this should not have any impact here - please just make sure you have not left resources lying around in your cluster and issues should go away! For what it's worth, our own CI is currently using Helm 3 but has gone through many many revisions of Helm 2 before that without issue.