Cert-manager: Helm chart fails to install with RBAC error on GKE

Created on 17 Jan 2018 · 19 comments · Source: jetstack/cert-manager

/kind bug

What happened:

  1. helm init
  2. git clone https://github.com/jetstack/cert-manager
  3. cd cert-manager
  4. helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager
  5. See error:

Error: release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:default 6ee23ef4-fb0f-11e7-a397-42010a80014e [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

What you expected to happen: The install to succeed.

How to reproduce it (as minimally and precisely as possible):

This is a GKE cluster (version: 1.7.11-gke.1):

gcloud container clusters create certmgrtest

Environment:

  • Kubernetes version (use kubectl version): v1.7.11-gke.1
  • Cloud provider or hardware configuration: GKE
  • Install tools: Helm v2.7.2

Most helpful comment

It looks like you may have deployed tiller without RBAC support - the full docs on this are here: https://github.com/kubernetes/helm/blob/master/docs/rbac.md

The tl;dr - you need to grant the tiller service account the cluster-admin role. I usually do this with:

$ kubectl create serviceaccount -n kube-system tiller
$ kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller

This started becoming an issue out of the box with GKE when the default service account in kube-system dropped the cluster-admin role by default (which I guess was 1.7).

EDIT: must be 1.7 as you are on 1.7 😄

All 19 comments

I got it working by adding --set rbac.create=false but I think RBAC-enabled mode should work too.
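
For reference, the full command with that flag (assuming the same chart path as in the original report) would be:

$ helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager --set rbac.create=false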

It looks like you may have deployed tiller without RBAC support - the full docs on this are here: https://github.com/kubernetes/helm/blob/master/docs/rbac.md

The tl;dr - you need to grant the tiller service account the cluster-admin role. I usually do this with:

$ kubectl create serviceaccount -n kube-system tiller
$ kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller

This started becoming an issue out of the box with GKE when the default service account in kube-system dropped the cluster-admin role by default (which I guess was 1.7).

EDIT: must be 1.7 as you are on 1.7 😄
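
One way to verify this is the problem (not from the thread, just a standard kubectl check) is to ask whether the default service account in kube-system can still do everything:

$ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default

If that prints "no", then a tiller running under that account won't be allowed to create the cert-manager ClusterRole, which matches the error above.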

Whoa this is too complicated. I expected something like helm init --with-proper-rbac. But I guess what's above will do.

I think it happened around 1.7.

It might be worth documenting this.

I agree - it's really annoying and catches me out every time (usually in demos... 🙄)! I think there must be an upstream issue for this somewhere? Couldn't find one myself though.

To be fair, the doc you linked does actually have an "Example: Service account with cluster-admin role" section that details what I've written above (albeit more verbosely).

I know what you are getting at - instructions (both ours referencing theirs, and theirs themselves) should be super-super clear and simple. You should have been able to debug the error message you got by reading a NOTES section or something that links off to the RBAC doc.

I think it's unrealistic to think people are going to read https://github.com/kubernetes/helm/blob/master/docs/rbac.md before doing "helm init".

In cert-manager's Helm doc there's no mention of "helm init" at all, so when "helm install" fails, the Helm client recommends running it. Most people will just type "helm init", only to find out there's more to it.

Arguably Helm could do better here by initializing itself with good defaults out of the box - I don't know why it doesn't. But I think we can do better by copy-pasting those 3 lines into our Helm documentation here.

True - but I don't want to repeat the Helm install docs. Would it be sufficient to add a 'Step 0: install Helm' to our install docs? Users that already have Helm configured could then skip over it.

I think that's reasonable. I'm assuming if your Helm doesn't have the right RBAC config, it'll fail installing most things. But if I had those 3 lines, I could just copy paste those instead of spending many minutes here. :)

On AWS with kops, I still get this error when installing cert-manager (even after doing the "Example: Service account with cluster-admin role" section):

release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:tiller a8f371ca-116c-11e8-b56e-0ad3089af66a [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Just had the exact same issue with Azure Container Service. I also tried helm reset / init to no avail.

I actually found the issue with ACS is this one: the cluster-admin role doesn't get created by default.
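
A quick way to check for that (it also matches the ruleResolutionErrors in the kops error above) is to look the role up directly:

$ kubectl get clusterrole cluster-admin

If that comes back NotFound, any clusterrolebinding pointing at cluster-admin resolves to no rules at all.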

@jpds from the looks of your error message (namely: clusterroles.rbac.authorization.k8s.io "cluster-admin" not found), it appears you don't have RBAC enabled in your cluster, hence the error.

Try setting --set rbac.create=false
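
To confirm whether RBAC is enabled on the cluster at all, something like this should do:

$ kubectl api-versions | grep rbac.authorization.k8s.io

If nothing is printed, RBAC isn't being served, which matches the suggestion above to install with RBAC disabled.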

Had this issue on GCE 1.9.2-gke.1

This should be added to the installation README, and perhaps the issue reopened.

I'm not quite sure what should be added to the README here, but would be more than happy to merge PRs that people think improve clarity!

running into the same issue:

helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager --set rbac.create=false

result:
Error: release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:default 842f836f-10d0-11e8-9452-0290d451c828 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

what am i doing wrong here?

Ahhh, I checked the source code of the templates - it's {{- if .Values.rbac.enabled -}}, i.e. rbac.enabled instead of rbac.create.

When using rbac.enabled=false, the deployment now works for me.
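
So for the chart in contrib/charts, the non-RBAC install presumably looks like this (same command as above, just with the corrected value name):

$ helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager --set rbac.enabled=false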

Reinstall the Helm tiller under an RBAC service account.
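
Roughly, that would be something like the following (a sketch combining helm reset with the commands from the earlier comment):

$ helm reset --force
$ kubectl create serviceaccount -n kube-system tiller
$ kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller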

For anybody running into this, I followed every other example and nothing worked until I read this

TL;DR - I needed to create an extra role binding for kube-system:default:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

Had the same issue with a tillerless helm install on a new cluster in GKE. I was following the instructions for a Helm install but skipped the clusterrolebinding instructions (since tillerless helm runs locally I thought it didn't apply).

Turns out my own user, despite being IAM "owner", doesn't have cluster-admin privileges by default on GKE for a new cluster. This issue is covered in the docs under the normal non-helm install (https://docs.cert-manager.io/en/latest/getting-started/install.html), but since I was doing a "helm"(ish) install I had skipped that section.

In short the following did work:

# Install the CustomResourceDefinition resources separately
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
$ kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Update your local Helm chart repository cache
$ helm repo update

# EXTRA STEP WHEN USING GKE: add the cluster-admin role to the current user
$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

# Install the cert-manager Helm chart using tillerless helm
$ helm tiller run cert-manager -- helm install --name cert-manager --namespace cert-manager stable/cert-manager