Prometheus-operator: RBAC on GKE - extra step needed

Created on 9 May 2017 · 30 comments · Source: prometheus-operator/prometheus-operator

What did you do?

hack/cluster-monitoring/deploy

What did you expect to see?

No errors

What did you see instead? Under which circumstances?

namespace "monitoring" created
clusterrolebinding "prometheus-operator" configured
serviceaccount "prometheus-operator" created
deployment "prometheus-operator" created
Error from server (Forbidden): error when creating "manifests/prometheus-operator/prometheus-operator-cluster-role.yaml": clusterroles.rbac.authorization.k8s.io "prometheus-operator" is forbidden: attempt to grant extra privileges: [{[create] [extensions] [thirdpartyresources] [] []} {[*] [monitoring.coreos.com] [alertmanagers] [] []} {[*] [monitoring.coreos.com] [prometheuses] [] []} {[*] [monitoring.coreos.com] [servicemonitors] [] []} {[*] [apps] [statefulsets] [] []} {[*] [] [configmaps] [] []} {[*] [] [secrets] [] []} {[list] [] [pods] [] []} {[delete] [] [pods] [] []} {[get] [] [services] [] []} {[create] [] [services] [] []} {[update] [] [services] [] []} {[get] [] [endpoints] [] []} {[create] [] [endpoints] [] []} {[update] [] [endpoints] [] []} {[list] [] [nodes] [] []} {[watch] [] [nodes] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
Waiting for Operator to register third party objects...done!
daemonset "node-exporter" created
service "node-exporter" created
clusterrolebinding "kube-state-metrics" configured
deployment "kube-state-metrics" created
serviceaccount "kube-state-metrics" created
service "kube-state-metrics" created
Error from server (Forbidden): error when creating "manifests/kube-state-metrics/kube-state-metrics-cluster-role.yaml": clusterroles.rbac.authorization.k8s.io "kube-state-metrics" is forbidden: attempt to grant extra privileges: [{[list] [] [nodes] [] []} {[watch] [] [nodes] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []} {[list] [] [resourcequotas] [] []} {[watch] [] [resourcequotas] [] []} {[list] [extensions] [daemonsets] [] []} {[watch] [extensions] [daemonsets] [] []} {[list] [extensions] [deployments] [] []} {[watch] [extensions] [deployments] [] []} {[list] [extensions] [replicasets] [] []} {[watch] [extensions] [replicasets] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
secret "grafana-credentials" created
secret "grafana-credentials" configured
configmap "grafana-dashboards" created
deployment "grafana" created
service "grafana" created
clusterrolebinding "prometheus" configured
configmap "prometheus-k8s-rules" created
serviceaccount "prometheus-k8s" created
servicemonitor "alertmanager" configured
servicemonitor "kube-apiserver" configured
servicemonitor "k8s-apps-http" configured
servicemonitor "kube-state-metrics" configured
servicemonitor "kubelet" configured
servicemonitor "node-exporter" configured
servicemonitor "prometheus" configured
service "prometheus-k8s" created
prometheus "k8s" configured
Error from server (Forbidden): error when creating "manifests/prometheus/prometheus-cluster-role.yaml": clusterroles.rbac.authorization.k8s.io "prometheus" is forbidden: attempt to grant extra privileges: [{[get] [] [nodes] [] []} {[list] [] [nodes] [] []} {[watch] [] [nodes] [] []} {[get] [] [services] [] []} {[list] [] [services] [] []} {[watch] [] [services] [] []} {[get] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []} {[get] [] [configmaps] [] []} {[get] [] [] [] [/metrics]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
secret "alertmanager-main" created
service "alertmanager-main" created
alertmanager "main" configured

Environment

  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:

    GKE

According to Google Container Engine docs:

Because of the way Container Engine checks permissions when you create a Role or ClusterRole, you must first create a RoleBinding that grants you all of the permissions included in the role you want to create.

An example workaround is to create a RoleBinding that gives your Google identity a cluster-admin role before attempting to create additional Role or ClusterRole permissions.

This is a known issue in the Beta release of Role-Based Access Control in Kubernetes and Container Engine version 1.6.

So in order to proceed without errors, the cluster-admin role should first be bound to the user running the deploy, e.g.:

kubectl create clusterrolebinding your-user-cluster-admin-binding --clusterrole=cluster-admin [email protected]

Could this be added as a faq/hint etc. somewhere?

Thank you!

All 30 comments

Yes, I think it would definitely be worth starting an FAQ/Troubleshooting document in the Documentation folder. Do you want to do a PR for this, @gytisgreitai?

sure :)

Closing here as #360 is merged. Thanks a lot @gytisgreitai !

So I followed the instructions in #360, but I'm still stuck with "forbidden" access. Is there anything else that needs to be done? Do the credentials need to be recreated, or do any other commands need to be run?

Should be fine just with that command.

  • What Kubernetes version are you using?
  • Do you see your clusterrolebinding when doing kubectl get clusterrolebinding ?
  • What does kubectl get clusterrolebinding [yourclusterbindingname] -o json output?

As a quick hack you can run kubectl proxy and apply the failed YAMLs via create.

  • kubectl version
$ kubectl version --short
Client Version: v1.6.0
Server Version: v1.6.2
  • I do see my clusterrolebinding.
$ kubectl get clusterrolebinding | grep myname-cluster-admin-binding
myname-cluster-admin-binding                   17h
  • kubectl get clusterrolebinding myname-cluster-admin-binding -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1beta1",
    "kind": "ClusterRoleBinding",
    "metadata": {
        "creationTimestamp": "2017-05-14T23:09:25Z",
        "name": "myname-cluster-admin-binding",
        "resourceVersion": "188788",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/myname-cluster-admin-binding",
        "uid": "<snipped>"
    },
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "cluster-admin"
    },
    "subjects": [
        {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "User",
            "name": "<my-gmail-username>@gmail.com"
        }
    ]
}

I literally used the name myname-cluster-admin-binding, but that shouldn't be an issue, correct?

Well I've just tested this on 1.6.2 and it seems to work fine for me.
And gcloud info | grep Account outputs <my-gmail-username>@gmail.com yes?

What access rights does this email have in IAM?

  • Yes, gcloud info | grep Account outputs my email.
  • This user is an Owner on the project.

(screenshot: IAM console showing the account as a project Owner)

I'll let this issue die out since it only seems to be my problem, but I'll report back if I find a fix.

I ran into this, and it turns out that even though gcloud info | grep Account tells me my username is [email protected], my username really is [email protected]! The case matters, and I had to fix the case to make it work.

I could determine this by looking at the apiserver logs, which I could do on GKE with:

kubectl proxy
curl -s http://localhost:8001/logs/kube-apiserver.log > apiserverlogs

Yes, gcloud info | grep Account outputs my email.

That's not always true. For example, @yuvipanda's case showed that the username GKE recognized was capitalized in places. So, two questions:

1) Why does the capitalization happen?
2) How do we retrieve that _modified/capitalized/real_ username programmatically? I am looking for support from client-go, for example...

Okay, I found the command below actually works better:

$ gcloud projects get-iam-policy PROJECT_ID

It returns the user's email, and in my case the capitalized one. So it's probably a good alternative if gcloud info | grep Account doesn't work.

Tried all the above, but still no luck:

kubectl version --short
Client Version: v1.8.4
Server Version: v1.8.4-gke.0
Error: release prometheus-operator failed: clusterroles.rbac.authorization.k8s.io "prometheus-operator-prometheus-operator" is forbidden: attempt to grant extra privileges

➜  prometheus-operator git:(master) git rev-parse HEAD
848497cb6dfbab44f99c8facc2da5c93e87ec6c4

Anything else I could try?

That's very curious. As this seems to be an RBAC issue that (so far) has only ever been reported on GKE, I would try contacting their support.

For what it is worth, gcloud auth list and gcloud config get-value account might be an easier way to check the currently configured account name.

@mattnworb feel free to open a PR to improve docs where possible :slightly_smiling_face:

Based on @ngtuna's response, here's what I did to streamline the process a bit further.

PROJECT_ID='<project_id>'

gcloud projects get-iam-policy "$PROJECT_ID" --format json \
  | jq -r '.bindings[] | select(.role == "roles/owner") | .members[]' \
  | awk -F':' '{print $2}'

# [email protected]

@mattnworb Both gcloud auth list and gcloud config get-value account still returned the incorrect un-capitalized email address for me.

@jason-riddle your code snippet is nice, but it only works for roles/owner. It can happen that the user is a roles/container.clusterAdmin but not a project owner.
gcloud auth list and gcloud config get-value account also return the all-lowercase user for me.
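For that more general case, a variant of the earlier jq snippet that lists every user member regardless of role might look like this (a sketch, assuming jq is available):

```shell
# List all "user:" members in the project's IAM policy,
# whatever role they are bound to.
gcloud projects get-iam-policy "$PROJECT_ID" --format json \
  | jq -r '.bindings[].members[] | select(startswith("user:")) | ltrimstr("user:")' \
  | sort -u
```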

The instructions here don't seem to work for me: I have a clusterrolebinding pointing at my G Suite account email, but no luck. In the GCP IAM console I'm a project owner and Kubernetes Engine Admin as well.

$ kubectl version --short
Client Version: v1.8.6
Server Version: v1.8.6-gke.0

Sorted this out. Go to Stackdriver Logging, select the appropriate Kubernetes cluster and the error log level.

Apply the next advanced filter:

resource.type="k8s_cluster"
resource.labels.location="europe-west1-b"
resource.labels.cluster_name="your-cluster-name"
severity>=ERROR
protoPayload.resourceName="rbac.authorization.k8s.io/v1beta1/clusterroles/prometheus-operator"

And you'll find errors like:

k8s.io create prometheus-operator 20456435270447878856446 {"@type":"type.googleapis.com/google.cloud.audit.AuditLog","status":{"code": ...

Next, copy that long numeric principalEmail and pass it to the clusterrolebinding command as the user:

kubectl create clusterrolebinding 20456435270447878856446-cluster-admin-binding --clusterrole=cluster-admin --user=20456435270447878856446

and you'll be able to create the prometheus-operator cluster role.

@Blasterdick it may be obvious, but keep in mind that using an actual email address for a user account, e.g. [email protected], will also work.

@RobinUS2 for my setup with a GKE v1.9.2 cluster, using an actual G Suite account email doesn't work; I tried that first.

@Blasterdick interesting, because in my case there was no principal ID number to be found and I tried the email and it did work. At least it's worth testing both in case you are not making progress.

What I needed in my case was to create a ClusterRoleBinding to give my helm tiller super-powers :muscle:

First of all:

kubectl create serviceaccount tiller --namespace kube-system

Then create a tiller-clusterrolebinding.yaml file:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

And finally:

kubectl create -f tiller-clusterrolebinding.yaml
helm init --service-account tiller --upgrade

Source.

@fracasula I've followed your instructions and I'm still getting "forbidden" output, as detailed below:

Error: release prometheus-operator failed: clusterroles.rbac.authorization.k8s.io "prometheus-operator" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["thirdpartyresources"], APIGroups:["extensions"], Verbs:["*"]} PolicyRule{Resources:["customresourcedefinitions"], .... [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]]

I'm using minikube v0.25.0 on my local Windows machine.

````yaml
$ kubectl get clusterrolebindings
tiller                      1h
tiller-clusterrolebinding   6m

$ kubectl get clusterrolebinding -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    creationTimestamp: 2018-05-08T18:40:07Z
    name: tiller
    namespace: ""
    resourceVersion: "38831"
    selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller
    uid: 3ad20aa3-52ef-11e8-9e50-00155dd8f11a
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    creationTimestamp: 2018-05-08T19:52:43Z
    name: tiller-clusterrolebinding
    namespace: ""
    resourceVersion: "41447"
    selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller-clusterrolebinding
    uid: 5f8b5e03-52f9-11e8-a703-00155dd8f11a
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

$ kubectl version --short
Client Version: v1.9.7
Server Version: v1.9.0
````

Update: it turns out I was missing the cluster-admin role. I applied the following cluster-admin.yaml to create it.

````yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: cluster-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
````

kubectl apply -f cluster-admin.yaml

To add to @Blasterdick's solution of looking through the Stackdriver logs:
we can also get the principalEmail from the client_id field in the service-account.json file.

A sample YAML config for this could be:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <CLIENT_ID>-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: "<CLIENT_ID>"
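Building on that, the binding could also be created directly from the key file without opening it by hand (a hypothetical sketch; it assumes jq is available and the key file is named service-account.json):

```shell
# Pull the numeric client_id out of the service account key file
# and bind cluster-admin to that identity.
CLIENT_ID=$(jq -r '.client_id' service-account.json)
kubectl create clusterrolebinding "${CLIENT_ID}-cluster-admin-binding" \
  --clusterrole=cluster-admin \
  --user="$CLIENT_ID"
```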

I followed the solutions here, adding both the capitalised and lower-case email accounts to clusterrolebindings, but I'm still getting 403. Has anyone else run into the same situation?

I also had to use the user ID in my ClusterRoleBinding instead of the email address.
See @sipian's post for how to find it.

I just ran into the problem that gcloud config get-value account returned my email as @googlemail.com, whereas Kubernetes was expecting my account as @gmail.com. So assigning the cluster role binding to the @gmail.com user was required for it to work. Maybe this helps someone.
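A guard for that pitfall might look like this (a sketch; it just rewrites the domain before creating the binding, and the binding name is an arbitrary choice):

```shell
# Normalize a googlemail.com address to the gmail.com form
# that Kubernetes on GKE expected in this case.
ACCOUNT=$(gcloud config get-value account | sed 's/@googlemail\.com$/@gmail.com/')
kubectl create clusterrolebinding my-cluster-admin-binding \
  --clusterrole=cluster-admin --user="$ACCOUNT"
```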

Got here because of the forbidden attempt to grant extra privileges error. The easiest fix is to upgrade the cluster to 1.12.x if you don't want to add a clusterrolebinding - https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap

On 1.12.x, Google Cloud IAM does allow the creation of Roles and ClusterRoles.
