Dashboard: Dashboard not working after re-deployment in GCE

Created on 27 Sep 2017  ·  27 Comments  ·  Source: kubernetes/dashboard

Environment
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0
Kubernetes version: 1.7.4 on the node pool and 1.7.6 on the master
Running on GCE
Steps to reproduce

Have the default GCE cluster running with 1.7.5. Verify the dashboard works on http://localhost:8001/ui
Then try to deploy the recommended version:
https://github.com/kubernetes/dashboard/blob/master/src/deploy/recommended/kubernetes-dashboard.yaml
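Concretely, the deployment step can be sketched as below (a dry-run sketch: the commands are only echoed, and assume kubectl is already pointed at the GCE cluster):

```shell
# Dry-run sketch of the reproduction; commands are echoed rather than run.
# Assumes kubectl is already configured for the GCE cluster.
DASHBOARD_YAML="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml"
echo "kubectl proxy"                     # serves the UI at http://localhost:8001/ui
echo "kubectl apply -f $DASHBOARD_YAML"  # deploy the recommended manifest
```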

Observed result

The recommended version fails with error:

secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" configured
service "kubernetes-dashboard" configured
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
Expected result

To see the dashboard

Comments

A colleague of mine deployed this kubernetes-dashboard after a mistake, and now I can't get it back. I've tried the alternative version and other things, but I can't seem to get it working again.

Most helpful comment

If you enabled RBAC, just type

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

and

➜  ~ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" unchanged
serviceaccount "kubernetes-dashboard" unchanged
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" unchanged
deployment "kubernetes-dashboard" unchanged
service "kubernetes-dashboard" unchanged

All 27 comments

This looks to me like privilege escalation protection. Are you sure that the account you want to use to apply the dashboard.yaml has the necessary rights to create secrets etc.? You can't grant more permissions than your own account has in Kubernetes.

Exactly, as @marco-jantke said. Just look at the error message. It says forbidden, which means you do not have the privileges to create all the resources. Only a cluster admin can deploy Dashboard.
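One way to confirm this up front is kubectl auth can-i, which asks the API server whether the current identity may perform a given verb; RBAC refuses to create a Role that grants rights its creator does not hold. A dry-run sketch, mirroring the secrets rules from the error message (the commands are composed and echoed rather than executed):

```shell
# Compose the permission checks matching the kubernetes-dashboard-minimal
# Role; each should answer "yes" for the deploying user. Echoed as a dry
# run; remove the surrounding echo/variable to query a live cluster.
CHECKS="$(for verb in create watch get update delete; do
  echo "kubectl auth can-i $verb secrets --namespace kube-system"
done)"
echo "$CHECKS"
```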

@marco-jantke @floreks I did look at the error message, but I created a fresh cluster; shouldn't my account be the administrator then?

I don't know the GCE cluster setup, so I can't tell if it should or not. I see, however, that the server responds with a Forbidden error, which means you do not have access to create some resources.

Make sure to grant yourself the Container Engine Admin/Cluster Admin rights in GC IAM. Hope this helps, but further support for that is not part of the kubernetes/dashboard project.
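For reference, granting that IAM role from the command line looks roughly like this (a sketch: the project id and account are placeholders, and roles/container.admin is the role id corresponding to Container Engine Admin; verify against your gcloud version):

```shell
# Dry-run sketch: grant your Google account the Container Engine Admin IAM
# role on the project. PROJECT_ID and ACCOUNT are illustrative placeholders;
# normally ACCOUNT comes from 'gcloud config get-value account'.
PROJECT_ID="my-project"          # hypothetical project id
ACCOUNT="user@example.com"       # hypothetical account
echo "gcloud projects add-iam-policy-binding $PROJECT_ID --member=user:$ACCOUNT --role=roles/container.admin"
```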

Hi, I am a cluster admin and I am still getting the same error, any ideas?

Also, because my master had been updated to 1.7.6-gke.1, the dashboard stopped working.

The dashboard has stopped working for me on 1.7.6-gke.1 as well across 5 clusters. I can see that my nodes are still at 1.7.5.

I have exactly the same problem, with nodes being stuck on 1.7.5. Trying to update them yields an error.

I tried to manually deploy the dashboard, but it does not work either, showing the same errors (missing static files).

Best, Bartosz

"Stopped working" does not really help us diagnose the problem. We need much more details together with logs from Dashboard to be able to help or point you in the right direction.

Same issue here: kube 1.7.6-gke.1 on GKE, cluster admin, still getting the error:

Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]}] user=&{* [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

It was deploying correctly before upgrading the master from 1.7.5 to 1.7.6.

same here

Experiencing the same issue here.

kubernetes 1.8.4
Latest Dashboard 1.8.0
Fresh installation on AWS.
Rebuilding the cluster shows no change.

Error:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["http:heapster:"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["https:heapster:"], APIGroups:[""], Verbs:["get"]}] user=&{worker  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

The user that you are trying to create Dashboard with has no permission to create a Role with some of these verbs. You need to use an admin account that has all the privileges to create objects in the cluster.

@floreks I'm not using any specific user, other than kubernetes-admin in my kubeconfig.

Can you explain a little further what I'm doing wrong?

Here is a copy of my kubeconfig:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
users:
  - name: kubernetes-admin
    user:
      client-certificate: ssl/worker.pem
      client-key: ssl/worker-key.pem
current-context: kubernetes-admin@kubernetes
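Worth noting: with client-certificate auth, the username the API server sees comes from the certificate's CN (and its groups from the O fields), not from the user: name in kubeconfig. This self-contained sketch generates a throwaway cert to show where that identity lives; on a real setup, point openssl at the file referenced by client-certificate (here ssl/worker.pem):

```shell
# Self-contained sketch: the API server derives the identity from the client
# certificate subject (CN = username, O = group), not from the kubeconfig
# user name. The subject below is purely illustrative.
TMP="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$TMP/worker-key.pem" -out "$TMP/worker.pem" \
  -subj '/O=example-group/CN=worker' 2>/dev/null
SUBJECT="$(openssl x509 -noout -subject -in "$TMP/worker.pem")"
echo "$SUBJECT"
rm -rf "$TMP"
```

If the subject says CN=worker, the API server authenticates you as "worker" no matter what the user entry in kubeconfig is called.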

I ran into the same problem on a fresh GKE 1.8.3 cluster. I have the cluster-admin role binding active for my user (kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<myusername>), but when I run kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml to update the dashboard, it fails when trying to create the new role with:

Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["http:heapster:"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["https:heapster:"], APIGroups:[""], Verbs:["get"]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" 
"/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

I am a bit at a loss, because I would think that having cluster-wide admin RBAC set up for my user should make this kind of error impossible. What can I do to debug this problem further? Thanks!

The GKE kubernetes setup is more restrictive AFAIK. You need to use their API to grant yourself the necessary privileges.

Hi @floreks, I think we're still unsure how to resolve this issue.
Could you clarify whether you mean that we need to run this within the cluster-admin context? That is what I'm attempting right now, and I am still getting the exact same error.

I am attempting this on AWS, but it is basically the same as a bare metal install.

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: cluster-admin
  name: cluster-admin@kubernetes
users:
  - name: cluster-admin
    user:
      client-certificate: ssl/worker.pem
      client-key: ssl/worker-key.pem
current-context: cluster-admin@kubernetes

Otherwise is there some other way in which I'm supposed to run this "as the admin user"?

I am pretty sure that this setup is not the same as "bare metal" or kubeadm, because I have used both and there is no problem with deploying Dashboard. It has to be an environment-specific issue, and your "admin" is not an actual admin with all privileges. I can't solve this for you, as I don't have access to GKE or AWS to test their deployments of kubernetes.

This might help with GKE setup: https://cloud.google.com/kubernetes-engine/docs/how-to/iam-integration

I believe you need to use gcloud and their API to grant yourself more privileges. As for AWS this might be a similar case.

The AWS deployment I have is an environment that is simply running on top of AWS, hence it's like a bare metal deployment. It seems beside the point, but just clarifying that it's not on GKE/GC.

Is there a setting or api-server flag that should be enabled for this? I never had this problem with previous versions of the dashboard. Also, the dashboard actually works, but I get this annoying error, and I understand that it's a security concern. I'm trying to do this correctly with RBAC.

Can you paste the api-server parameters you are using to start it?

@floreks, and everyone, I think I managed to fix the issue (at least for my setup).
I got some help from @liggitt on the kubernetes slack, who was super awesome.

_THESE ARE ALL THE STEPS I USED:_

First I determined that I did not have the correct roles installed, which should be set up by the api-server by default:

$ kubectl get roles --all-namespaces
No resources found.

I needed to run the api-server with the flag --authorization-mode=RBAC,AlwaysAllow, which I learned enables RBAC but falls back to AlwaysAllow if authorization fails.
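For illustration, the relevant part of the kube-apiserver invocation (a fragment, not a complete command; AlwaysAllow is only a temporary bootstrap fallback):

```
# Excerpt of a hypothetical kube-apiserver invocation.
# RBAC is tried first; AlwaysAllow admits anything RBAC denies, so this
# combination is only for bootstrapping — switch to RBAC alone afterwards.
kube-apiserver \
  --authorization-mode=RBAC,AlwaysAllow \
  ...remaining flags unchanged...
```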

This is verified in the api-server logs which will show a bunch of lines like:

Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.955830    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.970721    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:discovery
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.985079    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.005096    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/admin
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.032102    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/edit
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.048804    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/view

This is not a production-recommended solution, so I still needed to bind my user to a proper role. However, it worked:
However, it worked:

$ kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               23m
kube-system   extension-apiserver-authentication-reader        23m
kube-system   system::leader-locking-kube-controller-manager   23m
kube-system   system::leader-locking-kube-scheduler            23m
kube-system   system:controller:bootstrap-signer               23m
kube-system   system:controller:cloud-provider                 23m
kube-system   system:controller:token-cleaner                  23m

Next I discovered that the only subject granted SuperUser access by default is the system:masters group, not any particular username.
So my Admin cert creation process needed to include O=system:masters as the Org name:

$ openssl genrsa -out config/ssl/admin-key.pem 2048
$ openssl req -new -key config/ssl/admin-key.pem -out config/ssl/admin.csr -subj '/C=AU/ST=Some-State/O=system:masters/CN=cluster-admin'
$ openssl x509 -req -in config/ssl/admin.csr -CA config/ssl/ca.pem -CAkey config/ssl/ca-key.pem -CAcreateserial -out config/ssl/admin.pem -days 365
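Before wiring the new cert into kubeconfig, it's worth confirming that O=system:masters actually made it into the subject. A self-contained sketch (it generates a throwaway self-signed cert with the same subject; the real admin cert is signed by the cluster CA instead):

```shell
# Self-contained check: create a throwaway self-signed cert with the same
# subject as above and confirm O=system:masters is present. In the real
# flow you would inspect config/ssl/admin.pem instead.
TMP="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$TMP/admin-key.pem" -out "$TMP/admin.pem" \
  -subj '/C=AU/ST=Some-State/O=system:masters/CN=cluster-admin' 2>/dev/null
SUBJECT="$(openssl x509 -noout -subject -in "$TMP/admin.pem")"
echo "$SUBJECT"
rm -rf "$TMP"
```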

I changed my api-server flag to only --authorization-mode=RBAC and restarted services.
Using my new cert in my kubeconfig:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: cluster-admin
  name: cluster-admin@kubernetes
users:
  - name: cluster-admin
    user:
      client-certificate: ssl/admin.pem
      client-key: ssl/admin-key.pem
current-context: cluster-admin@kubernetes

I was able to successfully query:

$ kube-deploy get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               42m
kube-system   extension-apiserver-authentication-reader        42m
kube-system   system::leader-locking-kube-controller-manager   42m
kube-system   system::leader-locking-kube-scheduler            42m
kube-system   system:controller:bootstrap-signer               42m
kube-system   system:controller:cloud-provider                 42m
kube-system   system:controller:token-cleaner                  42m

Lastly, with the correct roles bound, I could create Dashboard with the correct permissions, using only RBAC:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

This is what worked for me; I hope anyone who finds this finds it helpful. 👍

Great feedback :) This is probably the same issue as on GKE. There are some additional steps required to enable RBAC. We'll link this solution in our FAQ so everyone can benefit from it.

If you enabled RBAC, just type

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

and

➜  ~ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" unchanged
serviceaccount "kubernetes-dashboard" unchanged
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" unchanged
deployment "kubernetes-dashboard" unchanged
service "kubernetes-dashboard" unchanged

I also had a test cluster with the same issue. Adding --authorization-mode=RBAC fixed it. Not sure if this is the only reason, but I wanted to add it in case someone else has this problem.

I found that even with the owner role ("Full access to all resources"), the suggestion from mofelee is still needed on GKE:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

If you enabled RBAC, just type

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

and

➜  ~ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" unchanged
serviceaccount "kubernetes-dashboard" unchanged
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" unchanged
deployment "kubernetes-dashboard" unchanged
service "kubernetes-dashboard" unchanged

This is the true resolution.
