The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef
Kubernetes version: v1.16.3
Installation method:
Dashboard version: 2.0.0-beta8
Operating system: Ubuntu 18.04
Node.js version ('node --version' output): nvm/12
Go version ('go version' output): Not installed
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef
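For anyone hitting this: roleRef on a (Cluster)RoleBinding is immutable, so kubectl apply fails whenever the manifest names a different ClusterRole than the existing binding does. A quick way to check what the current binding points at (a sketch, assuming the binding still exists in the cluster):

```shell
# Print the ClusterRole the existing binding references; if this differs from
# the roleRef in the manifest you are applying, `kubectl apply` will fail.
kubectl get clusterrolebinding kubernetes-dashboard \
  -o jsonpath='{.roleRef.kind}/{.roleRef.name}{"\n"}'
```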
Related?
You should delete Dashboard related resources before upgrading or only edit deployment to bump image version.
/close
@floreks: Closing this issue.
In response to this:
You should delete Dashboard related resources before upgrading or only edit deployment to bump image version.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
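A minimal sketch of the delete-then-reapply flow suggested above, assuming the same v2.0.0-beta8 recommended.yaml was used for the original install (adjust the URL if your original manifest differs):

```shell
# Remove the old Dashboard resources, including the cluster-scoped ones
# (the ClusterRole/ClusterRoleBinding are not removed with the namespace alone),
# then apply the new manifest from scratch.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
```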
Would be useful to give the command used to delete as well :)
I have deleted the resources and am still facing the same issue, though I am using minikube on a local machine.
This issue occurs if a user simply follows the instructions in the current master documentation to enable admin access. If you first apply recommended.yaml and then follow the admin instructions, the error appears. The default deployment uses roleRef.name: kubernetes-dashboard, whereas the documentation refers to cluster-admin.
The problem is that the documentation conflates the two accounts. In the README, it continues to reference the kubernetes-dashboard binding... yet changes the roleRef, which causes the deployment to fail. Diligent users will find the solution in the sample user documentation, but that document is not referenced in any way from the official Admin Access README.
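To make the conflict concrete, here is a sketch of how the two sets of instructions collide, and one way around it: delete the shipped binding first, then recreate it with the roleRef the admin docs expect (the binding and service-account names below follow the recommended.yaml defaults):

```shell
# recommended.yaml creates ClusterRoleBinding "kubernetes-dashboard" with
# roleRef.name=kubernetes-dashboard; the admin instructions re-apply the same
# binding name with roleRef.name=cluster-admin. Since roleRef is immutable,
# the second apply fails. Deleting the binding before recreating it avoids this:
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
```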
In case it's useful for anyone, I needed to run this command: kubectl delete clusterrolebinding kubernetes-dashboard to stop the error appearing.
@joeczucha that did not stop the issue from appearing even after deleting the kubernetes-dashboard namespace
In case it's useful for anyone, I needed to run this command:
kubectl delete clusterrolebinding kubernetes-dashboard to stop the error appearing.
Thank you, Joe. It worked well for me.
Would be useful to give the command used to delete as well :)
You can just run kubectl delete using the YAML file you used to create the previous Kubernetes Dashboard resources, e.g. kubectl delete -f <your-yaml-file-here>, and then run kubectl apply with the new YAML file.