Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment:
Minikube version (use minikube version): v0.25.0
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver":
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json
What happened: Trying to bring up minikube with default RBAC roles. Simply running minikube start --vm-driver hyperkit without the extra-config yields no roles. To get the default roles, I added the extra-config: minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC.
The expected roles are present, but the dashboard and dns pods do not fully come up:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 1/1 Running 0 1m
kube-system kube-dns-54cccfbdf8-vqdgw 2/3 Running 0 1m
kube-system kubernetes-dashboard-77d8b98585-djkcf 0/1 CrashLoopBackOff 3 1m
kube-system storage-provisioner 1/1 Running 0 1m
kube-system tiller-deploy-587df449fb-b8wd6 1/1 Running 0 50s
Tailing the dashboard logs shows:
panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create secrets in the namespace "kube-system"
The error can be fixed by creating the missing clusterrolebinding:
$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding "kube-system-cluster-admin" created
This should exist by default.
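For anyone who prefers a declarative version of that workaround, here is a minimal sketch of the same binding as a manifest (the binding name kube-system-cluster-admin simply mirrors the imperative command above; it is not something minikube creates on its own):

kubectl apply -f - <<EOF
# Grants cluster-admin to the default ServiceAccount in kube-system,
# which is the account the dashboard and kube-dns pods run as here.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

Note that this is the same very broad grant as the one-liner; the manifest form just makes the workaround easy to track and delete later.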
What you expected to happen: All pods come up without any intervention.
How to reproduce it (as minimally and precisely as possible):
minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC
Output of minikube logs (if applicable):
Anything else we need to know: The kubeadm bootstrapper installs the RBAC roles correctly by default without requiring the extra-config.
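For reference, a sketch of opting into the kubeadm bootstrapper explicitly on a v0.25-era minikube (the flag shows up later in this thread as the default; the exact extra-config key the kubeadm path expects may differ from the localkube-style apiserver.Authorization.Mode used above):

# Hedged example: select the kubeadm bootstrapper, which ships the
# standard RBAC roles and bindings for the kube-system addons.
minikube start --vm-driver hyperkit --bootstrapper=kubeadm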
Digging into this, I found the following in the kubedns container under the kube-dns pod:
$ kubectl logs kube-dns-... kubedns
E0221 23:56:46.848563 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
E0221 23:56:46.848806 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0221 23:56:46.848835 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope
My pods were in the same state as @berndtj
My fix for this came from kubernetes-incubator/service-catalog/issues/1069:
kubectl create clusterrolebinding fixRBAC --clusterrole=cluster-admin --serviceaccount=kube-system:default
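cluster-admin on the default ServiceAccount is a very broad grant. Going only by the three forbidden errors above, a narrower sketch would look something like the following; the kube-dns-reader name and rule list are my own assumption rather than anything from an official addon manifest, and this does not cover the dashboard's need to create secrets:

kubectl apply -f - <<EOF
# Assumed minimal rules matching the kube-dns errors above:
# list/watch configmaps, services and endpoints.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-dns-reader
rules:
- apiGroups: [""]
  resources: ["configmaps", "services", "endpoints"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-dns-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-dns-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF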
Environment:
minikube start --kubernetes-version v1.9.0 --vm-driver=hyperkit --extra-config='apiserver.Authorization.Mode=RBAC'

Looks like this is fixed now? --bootstrapper=kubeadm is the default and seems enabled:
kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep RBAC
- --authorization-mode=Node,RBAC
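A quick way to see whether the deny described in this issue would still hit a given cluster, without waiting for the dashboard to crash-loop, is an impersonated permission check; a small sketch:

# "no" reproduces the symptom in this issue; "yes" means the binding
# (or a permissive authorizer) is already in place.
kubectl auth can-i create secrets -n kube-system \
  --as=system:serviceaccount:kube-system:default
kubectl auth can-i list configmaps -n kube-system \
  --as=system:serviceaccount:kube-system:default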
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Any news on this?
I am having this issue, and the above workaround to create the clusterrolebinding worked for me.
$ minikube version
minikube version: v0.28.2
Example detailing the symptoms and workaround:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 1m
kube-system kube-addon-manager-minikube 1/1 Running 1 1m
kube-system kube-apiserver-minikube 1/1 Running 0 1m
kube-system kube-controller-manager-minikube 1/1 Running 0 1m
kube-system kube-dns-86f4d74b45-4hhn2 3/3 Running 1 4m
kube-system kube-proxy-4cb8c 1/1 Running 0 1m
kube-system kube-scheduler-minikube 1/1 Running 1 1m
kube-system kubernetes-dashboard-5498ccf677-dq2ct 0/1 CrashLoopBackOff 4 4m
kube-system storage-provisioner 1/1 Running 2 4m
$ kubectl logs -n kube-system kubernetes-dashboard-5498ccf677-dq2ct | tail
2018/09/06 15:06:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create secrets in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc4204a16c0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a79e00, 0xc42034eb40, 0xc42034eb40, 0x1278c20)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x83
main.initAuthManager(0x1a79300, 0xc420062900, 0x384, 0x1, 0x1)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:161 +0x12f
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:95 +0x27b
$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created
$ kubectl delete pods -n kube-system kubernetes-dashboard-5498ccf677-dq2ct
pod "kubernetes-dashboard-5498ccf677-dq2ct" deleted
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 4m
kube-system kube-addon-manager-minikube 1/1 Running 1 3m
kube-system kube-apiserver-minikube 1/1 Running 0 3m
kube-system kube-controller-manager-minikube 1/1 Running 0 3m
kube-system kube-dns-86f4d74b45-4hhn2 3/3 Running 1 6m
kube-system kube-proxy-4cb8c 1/1 Running 0 4m
kube-system kube-scheduler-minikube 1/1 Running 1 4m
kube-system kubernetes-dashboard-5498ccf677-hnsck 1/1 Running 0 1m
kube-system storage-provisioner 1/1 Running 2 6m
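If you apply the broad workaround binding and later move to a minikube release that ships correct RBAC manifests for its addons, the binding can simply be removed again; a minimal sketch assuming the name used above:

# Delete the workaround binding created earlier in this thread.
kubectl delete clusterrolebinding kube-system-cluster-admin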
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@daxgames: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issue is still open in minikube version: v0.30.0 (Windows 10, VirtualBox).
Running the aforementioned command fixed it :-)
kubectl create clusterrolebinding fixRBAC --clusterrole=cluster-admin --serviceaccount=kube-system:default
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@jvleminc: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @kelbyers, your kubectl commands got me out of this pickle.
Still happens (sometimes) with v1.2.0; the workaround above fixed it.
It would be nice if the dashboard command could detect this quirk and apply the aforementioned fixRBAC fix before starting the dashboard. Like, somewhere around here:
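For the record, roughly what such a pre-flight check could look like if scripted outside minikube; purely a sketch of the idea, not the actual minikube code path linked above:

# Hypothetical wrapper: only create the broad binding if the kube-system
# default ServiceAccount would otherwise be denied secret creation.
if ! kubectl auth can-i create secrets -n kube-system \
    --as=system:serviceaccount:kube-system:default >/dev/null 2>&1; then
  kubectl create clusterrolebinding fixRBAC \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default
fi
minikube dashboard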
Help wanted!
@berndtj : I believe this issue is now addressed by minikube v1.4, as it uses a different dashboard config. If you still see this issue with minikube v1.4 or higher, please reopen this issue by commenting with /reopen
Thank you for reporting this issue!