Describe the bug
When deploying the kubernetes-dashboard to a dedicated namespace, the container does not start and fails with
2020/02/17 07:40:15 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: secrets "kubernetes-dashboard-csrf" is forbidden: User "system:serviceaccount:dashboard:dashboard-kubernetes-dashboard" cannot get resource "secrets" in API group "" in the namespace "kube-system"
Version of Helm and Kubernetes:
helm: 3.0.1
kubernetes: 1.17.2
Which chart:
stable/kubernetes-dashboard
What happened:
Container does not start and fails with
2020/02/17 07:40:15 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: secrets "kubernetes-dashboard-csrf" is forbidden: User "system:serviceaccount:dashboard:dashboard-kubernetes-dashboard" cannot get resource "secrets" in API group "" in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003833a0)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00053ae00)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00053ae00)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
What you expected to happen:
Pod starts without errors
How to reproduce it (as minimally and precisely as possible):
Create values.dashboard.yml
image:
  repository: kubernetesui/dashboard
  tag: v2.0.0-rc5
  pullPolicy: IfNotPresent
  pullSecrets: []
rbac:
  create: true
  clusterAdminRole: false
  clusterReadOnlyRole: true
serviceAccount:
  create: true
  name:
Install helm chart
helm install dashboard stable/kubernetes-dashboard --namespace dashboard -f values.dashboard.yml
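The panic is visible in the pod logs, e.g. (assuming the deployment gets the usual <release>-<chart> name as seen in the error above):
kubectl -n dashboard logs deployment/dashboard-kubernetes-dashboard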
Anything else we need to know:
It does not seem to matter what value clusterReadOnlyRole has, as I tried both true and false.
Also deploying it into kube-system does not help.
May be related to #15118 and #9776
Resources:
kubectl -n dashboard get role,clusterrole,rolebinding | grep dashb
role.rbac.authorization.k8s.io/dashboard-kubernetes-dashboard 85m
clusterrole.rbac.authorization.k8s.io/dashboard-kubernetes-dashboard-readonly 85m
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard 44d
kubectl -n dashboard describe role dashboard-kubernetes-dashboard
Name: dashboard-kubernetes-dashboard
Labels: app=kubernetes-dashboard
chart=kubernetes-dashboard-1.10.1
heritage=Helm
release=dashboard
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
configmaps [] [] [create]
secrets [] [] [create]
secrets [] [dashboard-kubernetes-dashboard] [get update delete]
secrets [] [kubernetes-dashboard-key-holder] [get update delete]
configmaps [] [kubernetes-dashboard-settings] [get update]
services/proxy [] [heapster] [get]
services/proxy [] [http:heapster:] [get]
services/proxy [] [https:heapster:] [get]
services [] [heapster] [proxy]
/kind bug
If I change the role accordingly, i.e.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: {{ template "kubernetes-dashboard.name" . }}
    chart: {{ template "kubernetes-dashboard.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: "{{ template "kubernetes-dashboard.fullname" . }}-readonly"
  namespace: {{ .Release.Namespace }}
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups:
      - ""
    resources:
      - secrets
    resourceNames:
      - kubernetes-dashboard-key-holder
      - {{ template "kubernetes-dashboard.fullname" . }}
      - kubernetes-dashboard-csrf
    verbs:
      - get
      - update
      - delete
The error is different
2020/02/17 17:17:06 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: secrets "kubernetes-dashboard-csrf" not found
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc000530b00)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0002b8100)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0002b8100)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
The helm chart itself does not seem to create the kubernetes-dashboard-csrf secret; at least I don't find a reference to it.
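As a workaround, pre-creating an empty secret with that name in the release namespace might let the pod come up. This is just a sketch of what could be tried, assuming the dashboard only needs to get and update the secret once it exists:
kubectl -n dashboard create secret generic kubernetes-dashboard-csrf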
+1
+1
There is a PR https://github.com/helm/charts/pull/15744
@papanito Looks like that PR is not going to be merged though
Yep, right @OliverLeighC, and there also seems to be another, more relevant PR https://github.com/kubernetes/dashboard/pull/4502
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
still valid imho
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
@papanito there is a new chart directly in the dashboard repo that is close to being released.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
@pierluigilenoci what is the status?
@papanito the new chart is released.
@pierluigilenoci Please could you share the link to this new chart?
So my issue can be closed then...
@pierluigilenoci Can you share the link to github.com with me as well? SCNR ;) Thanks for your work
@pierluigilenoci @jtyr has a valid point. I was actually expecting to find the new helm chart in this repo (https://github.com/helm/charts), however I see it is in another repo (https://github.com/kubernetes/dashboard).
This also means I have to add the kubernetes-dashboard helm repo; see the commands below.
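If I understand it correctly, it would be something along these lines (repo URL taken from the kubernetes/dashboard docs; release name and namespace are just examples):
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm install dashboard kubernetes-dashboard/kubernetes-dashboard --namespace dashboard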
However, many thanks for your work, glad to have an updated helm chart ;-)
Dear @papanito, I think you and @jtyr will have to get over it, because the old chart repo is deprecated and will be discontinued next November [1]. All charts will be migrated elsewhere (specifically to https://hub.helm.sh/), so I can only suggest being prepared to migrate to Helm 3 and to add a lot of repos.
[1] https://github.com/helm/charts#deprecation-timeline
@schmichri don't play with fire 😜
Thanks @pierluigilenoci, I was not aware of that. Yeah, maybe I need better :eyes: