Dashboard version: 1.8.2
Kubernetes version: 1.8.7
Operating system: Debian Jessie
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-cluster-resources
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: list-cluster-resources-binding
  labels:
    k8s-app: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  apiGroup: ""
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: list-cluster-resources
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read
  labels:
    k8s-app: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  apiGroup: ""
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: very-secret-key
data:
  # echo -n security_key | base64
  security_key: c2VjdXJpdHlfa2V5
---
apiVersion: v1
kind: Pod
metadata:
  name: super-secret
  labels:
    name: super-secret
spec:
  containers:
  - name: super-secret
    image: busybox
    command:
    - /bin/sh
    args:
    - -c
    - while true; do date; sleep 60; done
    env:
    - name: SECURITY_KEY
      valueFrom:
        secretKeyRef:
          name: very-secret-key
          key: security_key
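To reproduce, the manifests above can be applied directly (a sketch; the file name repro.yaml is just an example), then:

# Create the role, bindings, secret, and pod defined above.
$ kubectl apply -f repro.yaml

# Wait for the pod to come up before checking the dashboard.
$ kubectl get pod super-secret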
Visit the secrets list in the dashboard and confirm that the secret can be listed, but not read.
Visit the pod in the dashboard, and click the eye icon next to the secret environment variable. The value is exposed.
Observed: the value of secrets that the dashboard user does not have permission to read can be exposed through pod environment variables.
Expected: secret environment variables should only be visible when the dashboard user is allowed to get secrets.
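The RBAC state can be confirmed outside the dashboard with impersonation (a sketch; it assumes you run kubectl with admin credentials allowed to impersonate, and kubectl auth can-i performs the same authorization check the API server applies to the dashboard's requests):

# list is granted by the custom ClusterRole above...
$ kubectl auth can-i list secrets --as=system:serviceaccount:kube-system:kubernetes-dashboard
yes

# ...but get is not granted by either binding (the built-in view role excludes secrets).
$ kubectl auth can-i get secrets --as=system:serviceaccount:kube-system:kubernetes-dashboard
no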
This actually sounds more like a Kubernetes issue to me. We rely fully on the authorization mechanisms of k8s core. If the apiserver allows reading this value, then there is nothing we can do. The API server should reject the request and throw an unauthorized error.
I'm not able to see the secret from the command line.
$ kubectl describe pod/super-secret
...
SECURITY_KEY: <set to the key 'security_key' in secret 'very-secret-key'> Optional: false
...
I'll try querying the API server manually with curl and see what I can find there.
Looking at the JSON response from the API for the pod, it shows the secret reference, and I can't read the secret through localhost:8001/api/v1/namespaces/default/secrets/very-secret-key. I get an authorization error there.
I'm fairly new to kubernetes, so I don't know if there is another path to reading the secret (or the environment variable value directly?) that the dashboard could be using.
Yes, but the API is different for env variables of a pod and for secrets. Maybe there is an issue with that in the core. There is a field in the container of a pod, EnvVar, that has name, value, and valueFrom fields. I don't know how it is handled on the k8s side, but if the secret value is there even though the user cannot get the secret, then this is a core bug, I think.
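For reference, the pod object the API returns carries only the reference in valueFrom, not the resolved value (a sketch assuming kubectl proxy is running and jq is installed):

$ curl -s localhost:8001/api/v1/namespaces/default/pods/super-secret \
    | jq '.spec.containers[0].env'
[
  {
    "name": "SECURITY_KEY",
    "valueFrom": {
      "secretKeyRef": {
        "name": "very-secret-key",
        "key": "security_key"
      }
    }
  }
]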
Thanks for taking a look. I'm not sure how to track this further in kubernetes from here. I'll leave the issue here for reference in case somebody else is able to look into the kubernetes side of it.
Not only the API for environment variables, but also the exec command for pods (console access) violates the security design for secrets; someone with that permission can get all environment variables with one simple command, as shown below. This is not avoidable even if you use secret volumes instead of environment variables. I believe it's not in the kubernetes dashboard project's scope, as @floreks said; maybe kube-api can do something to prevent revealing secret environment variables so openly.
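For example, with exec permission on the pod above, the resolved value is one command away (a sketch; the value shown follows from the base64 comment in the Secret manifest):

# Anyone with pods/exec permission can read the resolved environment directly.
$ kubectl exec super-secret -- env | grep SECURITY_KEY
SECURITY_KEY=security_key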
To prevent accidents, sensitive configuration in production environments is kept in secrets, e.g. database credentials, third-party API keys and secrets, etc. However, letting an expert diagnose a running process in production is occasionally inevitable, and that is when console access is granted. It's easy to handle in small teams; beyond that you need a properly defined workflow and a supervision system to make the whole process traceable. I think it's not simply a k8s/dashboard issue; it's been there for quite a long time.
Maybe there is already a system designed for this?
Possibly we could implement a workaround on our side for this issue by using the SelfSubjectAccessReview API until this is fixed on the k8s side. I'll mark this as an enhancement.
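A minimal sketch of such a check: the dashboard could POST a SelfSubjectAccessReview for each secret reference and only reveal the value when status.allowed is true. Shown here through kubectl proxy, where the review runs as the proxy user's identity; the dashboard would issue it with the logged-in user's credentials:

$ curl -s -X POST localhost:8001/apis/authorization.k8s.io/v1/selfsubjectaccessreviews \
    -H 'Content-Type: application/json' \
    -d '{
          "apiVersion": "authorization.k8s.io/v1",
          "kind": "SelfSubjectAccessReview",
          "spec": {
            "resourceAttributes": {
              "namespace": "kube-system",
              "verb": "get",
              "resource": "secrets",
              "name": "very-secret-key"
            }
          }
        }'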
I walked through the provided example and found that the issue is with the list verb. When the ServiceAccount has list permission on the secrets resource type, the value of the secret is visible. Therefore, you do not need get permission to see the value of a secret.
So, this is indeed a Kubernetes API gap. You can test the API directly via:
$ kubectl proxy
$ curl localhost:8001/api/v1/namespaces/kube-system/secrets/very-secret-key
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "secrets \"very-secret-key\" is forbidden: User \"system:serviceaccount:kube-system:kubernetes-dashboard\" cannot get secrets in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "very-secret-key",
    "kind": "secrets"
  },
  "code": 403
}
Getting the secret directly is forbidden. But you can access the sensitive data via the list permission.
$ curl localhost:8001/api/v1/namespaces/kube-system/secrets
{
  "kind": "SecretList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/kube-system/secrets",
    "resourceVersion": "936511"
  },
  "items": [
    {
      "metadata": {
        "name": "very-secret-key",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/secrets/very-secret-key",
        "uid": "4e3c9d21-3902-11e8-8cbd-000d3ab89e65",
        "resourceVersion": "935268",
        "creationTimestamp": "2018-04-05T18:51:10Z"
      },
      "data": {
        "security_key": "c2VjdXJpdHlfa2V5"
      },
      "type": "Opaque"
    }
  ]
}
Seems like an oversight to me.
If it's an API issue, should we open a ticket in the kubernetes core repo?
@danehans Definitely. It should be fixed primarily on the core side.
Created the following issue: https://github.com/kubernetes/kubernetes/issues/67420
From what I have tested, revoking the list-secrets role generates the following warning:
secrets is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list secrets in the namespace "default"
Is there a way to avoid that?
@Miouge1 No, you can only close it. We are refactoring it for the new release, but warnings/notifications will stay here.
I think we can close this. The dashboard no longer has the privilege to list secrets (https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended/kubernetes-dashboard.yaml#L45), and with recent fixes in v1.10.1 it should be even more secure.
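For reference, the recommended deployment linked above scopes secret access with a namespaced Role restricted by resourceNames, roughly like this sketch (names abridged; see the linked manifest for the authoritative version):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Only the dashboard's own secrets, by name; no list on secrets at all.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]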
@floreks WDYT?
I think it should be fixed with migration. We can close for now.
/close
@floreks: Closing this issue.
In response to this:
I think it should be fixed with migration. We can close for now.
/close