Describe the bug
Enabling the PodSecurityPolicy admission plugin fails with the error Error: enable-admission-plugins plugin "PodSecurityPolicy" is unknown.
To Reproduce
I'm running the k3d helper like this (same behavior with k3s:v0.5.0 and k3s:v0.6.0-rc2):
$ k3d create -p 6444 -i docker.io/rancher/k3s:v0.6.0-rc2 --server-arg=--kube-apiserver-arg=enable-admission-plugins=PodSecurityPolicy
The docker container running k3s fails to start, and the logs include:
time="2019-05-30T20:40:16.287817900Z" level=info msg="Running kube-apiserver --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --advertise-address=127.0.0.1 --api-audiences=unknown --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --requestheader-allowed-names=kubernetes-proxy --requestheader-username-headers=X-Remote-User --service-account-issuer=k3s --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-group-headers=X-Remote-Group --watch-cache=false --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node-1.crt --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --insecure-port=0 --secure-port=6444 --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-port=6445 --enable-admission-plugins=PodSecurityPolicy"
Error: enable-admission-plugins plugin "PodSecurityPolicy" is unknown
Usage:
kube-apiserver [flags]
...
--enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, NodeRestriction, PersistentVolumeClaimResize, Priority, ResourceQuota, ServiceAccount, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
...
Expected behavior
I expect k3s to start with the requested admission plugin.
Additional context
My goal is to test that some particular pods run in the presence of some PodSecurityPolicy.
As can be seen in the k3s server command help screen, it is stated that:
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, NodeRestriction, PersistentVolumeClaimResize, Priority, ResourceQuota, ServiceAccount, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter
Unfortunately, PodSecurityPolicy isn't on that list. Either way, it would be nice to know when it will be implemented.
Edit: I found out that PodSecurityPolicy was removed in k3s, probably on purpose :(. I couldn't bring it back myself; it seems the Kubernetes changes related to this functionality are quite deep - or it's just me with my almost zero knowledge of Go ;). I'm interested in bringing this plugin to k3s - is it planned? Or maybe there's some way to build k3s with this plugin?
As I understand it, PodSecurityPolicy is the only way to stop someone from doing:
apiVersion: v1
kind: Pod
metadata:
  name: test2
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory
and thereby gaining access to the entirety of the underlying node's filesystem. Can we have it back in k3s? Or maybe an easier way of disabling hostPath mounts? (A sketch of what such a policy could look like is below.)
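For context on that last question: PSP itself can express this restriction, because a policy's volumes field is a whitelist of allowed volume types. Below is a minimal sketch of a policy that simply omits hostPath from that whitelist - the policy name and the exact list of allowed types are my own illustration, not anything shipped with k3s:

# Illustrative only: a PSP whose volumes whitelist omits hostPath,
# so pods admitted under it cannot mount the node's filesystem.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-no-hostpath
spec:
  privileged: false
  allowPrivilegeEscalation: false
  # hostPath is deliberately absent from this list of allowed volume types
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  # the four rule blocks below are required fields of a PSP spec
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'

A pod like the one above would then be rejected at admission (for subjects bound only to this policy) rather than scheduled with the host's root filesystem mounted.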
With k3s >= v0.9.0 you should be able to enable PodSecurityPolicy by passing the server arg
--kube-apiserver-arg enable-admission-plugins=PodSecurityPolicy,NodeRestriction
Can this be closed then? cc @andyjeffries
I'll give it a test as soon as I can this week and reply back.
I got it working when I tried it late last night
@erikwilson / @alexellis - The cluster doesn't come up as normal with this option - I don't know if this is expected behaviour? Alex, when you got it working last night, did you do anything aside from add this option? Looking at the output of ps, the command is running with:
/usr/local/bin/k3s server \
--tls-san OUR_PUBLIC_IP \
--cluster-cidr=192.168.0.0/17 \
--service-cidr=192.168.128.0/17 \
--kubelet-arg system-reserved=cpu=250m,memory=250Mi \
--kubelet-arg kube-reserved=cpu=250m,memory=250Mi \
--kube-apiserver-arg enable-admission-plugins=PodSecurityPolicy,NodeRestriction
Aside: we are trying to reserve some CPU/memory because we had a customer run a huge Java app and overwhelm the cluster, making the API inaccessible - however, this doesn't actually work. It's not enabled by default; this is a "test cluster" mode.
This command runs the k3s master and the k3s nodes can connect to it - but zero pods are running in the cluster, and in /var/log/syslog I get lines like:
Nov 11 17:31:03 kube-node-38b8 k3s[2648]: E1111 17:31:03.059910 2648 replica_set.go:450] Sync "kube-system/coredns-66f496764" failed with pods "coredns-66f496764-" is forbidden: no providers available to validate pod request
We had a user suggest PodSecurityPolicy as a way to stop users from mounting the whole / by default in k3s, but I've never used it myself - so I don't know if this is expected behaviour and I'm missing a step, or if it's simply not working.
@andyjeffries Have you added and set up a PodSecurityPolicy (PSP), plus a (Cluster)Role and (Cluster)RoleBinding to use it?
No, I'm sorry to say I don't know much about this area of k8s (although, studying for the CKA, I expect it'll come up at some point).
Would it be possible to add the necessary PSPs, (Cluster)Roles and (Cluster)RoleBindings to k3s by default? I think this would make a lot of sense, and it wouldn't hurt to have them there even when the admission plugin is off.
I think everything needed to start k3s with the PSP admission plugin should be included; it would then be up to the user to define additional PSPs for their own workloads.
For what it's worth, this is what I used to get it working (note that it is probably more permissive than necessary):
# A permissive PSP that allows anything
# see https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
---
# A ClusterRole that allows using the permissive PSP above
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: permissive-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - permissive
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
# Allow all service accounts in kube-system to use the permissive PSP
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: permissive-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: permissive-psp
subjects:
- kind: Group
  name: system:serviceaccounts
  # apiGroup is required for Group subjects
  apiGroup: rbac.authorization.k8s.io
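Two notes on the manifest above: the RoleBinding grants use to the system:serviceaccounts group, but because a RoleBinding is namespaced, the grant only takes effect for pods in kube-system - which is exactly what lets the bundled components such as coredns start. Also, on k3s you can drop this file into /var/lib/rancher/k3s/server/manifests/ so it is applied automatically when the server starts (this relies on k3s's auto-deploying manifests feature; applying it manually with kubectl works just as well).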
FYI, I've tried enabling it and creating a PSP, a cluster role and its binding, and it works correctly, so you can close this issue.
The only thing I would say is that, in my opinion, this should be enabled by default. It took me a while of trying to make it work before I saw that it is not enabled by default.
But if it's not going to be enabled by default, it would be helpful if creating PSP objects were rejected while the plugin is off; being able to create them anyway is a bit misleading.
I have tried creating a cluster using k3d with the following command
k3d create -n psp --server-arg=--kube-apiserver-arg=enable-admission-plugins=PodSecurityPolicy,NodeRestriction
The cluster is created, but none of the system pods are running, nor mine :)
Here is the typical error:
'FailedCreate' Error creating: pods "coredns-8655855d6-" is forbidden: no providers available to validate pod request
I believe this is because it lacks a default PSP to allow the creation of those pods.
@sgandon see my comment above. :point_up: The necessary PSPs and RBAC are not included with k3s so you have to add them yourself (to allow the system Pods to start).
I want to highlight once again that it would make a lot of sense to include these in k3s by default; it would improve the user experience a lot!
Indeed. Here are the PSP, Role, and RoleBinding I used to make it work:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: r-privileged
  namespace: kube-system
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-privileged
  namespace: kube-system
roleRef:
  kind: Role
  name: r-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
I used only a Role and RoleBinding limited to the kube-system namespace because I wanted to test other PSPs for my workload.
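To illustrate the second half of that (a different, stricter policy for workload namespaces), here is a sketch along the same lines - the namespace my-app and the policy name restricted-no-hostpath (from the sketch earlier in this thread) are hypothetical:

# Illustrative only: bind a stricter PSP to the service accounts of a
# single workload namespace; "my-app" and "restricted-no-hostpath"
# are placeholder names, not part of this thread's manifests.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: r-restricted
  namespace: my-app
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - restricted-no-hostpath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-restricted
  namespace: my-app
roleRef:
  kind: Role
  name: r-restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
# system:serviceaccounts:<namespace> is the built-in group containing
# every service account in that namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:my-app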
And here is the one used by minikube:
https://github.com/kubernetes/minikube/blob/master/deploy/addons/pod-security-policy/pod-security-policy.yaml.tmpl
Sounds like we need an issue to track enabling the PodSecurityPolicy admission plugin and bundling the RBAC?
Closing due to age. This should be covered by the hardening guide in rancher/docs#2882