User Story
As an operator, I would like to run Cluster API's components on a Kubernetes cluster that has the PodSecurityPolicy admission controller turned on.
Detailed Description
Right now the various components of Cluster API just use the defaults -- default security context, default service account in each namespace, and no PSP setup for those default service accounts. Unfortunately, PSP doesn't accept blanket defaults: each service account must explicitly reference one of the Pod Security Policies installed on the cluster (which is difficult since some distributions call them privileged and restricted while others use different names), or the project must ship an accompanying policy of its own. For that reason, you essentially need to recreate the privileged or restricted policies, or provide a means to pass in the policy name.
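To make the "ship an accompanying policy" option concrete, here is a minimal sketch of what that could look like: a restricted policy plus the RBAC `use` binding for the default service account. All names (`capi-restricted`, `capi-psp-user`, the `capi-system` namespace) are illustrative assumptions, not anything Cluster API ships today.

```yaml
# Hypothetical restricted policy; names and namespace are illustrative.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: capi-restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "projected", "downwardAPI"]
---
# A service account can only use a PSP it is granted the "use" verb on.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capi-psp-user
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["capi-restricted"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capi-psp-user
  namespace: capi-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capi-psp-user
subjects:
  - kind: ServiceAccount
    name: default
    namespace: capi-system
```

Alternatively, exposing a policy-name flag would let operators point the binding at their distribution's existing `restricted` policy instead.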
/kind feature
/cc
/help
@vincepri:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v0.3.x
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Not sure we want to add PSP, since it is being deprecated, but I think it would be good to keep this open to track equivalent functionality.
/lifecycle frozen
While PSP might be a stretch goal, I would say an MVP, and probably the most critical piece, would be to at least set a security context for all of the controllers and make sure they're running as non-root.
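For reference, a pod-level security context along those lines might look like the sketch below. The field values (UID 65532, container name `manager`) are assumptions for illustration, not Cluster API's actual manifests.

```yaml
# Illustrative securityContext for a controller Deployment pod template.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532   # numeric UID so the kubelet can verify non-root
      containers:
        - name: manager
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
```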
@voor We're already running as non-root https://github.com/kubernetes-sigs/cluster-api/blob/master/Dockerfile#L56 in all our controllers
/milestone v0.4.0
@detiber To break this up into smaller issues / actionable items
BTW, containers are still running as root. On a cluster with PSPs enabled, I get the following errors when trying to run clusterctl init:
Warning Failed 10m (x5 over 11m) kubelet Error: container has runAsNonRoot and image has non-numeric user (nobody), cannot verify user is non-root
Warning Failed 10m (x6 over 11m) kubelet Error: container has runAsNonRoot and image will run as root
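The first error happens because `runAsNonRoot` cannot be verified against a symbolic user name in the image; the kubelet needs a numeric UID. A sketch of the Dockerfile-side fix, assuming a distroless base image (where 65532 is the `nonroot` user) and a binary named `manager`:

```dockerfile
# Sketch only; base image and binary name are assumptions.
FROM gcr.io/distroless/static:nonroot
COPY manager /manager
# Numeric UID:GID instead of a name like "nobody", so the kubelet
# can verify the container will not run as root.
USER 65532:65532
ENTRYPOINT ["/manager"]
```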
@invidian That seems like something we should fix, could you open a separate issue?
@vincepri sure. Reported here: https://github.com/kubernetes-sigs/cluster-api/issues/4046