Kops: Implement authorization-mode for kube-apiserver

Created on 15 Aug 2016 · 14 comments · Source: kubernetes/kops

The current implementation does not start kube-apiserver with an authorization-mode specified. This appears to give all accounts full access to the k8s API, including the default service account that each container gets. A more secure configuration would be to use ABAC with a policy file such as:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubecfg", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kube-proxy", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kube", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:controller_manager", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:dns", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:logging", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:monitoring", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:scheduler", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:kube-system:default", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}

which would allow only core system services to access the API. Administrators could add to this file as needed for their specific implementation. To duplicate the existing behaviour, the following line could also be added to allow the default service account full access:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:default", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}

These policies may need tweaking because I'm not 100% sure about all of the accounts. For instance, is the account system:dns used or does the kube-dns pod use the default service account?
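
For reference, a minimal sketch of how such a policy file would be wired into kube-apiserver (the file path below is illustrative, not something kops configures today):

kube-apiserver \
  --authorization-mode=ABAC \
  --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl \
  <existing flags unchanged>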

area/security flag-map-request lifecycle/rotten

All 14 comments

It would also be great to be able to set additional authorization modes, like RBAC.

I'm not sure what happens at the moment if you just modify the live apiserver pod?

@tazjin you interested in taking on some of this security stuff???

@jkemp01 security is going to be a focus for us :)

@billyshambrook yo you are doing this!!!

So https://github.com/kubernetes/kops/pull/1357 will map authorization-mode and authorization-rbac-super-user

Are there more flags that need doing? If so please open a new issue (so I'm sure to see it!)

You can manually edit the flags - for kubelet it is the systemd manifest, for other components it should be in /etc/kubernetes/manifests. But if the instance is recreated (e.g. kops rolling-update), the state is rebuilt, so your changes will be lost. Best to keep it in the kops state, but that requires us to map the flags.
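
As an illustration only (the manifest file name, binary path, and policy-file path are assumptions and vary by kops version), a manual edit of the apiserver static pod manifest would mean adding the flags to the container command:

# /etc/kubernetes/manifests/kube-apiserver.manifest (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - /usr/local/bin/kube-apiserver
    - --authorization-mode=ABAC
    - --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
    # ...remaining flags as generated by kops...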

FYI, the reason I didn't just map all the flags is because I wanted to see what was happening with componentconfig. But I think realistically we need to start mapping more flags. I'd like to do it on-request, but I'll basically just do it (or accept a PR that does it).

Moving to 1.5.1 - we have the flags in kops 1.5.0, though it is up to the user to put them all together in a way that works. Hopefully we can get an example / drop-in configuration in a post 1.5.0 kops version!

@justinsb Can you please point me to the docs on how to configure what is available today (v1.5.1)?

yep, I also cannot find related info in the docs.

@pluttrell @ffjia RBAC example ...

$ kops edit cluster \
  --name=<CLUSTER_NAME> \
  --state=s3://<KOPS_STATE_STORE_BUCKET>
spec:
+  kubeAPIServer:
+    authorizationMode: RBAC,AlwaysAllow
+    authorizationRbacSuperUser: admin
+    runtimeConfig:
+      rbac.authorization.k8s.io/v1alpha1: "true"
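
To apply the change after saving the spec, the usual kops cycle would presumably follow (the comment above doesn't spell this out):

$ kops update cluster --name=<CLUSTER_NAME> --state=s3://<KOPS_STATE_STORE_BUCKET> --yes
$ kops rolling-update cluster --name=<CLUSTER_NAME> --state=s3://<KOPS_STATE_STORE_BUCKET> --yes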

@itskingori thanks a lot, mate.

Having AlwaysAllow will allow any user to auth - https://kubernetes.io/docs/admin/authorization/

If multiple modes are provided the set is unioned, and only a single authorizer is required to admit the action. This means that with RBAC,AlwaysAllow in the flag, the AlwaysAllow authorizer admits every request and the RBAC policies are never actually enforced.
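
A stricter variant of the example above (a sketch, untested here) would drop AlwaysAllow so that RBAC is the only authorizer consulted:

spec:
  kubeAPIServer:
    authorizationMode: RBAC
    authorizationRbacSuperUser: admin
    runtimeConfig:
      rbac.authorization.k8s.io/v1alpha1: "true"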

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
