Kops: Securing the default configuration of kops-provisioned clusters - IAM and API tokens

Created on 20 Jan 2017 · 7 comments · Source: kubernetes/kops

Hi all,

as my colleagues and I started to evaluate Kubernetes (with success, largely thanks to kops), we found critical security issues in the default configuration of kops-provisioned clusters.

  • Access to IAM role credentials (with broad permissions) is available from every Pod, as already mentioned in #379 #376 #528 #1100 #363
  • An API token is mounted in every Pod (as far as I understand, this makes it possible to manage Kubernetes from within every running Pod)

My team and I would like to address these issues, preferably in a way that can make it into kops. That is why I would like to ask for suggestions on how you would like to see them solved. I understand that making kops-provisioned clusters secure enough to run untrusted code is a big task.

My initial take on this:

  1. Document the current behavior. I believe it is crucial to make the current state, and the progress being made, clear to users and early adopters. I would like to start working on this once I fully understand the current setup.
  2. Block access to IAM credentials by default, at least from Pods running in the 'default' namespace (or in every namespace other than kube-system).
  3. Do not mount the API token by default (see the sketch after this list).
  4. Further reduce the permissions granted to the IAM roles.
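
For 3), a minimal sketch of what opting out could look like per Pod, assuming a Kubernetes version that supports the automountServiceAccountToken field (the Pod name and image below are placeholders):

    # Sketch: opt a Pod out of the default service-account token mount.
    # Requires a Kubernetes version with automountServiceAccountToken;
    # the field can also be set on the ServiceAccount itself.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-token-example        # hypothetical name
    spec:
      automountServiceAccountToken: false
      containers:
      - name: app
        image: nginx                # placeholder image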

I'd like to get your input on how we can achieve 2) and 3).

Kube2iam seems to be the go-to tool for people wishing to manage IAM roles for Pods. Could we install it by default?
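
For context, a sketch of how kube2iam assigns an IAM role to a Pod - via an annotation on the Pod (the name, image, and role below are placeholders):

    # Sketch: kube2iam reads this annotation to decide which IAM role to
    # assume when answering the Pod's metadata requests.
    apiVersion: v1
    kind: Pod
    metadata:
      name: scoped-example                    # hypothetical name
      annotations:
        iam.amazonaws.com/role: my-pod-role   # placeholder role name or ARN
    spec:
      containers:
      - name: app
        image: nginx                          # placeholder image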

Or should we just block 169.254.169.254 by default? If so, how can we achieve that in a clean, configurable way (with the ability to turn it off when editing the cluster)? Could this be done by protokube itself? Once I get directions from you, I can start working on it.
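
A minimal sketch of what blocking could look like, assuming Pods are bridged onto cbr0 (the interface name depends on the networking setup, and hostNetwork Pods bypass this rule entirely):

    # Sketch: drop forwarded Pod traffic to the EC2 metadata endpoint.
    # cbr0 is an assumption; adjust to the cluster's bridge interface.
    iptables -I FORWARD -i cbr0 -d 169.254.169.254 -p tcp --dport 80 -j DROP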

I'd also like to get a better understanding of why Kubernetes mounts the default API token inside every Pod, and of how we can disable that.

Regarding 4): I believe we should start by scoping EC2 permissions to resources tagged with KubernetesCluster=$clustername. I'll play with it in the next few days.
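
A sketch of the kind of IAM statement this could produce (the actions and cluster name are illustrative, and not every EC2 action supports resource-level conditions like this):

    {
      "Effect": "Allow",
      "Action": ["ec2:AttachVolume", "ec2:DetachVolume"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/KubernetesCluster": "mycluster.example.com"
        }
      }
    }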

Looking forward to your input.

area/security lifecycle/rotten

All 7 comments

Hi - it would be great to address this. As a suggestion, you might find it better to break this into 4 issues (you can still keep this as an "overall" issue):

  1. Yes, agreed. Having clear docs about the realistic situation is important. Until we have RBAC I don't think k8s is really safe for untrusted code (RBAC is planned to come out of alpha in 1.6), and the deeper problem is that some things may not have been designed for hostile code.

  2. I really want to get kube2iam installed as an addon, and then probably by default - or something equivalent, but I have heard good things about kube2iam. The only downside of kube2iam I have heard of is that it is very important to make sure the redirect rule is set up, but kops can help there (see the sketch after this list). If it starts as an addon, we can make it part of the default installation later, and if we do find any problems we can contribute fixes. If it goes well we can even see if the author would be willing to put it into incubation as an official part of k8s.

     The thing I like about kube2iam is that it both fixes a security issue (access to the IAM token) _and_ adds functionality: being able to give pods permissions. In addition to the kops issues you mentioned, there is an upstream k8s issue: https://github.com/kubernetes/kubernetes/issues/23580

     Blocking 169.254.169.254 is a good suggestion - there are 3 options that I can think of: allow, block, and forward-to-kube2iam. I don't know whether we could just always install something like kube2iam, maybe a simpler version that always forwards or always blocks. This is something we should probably discuss in sig-aws, because we could establish a standard port regardless of installation method. One thing to be careful of is that we can't stop kube-controller-manager from talking to AWS, particularly during bootstrapping (I think)!

  3. Yes. This is really a k8s issue, i.e. I don't think kops is in the flow at all here, so we have to fix it in k8s. I think this is the primary issue: https://github.com/kubernetes/kubernetes/issues/16779

  4. Agreed - and any contributions are welcome. We do now build the IAM policy in code ( https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go ) precisely because we want to make it more locked down. I agree that if we can scope based on tags, that would be great. I think that not all permissions supported it, though, last time I checked :-( If it does work, we might want to change the tag we match on to allow sharing, but we can have that discussion if it turns out to be possible!
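
Regarding the redirect rule in 2., this is roughly the NAT rule involved - kube2iam can install it itself with --iptables=true, and the interface and destination below are placeholders for one particular setup:

    # Sketch: redirect Pod metadata traffic to a kube2iam listener.
    # cbr0 and 10.0.1.53:8181 are assumptions about this cluster's layout.
    iptables -t nat -A PREROUTING -i cbr0 -d 169.254.169.254 -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.1.53:8181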

I'm rolling out a new cluster using ABAC/RBAC and kube2iam to ensure all pods default to an empty IAM policy. Just passing on some info from my recent work getting this configuration up and running: the iptables rule created by kube2iam when using the --iptables=true and --host-interface=cbr0 flags is only applied to the specified interface, so in this case something like kube-controller-manager is not affected, because it uses hostNetwork=true, which keeps its traffic off the cbr0 interface. We could also start by not running kube2iam on the masters at all - just get it running on the nodes to improve their security.

 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  cbr0   *       0.0.0.0/0            169.254.169.254      tcp dpt:80 to:10.0.1.53:8181

  1. Agreed on all fronts.

  2. Implementing kube2iam or kiam to gate access to EC2's metadata API is my suggested default, with the option to not use it if needed.

  3. Enabling RBAC should remove the permissions on default-namespace tokens (see the sketch after this list).

  4. Refer to issue #1873, but I suggest that the defaults should be the minimal set, with the ability for the user to override them during installation/configuration should certain features be needed.
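
For 3., a sketch of what enabling RBAC looks like in the kops cluster spec, assuming a kops/Kubernetes version where RBAC is available (applied via kops edit cluster):

    # Sketch: switch the cluster's authorization mode to RBAC.
    spec:
      authorization:
        rbac: {}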

I am removing P0, as this is not a show-stopper.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close

