Kops: Audit Security Groups for kops

Created on 28 Oct 2016 · 10 comments · Source: kubernetes/kops

  • we need to determine whether the SGs kops creates are sufficient; they're pretty good, let's make them awesome

Label: area/security

All 10 comments

Perhaps we allow overriding SG rules so the user can define their own _awesome_ SG?

Have another issue for CRUD on SG :)

Pointer? I feel like these two are related - and might be the same work

https://github.com/kubernetes/kops/issues/749 — that one is broken because of the SGs.

Not a deep enough sample size, but a good start (file x contains VPC flow log output):

awk '$4 != "172.20.44.6" && $6 != "443" && $7 != "443" && $13 != "REJECT" {print $0}' x
2 342113354287 eni-af3265ee 199.102.46.80 172.20.44.6 123 123 17 1 76 1478220105 1478220165 ACCEPT OK

Those awk filters translate to:

  1. source not equal to the master's IP,
  2. source and destination port not 443,
  3. ignore flows the SG already rejected.
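The same filter can be sketched in Python, assuming the default VPC flow log field order (version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status); `172.20.44.6` is the master's private IP from the sample above:

```python
MASTER_IP = "172.20.44.6"  # master's private IP, taken from the sample

def interesting(record: str) -> bool:
    """True for flows not already explained: source is not the master,
    neither port is 443, and the SG did not already reject the flow."""
    f = record.split()
    return (
        f[3] != MASTER_IP      # srcaddr is not the master ($4 in awk)
        and f[5] != "443"      # srcport is not 443 ($6)
        and f[6] != "443"      # dstport is not 443 ($7)
        and f[12] != "REJECT"  # action was not already REJECT ($13)
    )

# The NTP flow from the sample output passes the filter:
sample = ("2 342113354287 eni-af3265ee 199.102.46.80 172.20.44.6 "
          "123 123 17 1 76 1478220105 1478220165 ACCEPT OK")
print(interesting(sample))  # True
```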

I'll try to dig deeper to prove that assertion, but early results indicate the master only needs to be accessed by:

  1. ADMIN_CIDR over port 443
  2. ADMIN_CIDR over port 22
  3. The node_sg(s) over port 443

Everything else can be turned off.

In my case, the pods that need to run on the master 1) are never interacted with (logging and monitoring push out data) and 2) don't offer internal services anyway.

Further, to clarify: The master should be able to touch the nodes on all ports. But the nodes should only need to talk to the master on port 443... not all traffic.
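The proposed minimal master ingress policy can be written down as data plus a check; `"admin"` and `"nodes"` here are placeholder source labels standing in for ADMIN_CIDR and the node SG(s), not actual kops identifiers:

```python
# Ingress tuples (source, protocol, port) the master must accept;
# everything else gets turned off.
MASTER_INGRESS = [
    ("admin", "tcp", 443),  # ADMIN_CIDR -> master API
    ("admin", "tcp", 22),   # ADMIN_CIDR -> master SSH
    ("nodes", "tcp", 443),  # node SG(s) -> master API
]

def allowed(source: str, proto: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (source, proto, port) in MASTER_INGRESS

print(allowed("nodes", "tcp", 443))    # True
print(allowed("nodes", "tcp", 10250))  # False: nodes don't need all traffic
```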

If Heapster is a concern, we would also need to allow node -> master over port 4194/tcp (`--cadvisor-port`).

Run on the master:

# lsof -Pni | grep LISTEN | grep -v rpc
sshd        549   root    3u  IPv4    13723      0t0  TCP *:22 (LISTEN)
sshd        549   root    4u  IPv6    13725      0t0  TCP *:22 (LISTEN)
kubelet     909   root    7u  IPv4    14938      0t0  TCP 127.0.0.1:10248 (LISTEN)
kubelet     909   root    8u  IPv6    14941      0t0  TCP *:10255 (LISTEN)
kubelet     909   root    9u  IPv6    14943      0t0  TCP *:10250 (LISTEN)
kubelet     909   root   10u  IPv6    15964      0t0  TCP *:4194 (LISTEN)
kube-prox  1347   root    6u  IPv4    17382      0t0  TCP 127.0.0.1:10249 (LISTEN)
kube-sche  1424   root    3u  IPv6    18076      0t0  TCP *:10251 (LISTEN)
kube-cont  1553   root    3u  IPv6    19056      0t0  TCP *:10252 (LISTEN)
etcd       1659   root    3u  IPv6    19887      0t0  TCP *:2380 (LISTEN)
etcd       1659   root    5u  IPv6    19888      0t0  TCP *:4001 (LISTEN)
etcd       1691   root    3u  IPv6    20056      0t0  TCP *:2381 (LISTEN)
etcd       1691   root    5u  IPv6    20057      0t0  TCP *:4002 (LISTEN)
kube-apis  1724   root   47u  IPv4    20315      0t0  TCP 127.0.0.1:8080 (LISTEN)
kube-apis  1724   root   48u  IPv6    20849      0t0  TCP *:443 (LISTEN)

I believe we can ignore the 10xxx status ports. That leaves 22, 4194, and 443.
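That reduction can be sketched by parsing the lsof listing above: drop loopback-only listeners (unreachable from outside) and the 10xxx status ports. The etcd ports survive the filter but are master-to-master traffic, not something the nodes need:

```python
# Subset of the lsof output above (columns: command, pid, user, fd,
# type, device, size, node, name, state).
LSOF = """\
sshd        549   root    3u  IPv4    13723      0t0  TCP *:22 (LISTEN)
kubelet     909   root    7u  IPv4    14938      0t0  TCP 127.0.0.1:10248 (LISTEN)
kubelet     909   root    8u  IPv6    14941      0t0  TCP *:10255 (LISTEN)
kubelet     909   root    9u  IPv6    14943      0t0  TCP *:10250 (LISTEN)
kubelet     909   root   10u  IPv6    15964      0t0  TCP *:4194 (LISTEN)
kube-prox  1347   root    6u  IPv4    17382      0t0  TCP 127.0.0.1:10249 (LISTEN)
kube-sche  1424   root    3u  IPv6    18076      0t0  TCP *:10251 (LISTEN)
kube-cont  1553   root    3u  IPv6    19056      0t0  TCP *:10252 (LISTEN)
etcd       1659   root    3u  IPv6    19887      0t0  TCP *:2380 (LISTEN)
etcd       1659   root    5u  IPv6    19888      0t0  TCP *:4001 (LISTEN)
etcd       1691   root    3u  IPv6    20056      0t0  TCP *:2381 (LISTEN)
etcd       1691   root    5u  IPv6    20057      0t0  TCP *:4002 (LISTEN)
kube-apis  1724   root   47u  IPv4    20315      0t0  TCP 127.0.0.1:8080 (LISTEN)
kube-apis  1724   root   48u  IPv6    20849      0t0  TCP *:443 (LISTEN)
"""

external = set()
for line in LSOF.splitlines():
    addr = line.split()[8]                # e.g. "*:443" or "127.0.0.1:8080"
    host, _, port = addr.rpartition(":")
    if host == "127.0.0.1":
        continue                          # loopback-only, unreachable remotely
    if port.startswith("10") and len(port) == 5:
        continue                          # 10xxx status ports: ignore per above
    external.add(int(port))

print(sorted(external))  # [22, 443, 2380, 2381, 4001, 4002, 4194]
```

Of those, only 22, 443, and 4194 need SG rules beyond master-to-master.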

We have locked down the ports in 1.5.0 beta1.

We open etcd for calico, but if you're not using calico, the nodes should only be able to reach the masters on ports 443 and 4194.

Closing, but I'm going to open a few 1.5.1 issues to tighten this further: #1669 #1670 and #1671
