Containers-roadmap: [EKS] Increased pod density on smaller instance types

Created on 30 Jan 2019 · 19 comments · Source: aws/containers-roadmap

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
All instance types using the VPC CNI plugin should support at least the Kubernetes recommended pods per node limits.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Today, the max number of pods that can run on worker nodes using the VPC CNI plugin is limited by the number of ENIs and secondary IPv4 addresses the instance supports. This number is lower if you are using CNI custom networking, which removes the primary ENI for use by pods. VPC CNI should support at least the Kubernetes recommended pods per node thresholds, regardless of networking mode. Not supporting these maximums means nodes may run out of IP addresses before CPU/memory is fully utilized.
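For readers less familiar with this limit, here is a minimal sketch of how the ENI-based max-pods value is typically derived, following the formula and custom-networking adjustment described above (the function name and layout are purely illustrative):

```python
# Illustrative only: max pods derived from ENI limits, as described above.
# Each ENI reserves one primary IP for itself, and the "+ 2" leaves room for
# pods that use host networking and therefore consume no VPC IP.
def eni_max_pods(enis: int, ips_per_eni: int, custom_networking: bool = False) -> int:
    usable_enis = enis - 1 if custom_networking else enis  # custom networking gives up the primary ENI
    return usable_enis * (ips_per_eni - 1) + 2
```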

Are you currently working around this issue?
Using larger instance types, or adding more nodes to a cluster that aren't fully utilized.

Additional context
Take the m5.2xlarge, for example, which has 8 vCPUs. Based on the Kubernetes recommended limit of min(110, 10 * #cores) pods per node, this instance type should support 80 pods. However, when using custom networking today, it only supports 44 pods.
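Plugging the m5.2xlarge numbers into the formula sketched above (4 ENIs × 15 IPv4 addresses each, 8 vCPUs; per-instance values assumed from the EC2 ENI limits table) shows the gap being described:

```python
recommended       = min(110, 10 * 8)        # Kubernetes guidance: 80 pods
default_cni       = 4 * (15 - 1) + 2        # VPC CNI default:     58 pods
custom_networking = (4 - 1) * (15 - 1) + 2  # primary ENI removed:  44 pods
```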


Most helpful comment

We are working on integrating with an upcoming VPC feature that will allow many more IP addresses to be attached per instance type. For example, a t3.medium will go from allowing 15 IPs per instance, to 240, a 1500% increase. No timeline to share, but it is a high priority for the team.

All 19 comments

@tabern could you please elaborate a bit what this feature brings?

Right now the number of pods on a single node is limited by the --max-pods flag in kubelet, which for EKS is calculated based on the max number of IP addresses the instance can have. This comes from the AWS CNI driver logic of providing an IP address per pod from the VPC subnet. So for r4.16xl it is 737 pods.
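For reference, the 737 figure appears to follow from the same ENI-based formula applied to the r4.16xlarge EC2 limits (15 ENIs × 50 IPv4 addresses each); a sketch, not an official calculation:

```python
max_pods_r4_16xl = 15 * (50 - 1) + 2  # = 737
```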

which for EKS is calculated based on the max number of IP addresses the instance can have

That's exactly the problem. What if we want to run 30 very small pods on a t.small?

@max-rocket-internet gotcha. Does it mean instances will get more IPs/ENIs, or are changes coming to the CNI?

It means we need to run a different CNI that is not limited by the number of IPs. Currently it's more or less a DIY endeavour, but it would be great to have a supported CNI from AWS for this use case 🙂

Yeah, running weave-net (and overriding the pods-per-node limitations) isn't much of an additional maintenance burden but it would have been nice to have that available by default.

Any idea how exactly you are going to proceed with this one?
It seems very similar to #71

Sorry it's been a while without a lot of information. We're committed to enabling this feature and will be wrapping it into the next-generation VPC CNI plugin.

Please let us know what you think on https://github.com/aws/containers-roadmap/issues/398

The comment by @mikestef9 on #398 refers to this issue for updates regarding pod density specifically. Since there has been no update on this issue in over a year, could someone from the EKS team give us an update?

We are working on integrating with an upcoming VPC feature that will allow many more IP addresses to be attached per instance type. For example, a t3.medium will go from allowing 15 IPs per instance, to 240, a 1500% increase. No timeline to share, but it is a high priority for the team.

@mikestef9 hi! Will pod density be increased for bigger instance types as well?
This is very important because we are considering switching to a different CNI plugin, but if you will be increasing the IP address count any time soon we will stay with the AWS CNI :)

It will be a 1500% increase in IP addresses on every instance type. However, I don't feel that matters on larger instance types. For example, a c5.4xl today supports 234 IP addresses for pods. Which particular instance type are you using?

We are using m5.xlarge and still have enough resources to schedule additional pods, but we are out of free IPs.

Got it. I consider "smaller" to mean any instance type 2xl and below. In this case, m5.xlarge will go from supporting 56 IPs to 896, which will be more than enough to consume all instance resources with pods.
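A rough sketch of the arithmetic behind those numbers, under the assumption that the upcoming feature attaches a /28 IPv4 prefix (16 addresses) to each secondary IP slot on an ENI; this is an inference from the quoted figures, not a confirmed design:

```python
# Pod-usable IPs today vs. with the assumed /28-per-slot feature.
def pod_ips(enis: int, ips_per_eni: int, prefix_mode: bool = False) -> int:
    secondary_slots = enis * (ips_per_eni - 1)  # each ENI keeps one primary IP
    return secondary_slots * 16 if prefix_mode else secondary_slots

print(pod_ips(3, 6), pod_ips(3, 6, True))    # t3.medium:  15 -> 240
print(pod_ips(4, 15), pod_ips(4, 15, True))  # m5.xlarge:  56 -> 896
```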

Pods can be very very small 😉 But nevertheless, this is a great step

Just to get clarity: this is 16x the IPs while still using IPv4? Whereas longer term, for huge numbers of IPs etc., it's expected that EKS will shift to IPv6 instead?

Exactly. The same upcoming EC2/VPC feature that will allow us to increase IPv4s per instance, will also allow us to allocate a /80 IPv6 address block per instance. That's what we will leverage for IPv6 support, which is a top priority for us in 2021.

We are working on integrating with an upcoming VPC feature that will allow many more IP addresses to be attached per instance type. For example, a t3.medium will go from allowing 15 IPs per instance, to 240, a 1500% increase. No timeline to share, but it is a high priority for the team.

@mikestef9 Sounds awesome. I'm currently evaluating EKS and the current pod limitation is a blocker for our workload. Could you please share an approximate release date? Thanks.

Exactly. The same upcoming EC2/VPC feature that will allow us to increase IPv4s per instance, will also allow us to allocate a /80 IPv6 address block per instance. That's what we will leverage for IPv6 support, which is a top priority for us in 2021.

I'm currently evaluating EKS and the current pod limitation is a blocker for our workload. Could you please share an approximate release date? Thanks.

Unfortunately, there are few things in life more certain than the fact that AWS will never ever ever ever ever ever share an approximate release date for a future feature. I'm pretty sure pigs will fly and the Universe will cease to exist before we see it happen.
