Containers-roadmap: [EKS]: EKS Support for IPv6

Created on 14 Apr 2020  ·  23 Comments  ·  Source: aws/containers-roadmap

Hi all,

Looking to get input on your IPv6 requirements as we develop our strategy here. IPv6 for EKS is a broad topic and your feedback will help us prioritize the most requested IPv6 scenarios.

Some topics that would be especially useful to get clarity on:

  • What type of VPC resources do you want to access over IPv6?
  • Are you interested in dual stack (IPv4+IPv6) or do you need IPv6-only (IPv4 disallowed) access?
  • Are you planning to use IPv6 only within your VPC(s), or are you also planning to connect your pods to IPv6 internet?
  • Do you require image pulls from ECR over IPv6?
  • Anything else that is important to you!

We have identified various milestones and they are outlined separately in the initial comments below. Please upvote this issue if you are interested in IPv6 in general, but also add a +1 to any of the milestone comments below that matter most to you.

For anything you feel is not listed as a milestone below, please open a separate feature request issue on the roadmap.

Looking forward to hearing your thoughts here!


All 23 comments

External IPv6 clients communicating with EKS API server
Note: This is separate from API server access from pods within the cluster (via X-ENI), which depends on pod addressing.

External IPv6 clients communicating with pods
Services deployed on EKS are accessible from the IPv6 Internet. This includes Ingress via ALB and ALB Ingress Controller, and Services of type=LoadBalancer via NLB and the AWS cloud provider. Pods may run IPv4.
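For illustration, a minimal sketch of this milestone with the aws-alb-ingress-controller, using its dualstack address-type annotation (the capability added in the PR referenced later in this thread); all names here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # dualstack gives the ALB both A and AAAA records, so IPv6
    # clients can reach pods that themselves only speak IPv4
    alb.ingress.kubernetes.io/ip-address-type: dualstack
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web                            # hypothetical backend Service
                port:
                  number: 80
```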

Pod to external IPv6 clients / Pod to pod (dual-stack)
Pods are able to connect to IPv6 addresses outside the cluster, for both ingress and egress (depending on security group policy). Serves as a good intermediate testing ground for IPv6-only, or for serving IPv6 external users. Every pod would still require an IPv4 address, which is the upstream Kubernetes “dual-stack” feature (https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
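A minimal sketch of what the linked upstream feature looks like on a Service, using the `ipFamilies`/`ipFamilyPolicy` fields from the newer revisions of dual-stack support (names hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-dualstack              # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack  # fall back to single-stack where the cluster can't do both
  ipFamilies:
    - IPv4                         # primary family first
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
```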

External IPv6 clients to node / Node to external IPv6 clients (dual-stack)
Nodes (i.e. EC2 instances) can connect to IPv6 addresses outside the cluster, depending on security group policy. Pods running in the host network namespace can use this IPv6 connectivity even though the rest of Kubernetes is IPv4-only. Anything connecting to kube-proxy NodePorts (including NLB/ALB) can ingress over IPv6 and be proxied to an IPv4 pod. This requires IPv6 to be enabled in the VPC and within the host operating system.
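To make the NodePort path concrete, a plain NodePort Service like the sketch below (names hypothetical) opens a fixed port on every node; if the node itself is dual stack, an external IPv6 client or a dual-stack NLB/ALB can connect to the node's IPv6 address on that port and kube-proxy forwards the traffic to an IPv4 pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport     # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      # with a dual-stack node, this port is reachable on the node's
      # IPv6 address as well, per the scenario described above
      nodePort: 30080
```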

IPv6 only Pods
Requirements we have in mind:

  • VPC CNI plugin supports IPv6 (see the configuration sketch after this section).
  • EKS API server is available via IPv6 to in-cluster clients (either via x-eni or NLB).
  • EKS API server can connect to IPv6 pods/services via x-eni for exec/logs and aggregated API server features.
  • CoreDNS and other add-ons support IPv6.
  • IPv6 CIDRs supported in managed nodes.
  • EKS/Fargate IPv6 support.
  • All customer container workloads need to support IPv6.
  • Nodes (kubelets) require node-local IPv6 to perform container health checks.

Note: Pods might still have access to IPv4 via an egress protocol-translating NAT device, or by using a private (to the pod/node) IPv4 address and NATing to the node’s IPv4 address (using iptables).

With this mode, IPv6-only pods do not consume an IPv4 address and allow you to scale clusters far beyond any IPv4 addressing plan.
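For the first requirement in the list above, a sketch of how IPv6 might be switched on in the VPC CNI, assuming a hypothetical ENABLE_IPv6-style opt-in environment variable on the aws-node DaemonSet (no such flag existed when this was written), expressed as a strategic-merge patch:

```yaml
# ipv6-patch.yaml -- apply with something like:
#   kubectl -n kube-system patch daemonset aws-node --patch-file ipv6-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: aws-node
          env:
            - name: ENABLE_IPv6    # assumed opt-in flag, hypothetical here
              value: "true"
```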

IPv6 only nodes
Worker nodes only get IPv6 addresses. We think this feature will only be useful after IPv6-only pods.

Hello Mike,
Most of these things shouldn't be optional. Anything that communicates over a network must have IPv6 support nowadays. If the intent is to choose priorities, then fine.

I would say that any VPC resource should be accessible via IPv6.
Dual-stack is and will remain the most common mode, but IPv6-only is becoming a reality in IPv6-only datacenters like Facebook's. That said, IPv6-only operation is the kind of feature that can surely be a lower priority than basic IPv6 support.
Pods must certainly be able to connect to the IPv6 internet as well. They must be able to serve data directly or via load balancers, so there are no more blockers, and no more reasons for people not to make their content available over IPv6 as well.

Hello folks,
Do you have any update on this so far?
I often hear excuses for not having IPv6 on something, mostly due to the lack of IPv6 in EKS. Not nice to have these kinds of arguments, really.

Do you believe we can expect it at the beginning of the second half of 2020?

It is due to attitudes like this that the battle for IPv6 to be where it should already be has been going on for 20 years. It still seems that developers (always them) can't get IPv6 into their minds.
They truly believe it's OK to release a product to market in 2020 without IPv6 support, just because "nobody uses it, so it is unimportant, and why should I care?" This is usually the most absurd part. Anyone developing a product in 2020 should be very ashamed to release it without IPv6, regardless of the reasons.

What a pity that we still have to go through these scenarios. It keeps making it easy for people to maintain their chain of excuses for not having IPv6 on something.

Please consider providing a managed NAT64 service so that the pods can run single-stack, IPv6-only (this helps administration and design) while maintaining access to the IPv4 Internet.

Also, it would be extremely useful to provide a /64 routed subnet to an EC2 instance, to avoid the mess of fiddling with multiple ENIs and adding/removing individual addresses (as is the current approach in the AWS CNI plugin).

Another extremely useful feature would be a /48 assignment per VPC (or at least multiple /56s) to support that /64 per EC2 instance.

Another feature I'd love to see is dual-stack support for Kubernetes clusters that process both IPv4 and IPv6 ingress traffic from the Internet (that is: dual-stack at the ingress, single-stack for the rest of the communication within the cluster).

IPv6 support in the Network Load Balancer is obviously the next item on the _IPv6 support_ agenda. :-)
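As a sketch of what that might eventually look like from Kubernetes: a type=LoadBalancer Service asking for a dual-stack NLB via an ip-address-type annotation, by analogy with the ALB controller's knob (the dualstack annotation here is hypothetical, as is every name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nlb                    # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # hypothetical, by analogy with the ALB ingress annotation; NLB
    # dual-stack support did not exist when this comment was written
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
```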

Thank you for considering any of those ideas.

Honestly I'd be delighted with 100% IPv6-only on the inside, with dual-stack LBs on the outside to support those poor people who are still stuck on the olde IPv4 internet of the early 2000s 😛

Fargate IPv6 support would be really handy for a bunch of different reasons.

External IPv6 clients communicating with pods
Services deployed on EKS are accessible from the IPv6 Internet. This includes Ingress via ALB and ALB Ingress Controller

Isn't this already possible today with ALB ingress controller? I saw e.g. this PR https://github.com/kubernetes-sigs/aws-alb-ingress-controller/pull/991 , merged 08/2019.
(not sure, since we're currently using Nginx + ELBv1)

Since CloudFront and/or an ALB Ingress or Service LB could handle any IPv4-only clients, it would be great to have a fully IPv6 backend network option, but this isn't even possible in plain VPC & EC2 yet, never mind EKS.
Since the reverse is also possible (using CloudFront or an ALB to serve IPv6 clients from IPv4-only internals), the main point inside the VPC would be to save on, or not require, IPv4 ranges in VPCs, especially in Transit Gateway, VPC Peering, etc. scenarios, where reusing the same IP range across different VPCs over and over again would cause problems.

How much longer are we going to wait for IPv6 support? This should have been here from day zero, and it has been the excuse people use for not having IPv6 on the public-facing side of many products.

Hello,

Do you have any update about IPv6?

Any updates here? Roadmap?

I recently came across this as I wanted to create my first EKS cluster and - of course - was setting up dual-stack VPC/subnets as a basis. I was literally shocked to discover the complete non-existence of IPv6 support in such a major infrastructure product in the cloud era. I spent half a day trying to find a solution because my brain refused to believe this could be true.

It's unbelievable how people nowadays can still treat IPv6 support in new products so badly, as if it were something "optional, for later, less important or cosmetic". Meanwhile a lot of new platforms, among them some quite large ones that generate a fair amount of traffic, remain without IPv6 support because of this issue.

If it helps get this stuff out the door faster, I'm more than happy to dedicate some after-work time to review, pair, etc. -- just point me in the right direction. I saw this PR a few days ago but haven't seen much traction, so I don't know if it's part of the main IPv6 effort or not. IPv6-only pods would be the biggest win for me, but I'll help wherever I can.

Hey all,

IPv6 is a major priority for the EKS team, but it's also a major project that requires changes in nearly every component of EKS, along with hard dependencies on a few other AWS service feature launches.

We have come a long way in our design since originally opening this issue, and it will look similar to @zadjee's comment above.

At cluster creation time, you will have the option to choose IPv4 or IPv6 as the pod IP address family. If you choose IPv6, pods will only get IPv6 addresses. When new nodes are launched in the cluster, they will each be assigned a /80 IPv6 CIDR block to be used by pods. However, IPv4 connections will still be possible at the boundaries of the cluster. With dual stack ALB (available now) and NLB (coming soon), you can accept IPv4 traffic that will be routed to pods. For egress, we will use a NAT64 solution that will allow pods to communicate with IPv4 services outside the cluster.
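For illustration only, a minimal sketch of what choosing the pod IP family at cluster creation might look like, assuming an eksctl-style ClusterConfig grows an ipFamily field (hypothetical at the time of this comment):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ipv6-demo                  # hypothetical cluster name
  region: us-west-2
kubernetesNetworkConfig:
  ipFamily: IPv6                   # assumed opt-in: pods get only IPv6 addresses
managedNodeGroups:
  - name: ng-1                     # each node would carry a /80 IPv6 block for pods
    instanceType: m5.large
    desiredCapacity: 2
```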

This design solves all pod density and IP exhaustion challenges faced today, without requiring all IPv4 services in your environment to first be migrated to IPv6.

Again, this design requires some features to be first launched from other AWS service teams, so no timeline to share right now, but it is a project we are actively working on.

-Mike

If I understand the last comment correctly, the nodes themselves, i.e. for NodePort Services, will be unaffected by this, as they will depend on the Node's IPv4/IPv6/Dual-Stack setup and can forward to the Service's ClusterIP in whatever the chosen Pod IP address family is? I'm assuming ClusterIP will match PodIP address family.

I ask mostly because our use-case has lots of UDP NodePort Services (Agones-managed), but partly because it seems that if I'm right, NLB instance-routing (i.e. without #981) talks to NodePort Services and so should be isolated from this change.

I wonder what the outcome would have been if this had been considered at the beginning of the project.
It's hard to start a new project in recent years and not have IPv6 as a mandatory feature from day zero.
And it gets even worse when it depends on other projects, which depend on yet others.

When people who develop platforms on EKS are asked why they don't have IPv6 support, their response is "because of EKS"; and it seems there are still other dependencies in the chain, which prevents any timeline or commitment.

Yes, Service ClusterIP will be IPv6 if pods are IPv6. Nodes will be dual stack and handle the translation if needed.
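To illustrate that answer: in an IPv6-family cluster, a Service would be declared single-stack IPv6 and get its ClusterIP from the IPv6 service range, e.g. with the upstream dual-stack fields (names hypothetical, UDP chosen to match the Agones-style use case above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-server      # hypothetical name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6               # ClusterIP allocated from the IPv6 service CIDR
  selector:
    app: game-server
  ports:
    - port: 7777
      protocol: UDP
```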
