Tell us about your request
Currently it does not seem possible to attach workers in different VPCs to the EKS control plane.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We currently isolate development environments in separate VPCs, and are looking to adopt EKS. The cleanest pattern for us would be to stand up one EKS cluster that can embed workers into each of the development environment VPCs.
Are you currently working around this issue?
Currently we would have to stand up separate EKS clusters per environment.
@tomhaynes I'm thinking about a similar thing. Can you just confirm you also tried VPC peering and it still didn't work? Thanks!
Hi @orkaa - yeah our VPCs are peered. I found that worker nodes were able to join fine, and EKS is able to schedule pods to them.
I realised the issue when I found that `kubectl logs` times out on these pods - it seems the functionality is only half there at the moment.
Makes sense. Thanks for the confirmation @tomhaynes 👍
@tomhaynes & @orkaa this makes sense. The reason for this is that the control plane adds a cross-account ENI into your VPC during the provisioning process, which `kubectl logs`, `exec`, and `port-forward` use.
In your dev environments, do you need to have your pods isolated at the VPC level? We have a way of extending VPC CIDRs using the AWS VPC CNI, which would allow you to allocate non-RFC1918 ranges that can have their own security groups and that you might be able to treat as isolated environments - https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html
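For reference, a minimal sketch of what that custom-networking setup might look like (the AZ name, subnet and security group IDs below are only placeholders):

```sh
# Rough sketch only - the AZ name, subnet and security group IDs are placeholders.

# 1. Turn on custom networking in the VPC CNI:
kubectl -n kube-system set env daemonset aws-node AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true

# 2. Create one ENIConfig per AZ pointing at the extended-CIDR subnet
#    (and security group) that pods in that AZ should use:
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a
spec:
  subnet: subnet-0123456789abcdef0
  securityGroups:
    - sg-0123456789abcdef0
EOF

# 3. Annotate each node with the ENIConfig it should use:
kubectl annotate node <node-name> k8s.amazonaws.com/eniConfig=us-west-2a
```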
Interested in hearing your thoughts…
@christopherhein so a potential way to add this functionality could be the ability to add additional ENIs into other VPCs?
Our current environment pattern is to have one infra VPC that contains support / orchestration functionality. This is peered to multiple app VPCs that contain separate development environments.
Ideally we would plug EKS into this setup by running the EKS cluster in the infra VPC, and embedding worker nodes into each app VPC.
This would solve another issue: when EKS is not available in a region, we could launch the control plane in a supported region and the worker nodes in another region, using a peered VPC.
This would give your customers stuck in us-west-1 an option to use EKS ;)
This will help us adopt the good practice of having the API servers in a central management VPC, separating the cluster endpoints from the worker nodes' VPC/account where the apps are running.
Would like this functionality in order to be able to have a cluster spanning multiple regions, where pod affinities could be used to pin workloads to specific regions, etc. Also, from a cost/overhead standpoint, having to maintain multiple EKS clusters brings little benefit.
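If a cluster could span regions like this, pinning a workload to a particular region would presumably just be an ordinary node-affinity rule. A rough sketch, assuming nodes carry the standard region topology label (`topology.kubernetes.io/region` on newer clusters, `failure-domain.beta.kubernetes.io/region` on older ones); the region value is only an example:

```sh
# Illustrative only - assumes the nodes expose a region topology label.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: region-pinned-example
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - us-west-2
  containers:
    - name: app
      image: nginx
EOF
```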
@yoda Sorry for bumping an old issue, but have you managed to set up a multi-region cluster in the end? I'm trying to do the same. I have a node trying to join from a peered VPC in another region, but the control plane rejects all requests with Unauthorized, even though I'm using the same role as other nodes that work fine. It looks like kubectl tokens generated in another region are somehow not valid.
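For reference, this is roughly how I've been poking at it so far (the cluster name and region below are placeholders):

```sh
# Placeholders throughout - cluster name and region are examples.

# Check that the node's instance role is actually mapped in aws-auth:
kubectl -n kube-system get configmap aws-auth -o yaml

# Generate a token explicitly against the cluster's home region and compare
# it with what kubectl in the other region produces:
aws eks get-token --cluster-name my-cluster --region eu-west-1
```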
Hello there!
Any progress on this issue/request? I stumbled upon a situation where this would fit very well in our PCI environment, and I'd love to see it working.
Looking for similar functionality for a PCI environment without having to spin up two EKS clusters. Ideally, we would have a single EKS control plane and node groups that are each assigned to subnets in their own VPC.
For example:
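A rough sketch of the kind of config we have in mind - to be clear, nothing like a per-node-group `vpc` field exists in eksctl or EKS today, and every name and ID below is made up:

```sh
# Purely illustrative - the per-node-group "vpc" field is hypothetical,
# and all names and IDs are made up.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: pci-cluster
  region: us-east-1
vpc:
  id: vpc-infra0000000000000        # control-plane / "infra" VPC
nodeGroups:
  - name: cde-workers
    vpc: vpc-pci000000000000000     # hypothetical: peered PCI VPC
    subnets: [subnet-pci0000000000000]
  - name: non-cde-workers
    vpc: vpc-apps00000000000000     # hypothetical: peered app VPC
    subnets: [subnet-app0000000000000]
EOF
```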