I just tried out the new --managed=true flag but got an error message "--node-private-networking is not supported for Managed Nodegroups (--managed=true)". Is there any reason why managed nodes don't work in conjunction with --node-private-networking? Is this an AWS or eksctl limitation?
I am able to create managedNodeGroups in private subnets manually (including the usual NAT-GW) without any issues, so I am thinking this is either a bug or not yet implemented, as per this update:
https://eksctl.io/usage/eks-managed-nodegroups/
Noting:
Same issue. I'm a little confused, however; I wonder if either of you could help me understand: when I create a managed nodegroup via the console or Terraform, the instances always get assigned a public IP address even if they are in private subnets. Do you face this issue?
The private networking feature (nodegroup.privateNetworking: true) in eksctl works by launching the nodes in private subnets, and ensuring the nodes do not get a public IP by setting the NetworkInterfaces.AssociatePublicIpAddress field to false in the EC2 launch template.
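For illustration, the relevant launch-template setting looks roughly like the CloudFormation fragment below. The resource name and surrounding structure are a simplified sketch, not eksctl's exact output; only the `AssociatePublicIpAddress` field is the point here:

```
NodeGroupLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      NetworkInterfaces:
        # This is the field eksctl sets for unmanaged nodegroups
        # with privateNetworking: true
        - DeviceIndex: 0
          AssociatePublicIpAddress: false
```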
eksctl _can_ launch managed nodegroups in private subnets, but the Autoscaling Group provisioned by the Managed Nodegroups API always assigns a public IP to the nodes in the nodegroup, and offers no control over configuring it.
Because the private networking feature in unmanaged nodegroups does not fully map to managed nodegroups, it was decided not to support it, as it can create confusion and lead to security violations/issues.
when I create managed Nodegroup via the console or Terraform the instances always get assigned a public IP address even if they are in the private subnets?
@sionsmith As mentioned in my comment above, the EC2 launch template provisioned by the Managed Nodegroups API is configured to assign a public IP to the instances launched in the ASG, and it does not allow disabling it.
That's a valid issue that I've noticed as well, but it is different from my issue where a managed node group cannot be created together with private EKS cluster...
@cPu1 Thank you for the perspective. Do you have thoughts on a path forward? I find myself creating the cluster with eksctl and creating the nodegroups manually with the AWS Console to work around this issue.
I would propose a PR; however, based on your comments above, this looks like a design decision, not a missing feature.
the Managed Nodegroups API always assigns a public IP to the nodes in the nodegroup, and offers no control over configuring it
@cPu1 Did you all provide feedback to Amazon yet that this should be an option for Managed Node Groups? I'm assuming if they added it, you all would be able to easily offer the privateNetworking flag to ManagedNodeGroup (mirroring NodeGroup)?
Did you all provide feedback to Amazon yet that this should be an option for Managed Node Groups?
Yes, they are aware of this.
I'm assuming if they added it, you all would be able to easily offer the privateNetworking flag to ManagedNodeGroup
@cervantek that's correct, it's the only blocker preventing us from supporting the privateNetworking feature.
Hilariously, the containers team say they are overriding our private subnets and forcibly adding public IPs for our own good 🤣
https://github.com/aws/containers-roadmap/issues/607
Hopefully this will be sorted out soon, by making this override an option rather than the unchangeable default.
eksctl _can_ launch managed nodegroups in private subnets, but the Autoscaling Group provisioned by the Managed Nodegroups API always assigns a public IP to the nodes in the nodegroup, and offers no control over configuring it.
Does anyone know how to launch managed nodegroups in existing private subnets using eksctl? If I only specify private subnets in the ClusterConfig file, it always fails with an error like "No export named eksctl-
Can you please share your yaml file as example? Thanks.
I am not concerned about the public IP assigned to EC2 instances in private subnets. We just would like to use eksctl to create the managed nodegroups in existing private subnets.
The following is my config file. The two subnets in the file are private subnets.
```
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mytesteks01
  region: us-east-1
  version: "1.14"
vpc:
  id: "vpc-########"
  cidr: "10.161.0.0/16"
  subnets:
    private:
      us-east-1f:
        id: "subnet-0eecb7e59fd1a28c3"
        cidr: "10.161.40.0/22"
      us-east-1d:
        id: "subnet-0e815470791881b3a"
        cidr: "10.161.44.0/22"
iam:
  serviceRoleARN: "arn:aws:iam::##########:role/ekstestservicerole"
managedNodeGroups:
  - name: my-test-m5-private
    labels: {pool: my-test-m5-private}
    instanceType: m5.large
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 50
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/id_rsa.pub
    tags:
      'Name': 'mytest'
```
Getting the same error as @billchen8888: "No export named eksctl--cluster::SubnetsPublic found." during managed nodegroup provisioning.
eksctl: version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.13.0"}
You can create ManagedNodeGroups via AWS Console that are configured to use private only subnets. So this shouldn't be a requirement anymore.
Is this issue on the roadmap to fix?
so is there any guidance if we want to use eksctl to create an EKS cluster with private network (aka private subnets) where your worker nodes do not have public IP assigned to them?
or is this not supported at the moment?
I am also trying to create a managed node group with eksctl on a private subnet. But so far it seems that doing it through the console is the only way to specify specific subnets.
```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: reciter
  region: us-east-1
managedNodeGroups:
```
@github4es @sarbajitdutta It's been a while but from what I recall, the workaround is to first create it public and then run an eksctl update to make it private.
You can find more info in issue https://github.com/weaveworks/eksctl/issues/649#issuecomment-540584841
EDIT:
Please disregard above as it was for a different issue.
@BernhardLenz Private endpoint and private networking for managed nodegroups are different features. Not sure if eksctl managedNodeGroup could accept a list of subnets (basically stripping any public ones) and do what we are forced to do via the console. It would be a workaround until the privateNetworking key works on managedNodeGroup.
@BernhardLenz
It's been a while but from what I recall, the workaround is to first create it public and then run an eksctl update to make it private.
Can you elaborate on how to do that? The eksctl CLI only has update cluster; there's no update nodegroup, so I'm not sure how to do that.
@cdeenneen, cc @unfor19 Chris you are right, I had issues with both, private endpoints and the issue mentioned in this thread and I got them mixed up. I believe we ended up not using managed endpoints at the time. I'm surprised to see this hasn't been fixed.
So to sum it up - it's still not possible to use managed flag with privateNetworking flag. Is there a workaround to manage everything in eksctl, and then manually change it in AWS Console? I wasn't able to do that :\
Would like to bump this issue as well
Recent announcement by AWS team related to this issue https://aws.amazon.com/blogs/containers/upcoming-changes-to-ip-assignment-for-eks-managed-node-groups/
They also mentioned eksctl, so I assume the maintainers are aware of this and will plan accordingly.
You can create ManagedNodeGroups via AWS Console that are configured to use private only subnets. So this shouldn't be a requirement anymore.
shit reply, question is using eksctl..
@vsakati Excuse you? The point of the comment is that eksctl is blocked by the assumption that managed nodegroups "require" public subnets, when in fact they do not. So looking for a public subnet should no longer be a requirement in eksctl. In fact, on April 20 managed nodegroups will no longer require attaching public IPs to workers (private or public subnets).
when I create managed Nodegroup via the console or Terraform the instances always get assigned a public IP address even if they are in the private subnets?
@sionsmith As mentioned in my comment above, the EC2 launch template provisioned by the Managed Nodegroups API is configured to assign a public IP to the instances launched in the ASG, and it does not allow disabling it.
@cPu1 I think the issue here is that public IPs and what eksctl considers "public subnets" are 2 different things, since you can create managed nodegroups on "private" subnets (with public IPs being attached). Even though the update is coming on April 20 to remove the public IP requirement, it seems like the code can be modified today for eksctl to stop looking for public subnets as a requirement.
So https://github.com/weaveworks/eksctl/blob/b308244e72a1f52130c1c370223b20ec4526ae8c/pkg/cfn/builder/managed_nodegroup.go#L108-L109 should be able to be updated to match nodegroup.go where privateNetworking: true allows for Private subnets. (https://github.com/weaveworks/eksctl/blob/b308244e72a1f52130c1c370223b20ec4526ae8c/pkg/cfn/builder/nodegroup.go#L198-L201)
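The suggested change can be sketched roughly as follows. This is a hypothetical illustration, assuming names that are NOT eksctl's actual identifiers: the idea is that the managed-nodegroup builder would choose the subnet export based on the nodegroup's privateNetworking flag, mirroring what nodegroup.go already does for unmanaged nodegroups.

```go
package main

import "fmt"

// subnetsExport picks which CloudFormation export the nodegroup
// stack should import its subnets from. Hypothetical helper name;
// the export suffixes mirror the "SubnetsPublic"/"SubnetsPrivate"
// names seen in the error messages in this thread.
func subnetsExport(privateNetworking bool) string {
	if privateNetworking {
		return "SubnetsPrivate"
	}
	return "SubnetsPublic"
}

func main() {
	fmt.Println(subnetsExport(true))  // SubnetsPrivate
	fmt.Println(subnetsExport(false)) // SubnetsPublic
}
```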
@cPu1 My thinking is same as what @cdenneen mentioned in previous comments.
We discussed this one long time back, and we were not sure how AWS EKS team solves issue with public IP assignment.
Now, things are quite clear to me: as long as eksctl supports _private_ subnets for managed nodegroups, users will have a much better experience with eksctl, and it perfectly aligns with the AWS roadmap in the future.
https://github.com/weaveworks/eksctl/pull/1791#issuecomment-604913253
@cPu1 I think the issue here is that public IPs and what eksctl considers "Public Subnets" are 2 different things. Since you can create managedNode Group on "private" subnets (with public IPs being attached). Even though the update is coming on April 20 to remove the Public IP requirement it seems like the code can be modified today for eksctl to stop looking for public subnets as a requirement.
@cdenneen While we can relax that requirement in eksctl, the main motivation for not supporting the private networking feature (nodeGroup.privateNetworking) for managed nodegroups was, as mentioned in my comment above:
eksctl _can_ launch managed nodegroups in private subnets, but the Autoscaling Group provisioned by the Managed Nodegroups API always assigns a public IP to the nodes in the nodegroup, and offers no control over configuring it.
Because the private networking feature in unmanaged nodegroups does not fully map to managed nodegroups, it was decided to not support it as it can create confusion and lead to security violations.
I should note that while these instances cannot be reached via their public IP if they're in a private subnet (a subnet with no route to an internet gateway), having a public IP assigned can still be a security violation and not meet the security checklist for some organisations.
That said, we are working on moving the public IP allocation part to the subnet level. This change will help enable support for private networking for MNG when these changes are out: https://aws.amazon.com/blogs/containers/upcoming-changes-to-ip-assignment-for-eks-managed-node-groups/
@cPu1 Does that mean we can expect support for private subnets for managed node groups soon after April 20? This is currently the only thing blocking us from using eksctl for a production workload, so it is something we look forward to :)
@mattias-fjellstrom yes, that's correct. We are working on getting support for private networking for managed nodegroups out in the next release of eksctl.
This is out in https://github.com/weaveworks/eksctl/releases/tag/0.18.0
Managed nodegroups now support the privateNetworking feature, and nodes in a private nodegroup no longer get a public IP.
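For anyone landing here later, a minimal ClusterConfig sketch using the new support (cluster name and region are placeholders; verify the exact schema against the eksctl docs for your version):

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: private-workers
    instanceType: m5.large
    desiredCapacity: 2
    # Launches nodes in private subnets without public IPs
    # (supported for managed nodegroups as of eksctl 0.18.0)
    privateNetworking: true
```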