As discussed on Slack, we will deprecate this field and remove it in the next version of the API. The field was added without a particular use case in mind and is not considered useful enough at present.
This is something we will need to fix in the next API version.
xref #888
There is definitely a use case here. In an AWS VPC you can use multiple CIDR blocks. I would like EKS to run privately in one VPC CIDR and have a second VPC CIDR for public access (company policy).
@errordeveloper I think keeping vpc.extraCIDRs in the ClusterVPC schema would allow additional CIDRs to be added to the ControlPlaneSecurityGroup, and would at least save the ec2 authorize-security-group-ingress step from https://github.com/weaveworks/eksctl/issues/1745 (if I understand what this setting is/was originally meant for).
It's also confusing whether this has been removed or not, because the spec still shows it, yet I added an array under that key in the cluster VPC configuration and it did nothing.
Wouldn't this be a good use case for adding additional CIDRs to the ControlPlaneSecurityGroup, and maybe even an attachIDs option for existing security groups people might want to add?
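For illustration only, here is a rough sketch of what a ClusterConfig could look like if vpc.extraCIDRs behaved the way proposed above. The field is in the schema, but as noted further down no code was ever merged for it, so this behaviour is hypothetical and the CIDR values are placeholders:

```yaml
# Hypothetical: vpc.extraCIDRs exists in the schema but is not implemented,
# so these CIDRs are NOT actually added to the ControlPlaneSecurityGroup today.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-private-cluster
  region: us-east-1

vpc:
  extraCIDRs:            # proposed: open 443 on the control plane SG for these ranges
    - "10.100.0.0/16"    # example corporate/VPN range
    - "192.168.10.0/24"  # example admin range
```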
@martina-if did this get removed from the latest release? It looks like my extraCIDRs are ignored.
@cdenneen extraCIDRs never had any code merged for it, so it doesn't do anything.
@tiffanyfay understood, but right now when you create a cluster with public: false and private: true there is no inbound 443 rule to allow access to the API, resulting in: Error: timed out waiting for control plane "eks-private-cluster" after 25m0s
We need a way to attach SGs/CIDRs to the control plane as well as for remote access. Currently remoteAccess for managedNodeGroups allows ssh.sourceSecurityGroupIds, but that doesn't fix the control-plane access issue, and honestly it requires you to pre-create SGs outside of this "infrastructure as code", which I think is bad practice when the SG's only function is this. (If you had a shared SG for your company that you use globally, I understand, but not in the most basic use case here.)
So what would be useful is to specify an array in vpc.extraCIDRs (as mentioned above back in February), which could eliminate self-managing SGs to attach here; it would add these CIDRs to both the CP and NG SGs as needed.
Currently it appears the only way to get 443 access to the cluster is to run a series of subsequent steps after cluster creation, which makes having vpc.extraCIDRs backed by functional code that much more necessary.
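For context, this is roughly the scenario being described, as a minimal sketch using the existing vpc.clusterEndpoints fields (cluster name and region are placeholders). With only the private endpoint enabled and nothing opening 443 from the admin ranges, the machine running eksctl cannot reach the API server and cluster creation times out:

```yaml
# Minimal sketch of the private-endpoint-only scenario above (names are placeholders).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-private-cluster
  region: us-east-1

vpc:
  clusterEndpoints:
    publicAccess: false   # public: false
    privateAccess: true   # private: true
```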
As per #1745, the steps needed would be:
Get the cluster SG and authorize ingress on 443:

```sh
CP_SG=$(aws eks describe-cluster --name $EKS_NAME | jq '.cluster.resourcesVpcConfig.clusterSecurityGroupId' --raw-output)
aws ec2 authorize-security-group-ingress --group-id $CP_SG --protocol tcp --port 443 --cidr $ADMIN_IP
```

@cdenneen it appears you're trying to enable access to a private-endpoint-only EKS cluster.
This can be achieved if you have a pre-existing VPC that you're supplying to eksctl. You can create a security group in the VPC with an ingress rule allowing $ADMIN_IP on port 443, and specify that SG in vpc.securityGroup in the ClusterConfig. This will override the default SG that's added by eksctl to enable communication between the control plane and unmanaged/self-managed nodegroups, so you'll have to make sure to add the correct SGs to any unmanaged nodegroups. Managed nodegroups will not be affected by this setting, as they use the cluster security group that EKS creates by default.
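A minimal sketch of that approach, assuming a pre-existing VPC and subnets and a pre-created security group that allows $ADMIN_IP on port 443 (all IDs below are placeholders):

```yaml
# Sketch only: all IDs are placeholders for a pre-existing VPC, subnets,
# and a pre-created security group with an ingress rule for $ADMIN_IP on 443.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-private-cluster
  region: us-east-1

vpc:
  id: vpc-0123456789abcdef0
  securityGroup: sg-0123456789abcdef0   # overrides the default control-plane SG added by eksctl
  subnets:
    private:
      us-east-1a:
        id: subnet-0123456789abcdef0
      us-east-1b:
        id: subnet-0123456789abcdef1
  clusterEndpoints:
    publicAccess: false
    privateAccess: true
```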
@cPu1 it might be worth documenting that key (https://eksctl.io/usage/vpc-networking/). Also, from the wording here it appears that managed nodegroups will not be affected by this setting, so it wouldn't help in my case, resulting in needing to modify the clusterSecurityGroupId manually?
> from the wording here it appears that Managed nodegroups will not be affected by this setting so it wouldn't help in my case.
I meant to say that if you specify your own vpc.securityGroup, you'll need to add the correct SGs for unmanaged nodegroups, but managed nodegroups will continue to work without requiring any changes.
@cPu1 using vpc.securityGroup is by definition "for communication between control plane and nodes", however the requirement here is for "communication from other ranges to the control plane API".
If you don't override this value, would you still need to add the correct SGs for unmanaged nodegroups?
I think there should also be an option for attaching additional SGs to the cluster as well.
Currently, if you are using a VPC over VPN or Direct Connect, your personal machine isn't part of a SG, so you can't add your SG to the cluster because you don't have one. In this case you must add your CIDR ranges to the existing SG, which is where extraCIDRs would be useful to actually implement.
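To make the ask concrete, here is a purely hypothetical sketch of what this could look like. None of it is implemented: extraCIDRs is in the schema but ignored, and the vpc.securityGroups.attachIDs key is borrowed from the nodegroup-level attachIDs idea mentioned earlier, not a real setting:

```yaml
# Purely hypothetical: neither behaviour below is implemented in eksctl.
# extraCIDRs exists in the schema but is ignored; vpc.securityGroups.attachIDs
# is an invented key modelled on the nodegroup-level securityGroups.attachIDs.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-private-cluster
  region: us-east-1

vpc:
  clusterEndpoints:
    publicAccess: false
    privateAccess: true
  extraCIDRs:                     # proposed: allow these ranges to reach the API on 443
    - "10.50.0.0/16"              # example VPN / Direct Connect range
  securityGroups:
    attachIDs:                    # proposed: attach existing SGs to the control plane
      - sg-0123456789abcdef0
```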
@tabern any thoughts on needing the ability to add extraCIDRs to the CP SG in order to access it from Direct Connect? Not everyone uses a dedicated bastion host that is part of a SG.