What happened?
I am trying to create a cluster that spans all AZs in us-east-1 (a, b, c, d, e, f), since I have an application that only needs to stay within the region but can run equally well in any AZ. When I try to use more than four AZs, I get the following error message:
❯ eksctl create cluster -f eks-cluster-test.yaml
[ℹ] using region us-east-1
[✖] insufficient number of subnets (have 8, but need 12) for 6 availability zones
This is because eksctl only creates 8 subnets, and there needs to be a public and private subnet per AZ. I cannot figure out how to get eksctl to create more subnets, but it should automatically scale the subnets by AZs.
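The arithmetic behind the error can be sketched in a few lines of Python. This is only an illustration of the behavior described in this thread, not eksctl's actual code; the 192.168.0.0/16 default CIDR and the fixed 8-way split are taken from the eksctl VPC docs:

```python
import ipaddress

# eksctl's default VPC CIDR (192.168.0.0/16) is divided into 8 fixed subnets.
vpc = ipaddress.ip_network("192.168.0.0/16")
subnets = list(vpc.subnets(prefixlen_diff=3))  # 2**3 = 8 subnets (/19 each)
assert len(subnets) == 8

# One public + one private subnet is needed per AZ.
def subnets_needed(num_azs):
    return 2 * num_azs

for azs in range(4, 7):
    need = subnets_needed(azs)
    status = "OK" if need <= len(subnets) else "insufficient"
    print(f"{azs} AZs need {need} subnets (have {len(subnets)}): {status}")
```

With 6 AZs this gives "need 12, have 8", matching the error message above; 4 AZs is the last count that fits.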
What you expected to happen?
How to reproduce it?
I use the following config file as eks-cluster-test.yaml:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

nodeGroups:
  - name: control-nodes-1
    instanceType: t3.small
    desiredCapacity: 1

availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f"]
```
Then run eksctl create cluster -f eks-cluster-test.yaml.
Anything else we need to know?
Versions
Please paste in the output of these commands:
$ eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.6.0"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T13:57:45Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Hi @albertmichaelj I think only some us-east-1 AZs support EKS? Possibly only a/b/c do. I think I saw another user with an error due to that. And this mentions it too:
https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
Hi @whereisaaron. This is not the same issue. I’m pretty confident this is about the fact that eksctl only creates 8 subnets (and it needs a public and private subnet for each AZ). For example, if I use either of the below config files, things work fine:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

nodeGroups:
  - name: control-nodes-1
    instanceType: t3.small
    desiredCapacity: 1

availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"]
```
Or
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

nodeGroups:
  - name: control-nodes-1
    instanceType: t3.small
    desiredCapacity: 1

availabilityZones: ["us-east-1b", "us-east-1c", "us-east-1d", "us-east-1f"]
```
(Notice that between the two config files, I use all AZs in us-east-1 except us-east-1e, which is the AZ that doesn't support EKS mentioned in #817, which I'm assuming is the issue you are referring to.)
However, if I use this config:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

nodeGroups:
  - name: control-nodes-1
    instanceType: t3.small
    desiredCapacity: 1

availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1f"]
```
Then I get the same error (this time stating that I need 10 subnets instead of 12, but that only 8 are available).
At https://eksctl.io/usage/vpc-networking/ it states that the VPC created has only 8 subnets (though it says they are partitioned differently than the behavior I see with the 2 reserved subnets). The error that I get states that there are not enough subnets, consistent with my explanation.
I think that eksctl hard-codes the number of subnets at some level, and they get split between the AZs as one private and one public subnet each. When you have more than 4 AZs, eksctl can't do this, and it errors.
Is there a config that I’m missing here? I know that I can create my own VPC and assign everything manually, but I really don’t want to do that. Moreover, using more than 4 AZs does not seem like an unreasonable typical use case, so it seems like something that should be supported by eksctl.
EKS supports US-East-1 A-F (6 Availability Zones).
However, eksctl takes the VPC CIDR and divides that into 8 subnets.
Since eksctl creates 1 public and 1 private subnet per Availability Zone, if two times the number of Availability Zones is greater than the amount of subnets it can create (8), you will receive the error.
So in effect, eksctl has a hard cap of 4 Availability Zones.
[Edited for clarity]
PS> I ran into the same error and hope they fix this.
Hi @albertmichaelj indeed, eksctl only creates 8 subnets (from which it uses 3 public and 3 private).
A workaround is to define your own VPC and import it.
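For reference, importing an existing VPC looks roughly like this in the config file. The subnet IDs below are placeholders; each AZ you want to use needs both a public and a private subnet listed (see the vpc-networking docs linked above for the full schema):

```yaml
vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-0aaa11111111111111 }  # placeholder IDs
      us-east-1e: { id: subnet-0bbb22222222222222 }
    public:
      us-east-1a: { id: subnet-0ccc33333333333333 }
      us-east-1e: { id: subnet-0ddd44444444444444 }
```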
@martina-if Thanks for your reply. However, I think that the current behavior is unnecessarily limiting. Consider this a feature request to allow for more subnets. The number of subnets created could depend on the number of availability zones in the config file, for example.
Can we please add a feature request to increase the 8 limit to 16?
@albertmichaelj @billnbell this feature should be pulled into the next release 😄 https://github.com/weaveworks/eksctl/pull/2804