Eksctl: Create cluster in existing VPC fails with VPC CIDR block "192.168.0.0/16" is not the same as "172.31.0.0/16"

Created on 30 Jan 2019 · 6 comments · Source: weaveworks/eksctl

What happened?
Attempting to create a cluster in an existing VPC with only private subnets fails with `VPC CIDR block "192.168.0.0/16" is not the same as "172.31.0.0/16"`.

I built eksctl from master (commit d1791278253d3e4b3739150e6a8d8d8666ba150a)

What you expected to happen?
A cluster to be created :-)

How to reproduce it?
```
eksctl -v4 create cluster \
  --name=$CLUSTER_NAME \
  --region=us-west-2 \
  --vpc-private-subnets=subnet-XXXXX,subnet-XXXXX,subnet-XXXXX \
  --nodes=1 \
  --node-type=r5.12xlarge \
  --ssh-access --ssh-public-key=XXXXX \
  --node-volume-size=500 \
  --node-volume-type=gp2 \
  --asg-access \
  --full-ecr-access \
  --node-private-networking \
  --node-ami=auto
```

Anything else we need to know?

Versions
Please paste in the output of these commands:

```
$ eksctl version
Built from master (commit d1791278253d3e4b3739150e6a8d8d8666ba150a)
$ uname -a
Linux ip-172-31-4-191 4.4.0-1070-aws #80-Ubuntu SMP Thu Oct 4 13:56:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
```

Logs
This is the only log output with `-v 4`:

```
2019-01-30T00:41:03Z [ℹ] using region us-west-2
2019-01-30T00:41:04Z [▶] role ARN for the current session is "arn:aws:iam::XXXXXX"
2019-01-30T00:41:04Z [✖] VPC CIDR block "192.168.0.0/16" is not the same as "172.31.0.0/16"
```

All 6 comments

I am also facing the same issue. I'm using the YAML file below:

```
apiVersion: eksctl.io/v1alpha3
kind: ClusterConfig

metadata:
  name: xxxxxxx
  region: ap-northeast-1

vpc:
  id: "vpc-xxxxxxxx"
  cidr: "10.0.0.0/16"
  subnets:
    Private:
      ap-northeast-1a:
        id: "subnet-xxxxxxxx"
        cidr: "10.0.129.0/24"

      ap-northeast-1c:
        id: "subnet-yyyyyyyy"
        cidr: "10.0.128.0/24"

nodeGroups:
  - name: xxxxx-nodegroup
    labels: {role: workers}
    instanceType: t3.small
    desiredCapacity: 2
    privateNetworking: true
```

The command line is `eksctl create cluster -f cluster.yaml --verbose=4`, and the log output is:

```
2019-01-30T12:32:27+09:00 [ℹ]  using region ap-northeast-1
2019-01-30T12:32:27+09:00 [▶]  role ARN for the current session is "arn:aws:iam::xxxxxxxxxxxxx:user/xxxxxxxxxx"
2019-01-30T12:32:28+09:00 [✖]  subnet ID "subnet-xxxxxxxx" is not the same as "subnet-yyyyyyyy"
```

The eksctl version is:

```
$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.19"}
```

@danielchalef I guess you should also specify the exact CIDR of your own VPC, e.g. `--vpc-cidr "172.31.0.0/16"`.
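For example, a sketch of the original command with that flag added (trimmed to the relevant flags; the subnet IDs are the placeholders from the report):

```
eksctl -v4 create cluster \
  --name=$CLUSTER_NAME \
  --region=us-west-2 \
  --vpc-cidr "172.31.0.0/16" \
  --vpc-private-subnets=subnet-XXXXX,subnet-XXXXX,subnet-XXXXX \
  --node-private-networking
```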

@s-tokutake Perhaps your subnet IDs don't match their corresponding availability zones?

Try swapping your subnet IDs between availability zones:

```
vpc:
  id: "vpc-xxxxxxxx"
  cidr: "10.0.0.0/16"
  subnets:
    Private:
      ap-northeast-1a:
        id: "subnet-yyyyyyyy"
        cidr: "10.0.129.0/24"

      ap-northeast-1c:
        id: "subnet-xxxxxxxx"
        cidr: "10.0.128.0/24"
```

I'm working on a fix. This is just another case of defaulting that needs to be undone in certain cases. The reason this didn't come up in #471 is that @danielchalef was passing --vpc-cidr and it happened to do the right thing, although we never intended it to be used that way. This does not actually affect config-file users.
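For reference, a rough config-file equivalent of the original command (a sketch only; the name, IDs, AZs, and CIDR are illustrative placeholders, following the v1alpha3 schema shown above):

```
apiVersion: eksctl.io/v1alpha3
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-west-2

vpc:
  id: "vpc-xxxxxxxx"
  cidr: "172.31.0.0/16"
  subnets:
    Private:
      us-west-2a:
        id: "subnet-XXXXX"
      us-west-2b:
        id: "subnet-XXXXX"
      us-west-2c:
        id: "subnet-XXXXX"

nodeGroups:
  - name: ng-1
    instanceType: r5.12xlarge
    desiredCapacity: 1
    privateNetworking: true
```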

As @mumoshu pointed out, @s-tokutake's issue is of different nature (although similar error message).

It was my mistake, as you said. Thanks, @mumoshu @errordeveloper.
