When creating a cluster inside an existing VPC that already contains some subnets,
we get errors like:
W0405 11:59:18.655454 35343 executor.go:109] error running task "Subnet/utility-eu-west-1a.k8s.eu-west-1.test.redacted.net" (9s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '172.31.0.0/22' conflicts with another subnet
status code: 400, request id: 9d2fccda-c45b-422d-8ad4-b18add7f4ba4
Kops should find some non-conflicting IP ranges for the subnets it creates.
When demoing kops ("Look, this tool is awesome: you get a resilient cluster in your AWS account with just 2 commands!"),
--> errors happen that require complicated-looking workarounds (see below),
and developers leave thinking "it really doesn't seem so easy".
In our existing VPC, we have 3 subnets with these IP ranges:
172.31.0.0/20
172.31.16.0/20
172.31.32.0/20
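For reference, the subnet CIDRs already used in a VPC can be listed with the AWS CLI; a minimal sketch, using the redacted VPC id placeholder from the kops command below:
```
# List the CIDRs already taken in the existing VPC (vpc-redacted is the placeholder id)
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-redacted \
  --query 'Subnets[].CidrBlock' \
  --output text
# 172.31.0.0/20   172.31.16.0/20   172.31.32.0/20
```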
$ kops create cluster \
--cloud aws \
--vpc vpc-redacted \
--master-zones eu-west-1a,eu-west-1b,eu-west-1c \
--zones eu-west-1a,eu-west-1b,eu-west-1c \
--name k8s.eu-west-1.test.redacted.net \
--master-size t2.medium \
--node-size t2.medium \
--cloud-labels "Owner=Me,Stack=K8s-test" \
--networking calico \
--target cloudformation \
--state s3://k8s \
--dns-zone=test.redacted.net \
--topology private
$ kops update cluster --state s3://k8s --name k8s.eu-west-1.test.redacted.net --yes
---> errors happen:
W0405 11:59:18.655454 35343 executor.go:109] error running task "Subnet/utility-eu-west-1a.k8s.eu-west-1.test.redacted.net" (9s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '172.31.0.0/22' conflicts with another subnet
status code: 400, request id: 9d2fccda-c45b-422d-8ad4-b18add7f4ba4
W0405 11:59:18.655497 35343 executor.go:109] error running task "Subnet/utility-eu-west-1b.k8s.eu-west-1.test.redacted.net" (9s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '172.31.4.0/22' conflicts with another subnet
status code: 400, request id: 4733e80b-46c3-4d53-83fb-6e9ab6cbeb0b
W0405 11:59:18.655514 35343 executor.go:109] error running task "Subnet/utility-eu-west-1c.k8s.eu-west-1.test.redacted.net" (9s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '172.31.8.0/22' conflicts with another subnet
status code: 400, request id: 75f69b34-7a1a-4ba0-9cac-8fd07fb0e4c8
W0405 11:59:18.655529 35343 executor.go:109] error running task "Subnet/eu-west-1a.k8s.test.eu-west-1.testqbi.stylight.net" (9s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '172.31.32.0/19' conflicts with another subnet
status code: 400, request id: 463ec4be-3728-4e16-948d-ffb289f72e7d
I0405 11:59:18.655590 35343 executor.go:124] No progress made, sleeping before retrying 4 failed task(s)
Our existing subnets in the VPC...
172.31.0.0/20 (from 172.31.0.1 to 172.31.15.254)
172.31.16.0/20 (from 172.31.16.1 to 172.31.31.254)
172.31.32.0/20 (from 172.31.32.1 to 172.31.47.254)
...do indeed conflict with the ones Kops is trying to create:
172.31.0.0/22 (from 172.31.0.1 to 172.31.3.254) -> conflicts with Subnet with range 172.31.0.0/20!
172.31.4.0/22 (from 172.31.4.1 to 172.31.7.254) -> conflicts with Subnet with range 172.31.0.0/20!
172.31.8.0/22 (from 172.31.8.1 to 172.31.11.254) -> conflicts with Subnet with range 172.31.0.0/20!
172.31.32.0/19 (from 172.31.32.1 to 172.31.63.254) -> conflicts with Subnet with range 172.31.32.0/20!
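The conflicts are easy to confirm programmatically; for example, a quick check of the first reported conflict using Python's ipaddress module from the shell (a sketch, not part of the original report):
```
# Does the /22 kops wants overlap the existing /20?
python3 -c "import ipaddress; print(ipaddress.ip_network('172.31.0.0/22').overlaps(ipaddress.ip_network('172.31.0.0/20')))"
# prints: True
```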
Export the config, fix the subnets and replace the config.
$ kops get clusters k8s.eu-west-1.test.redacted.net --state s3://k8s -o yaml > cluster-config.yaml
$ vi cluster-config.yaml
# in this file, in the subnets section, replace the conflicting IP ranges...
subnets:
- cidr: 172.31.32.0/19
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: 172.31.64.0/19
  name: eu-west-1b
  type: Private
  zone: eu-west-1b
- cidr: 172.31.96.0/19
  name: eu-west-1c
  type: Private
  zone: eu-west-1c
- cidr: 172.31.0.0/22
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a
- cidr: 172.31.4.0/22
  name: utility-eu-west-1b
  type: Utility
  zone: eu-west-1b
- cidr: 172.31.8.0/22
  name: utility-eu-west-1c
  type: Utility
  zone: eu-west-1c
# ...with non-conflicting IP ranges, for example:
subnets:
- cidr: 172.31.224.0/19
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: 172.31.64.0/19
  name: eu-west-1b
  type: Private
  zone: eu-west-1b
- cidr: 172.31.96.0/19
  name: eu-west-1c
  type: Private
  zone: eu-west-1c
- cidr: 172.31.128.0/22
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a
- cidr: 172.31.160.0/22
  name: utility-eu-west-1b
  type: Utility
  zone: eu-west-1b
- cidr: 172.31.192.0/22
  name: utility-eu-west-1c
  type: Utility
  zone: eu-west-1c
$ kops replace -f cluster-config.yaml --state s3://k8s
$ kops update cluster --state s3://k8s --name k8s.eu-west-1.test.redacted.net --yes
--> then it works
Kops 1.5.3
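Once the update succeeds, a quick sanity check (a hedged suggestion, not part of the original report) is kops validate cluster:
```
# Confirm the masters and nodes come up after kops update cluster
kops validate cluster --state s3://k8s --name k8s.eu-west-1.test.redacted.net
```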
@chrislovecnm Any update on this fix?
The fix would be in https://github.com/kubernetes/kops/pull/2395
For now we still have the workaround in the description.
@kenden Which description? Can you point me to it?
@dolftax The description of this ticket: export the config with kops get cluster, fix the subnets manually in the YAML file, and then replace the config with kops replace.
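Condensed, that is the same sequence of commands as in the description (same cluster name and state store):
```
kops get clusters k8s.eu-west-1.test.redacted.net --state s3://k8s -o yaml > cluster-config.yaml
# edit the subnets: CIDRs in cluster-config.yaml so they no longer overlap the existing subnets
kops replace -f cluster-config.yaml --state s3://k8s
kops update cluster --state s3://k8s --name k8s.eu-west-1.test.redacted.net --yes
```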
I had opened similar issue yesterday with kops and did the same workaround - https://github.com/kubernetes/kops/issues/2437
Will follow https://github.com/kubernetes/kops/pull/2395
@kenden
So when I run
kops replace -f cluster-config.yaml --state s3://k8s
I get this error:
error: error replacing cluster: Subnet "eu-west-1a" had a CIDR "<the one that I updated>" that was not a subnet of the NetworkCIDR "<the VPC's CIDR>".
I installed kops with brew on macOS; the version is kops 1.5.3.
I updated to kops 1.6.0 but still have the same problem.
Any ideas?
@alifa20
The message means your subnet's IP range is not inside the VPC's IP range.
You can define the VPC range in cluster-config.yaml, for example:
```
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  networkCIDR: 172.20.0.0/16 # the VPC's IP range
  networkID: vpc-ef12345a    # the VPC's id
  subnets:
  # ... your subnets go here; each CIDR must be inside networkCIDR
```
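If you are not sure what the VPC's CIDR is, it can be looked up with the AWS CLI; a small sketch, reusing the placeholder VPC id from the snippet above:
```
# Look up the VPC's CIDR block so networkCIDR can be set to match it
aws ec2 describe-vpcs --vpc-ids vpc-ef12345a \
  --query 'Vpcs[0].CidrBlock' --output text
# e.g. 172.20.0.0/16
```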
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
+1 to the same issue
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
I think this is a duplicate. Can anyone comment?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Kops version 1.8.1 and kubectl 1.10.0
This is still an issue and requires the OP's workaround.
@kenden Could another workaround for this issue be to define --network-cidr as a subset of the <vpc-id>'s CIDR that I am sure has no subnets created in it yet? Wouldn't that be much cleaner than manually assigning subnet CIDRs each time?
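For reference, that suggestion would look roughly like the sketch below; 172.31.128.0/17 is a hypothetical free slice of the VPC, and I have not verified how kops handles a --network-cidr narrower than the shared VPC's CIDR:
```
# Hedged sketch of the suggestion above: give kops a network CIDR that is a
# free slice of the existing VPC, so the subnets it generates cannot collide.
# 172.31.128.0/17 is a hypothetical example value, not from the original issue.
kops create cluster \
  --cloud aws \
  --vpc vpc-redacted \
  --network-cidr 172.31.128.0/17 \
  --zones eu-west-1a,eu-west-1b,eu-west-1c \
  --name k8s.eu-west-1.test.redacted.net \
  --state s3://k8s
```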