When specifying --topology private --vpc=foo --networking calico together with a specific network CIDR, two subnets are created. The private subnet is allocated with respect to the specified CIDR, but the utility subnet (presumably for Calico) does not respect the specified CIDR.
export VPC_ID=foo
export NETWORK_CIDR=172.31.4.0/24
kops create cluster --zones ap-southeast-1a ${NAME} --vpc=${VPC_ID} \
  --network-cidr=${NETWORK_CIDR} --networking calico --topology private
Subnet/ap-southeast-1a.foo
  VPC               name:bar
  AvailabilityZone  ap-southeast-1a
  CIDR              172.31.4.0/22
  Shared            false

Subnet/utility-ap-southeast-1a.foo
  VPC               name:bar
  AvailabilityZone  ap-southeast-1a
  CIDR              172.31.0.0/25
  Shared            false
As a workaround, I got it working by running kops edit cluster $NAME and manually changing the CIDR of the Calico utility subnet (a sketch of such an edit is below). But this should be handled by kops itself, right?
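A sketch of what such an edit can look like in the cluster spec; the CIDR split here is illustrative (anything non-overlapping inside 172.31.4.0/24 would do), and the field layout follows the kops subnets spec:

  subnets:
  # split the requested 172.31.4.0/24 into two non-overlapping halves
  - cidr: 172.31.4.0/25
    name: ap-southeast-1a
    type: Private
    zone: ap-southeast-1a
  - cidr: 172.31.4.128/25
    name: utility-ap-southeast-1a
    type: Utility
    zone: ap-southeast-1a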
@dolftax I need some more details on this, specifically what you are doing. Most power users will utilize our API/YAML interface. You can export the cluster spec via:
kops get cluster $1 -o yaml > $1.yaml
echo " " >> $1.yaml
echo "---" >> $1.yaml
echo " " >> $1.yaml
kops get ig --name $1 -o yaml >> $1.yaml
With the YAML cluster spec you are able to make several fine-grained adjustments.
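Once the file is edited, it can be pushed back with kops replace and rolled out with kops update (same $1 cluster-name argument as above):

kops replace -f $1.yaml
kops update cluster $1        # preview the changes
kops update cluster $1 --yes  # apply them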
Specifically, for CNI providers (except for Weave), nonMasqueradeCIDR: 100.64.0.0/10 is the value that drives most of the configuration for the CNI networking provider. I believe that Calico is using that value properly.
I highly recommend using the clusterspec, and yes, we need to document its use more.
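For clarity, these are two separate fields in the cluster spec; the values below are the one from this report and the kops default, respectively:

spec:
  networkCIDR: 172.31.4.0/24        # the VPC range passed via --network-cidr
  nonMasqueradeCIDR: 100.64.0.0/10  # default; drives the CNI/pod addressing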
On kops create, the ideal behavior would be to query AWS for existing subnets and update the YAML accordingly, but that isn't happening at the moment. A conflicting CIDR is added as Calico's CIDR on kops create.
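For illustration, the kind of pre-flight check being suggested can already be done by hand with the AWS CLI, listing the CIDRs that are already allocated in the shared VPC:

aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=${VPC_ID} \
  --query 'Subnets[].CidrBlock' \
  --output text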
@chrislovecnm I think you're talking about the workaround, which is to run kops create, then manually kops edit with non-conflicting CIDRs, then kops update (the sequence below). But I'm trying to achieve the same result on create by itself.
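Spelled out, with the same ${NAME} as in the report:

# after the kops create shown at the top of the issue, which picks a conflicting utility CIDR:
kops edit cluster ${NAME}          # hand-edit the utility subnet CIDR
kops update cluster ${NAME} --yes  # apply the corrected spec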
@chrislovecnm At first glance, I'm not sure this is entirely relevant to #1171.
@ottoyiu you would know better than I :)
Is this fixed now?
I ran into this bug with kops 1.7.0.
Same issue with kops version 1.8.0
/cc @caseydavenport @blakebarnett not sure how to fix this. Any ideas?
Like @jaipradeesh said, the ideal behavior would be for kops to query for existing subnets to make sure there are no non-tagged conflicts. I think the only other thing to do here is to make it clearer in the docs that nonMasqueradeCIDR is what's used for things like the utility subnets, not the VPC networkCIDR.
Also, validation that everything will actually fit in a /24 would be good... I'm guessing the subnet math assumes some minimum sizes (the preview above carved out a /22 private subnet, which by itself is larger than the requested /24).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close