Kops: --subnets flag not working with create cluster

Created on 30 Jan 2018 · 26 Comments · Source: kubernetes/kops

According to https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md kops should support specifying existing subnets to use when setting up a cluster. When I run

kops create cluster \
--node-size t2.small \
--master-size t2.small \
--node-count 4 \
--cloud aws \
--subnets ${SUBNETS} \
--zones eu-central-1a \
--master-zones eu-central-1a \
--vpc ${VPCID} \
--dns-zone ${ZONE} \
--ssh-public-key="./.ssh/id_rsa.pub" \
--topology private \
--networking calico \
--bastion="true" \
--authorization=RBAC \
--name ${NAME}

I receive the error message

unknown flag: --subnets

Running kops 1.8.0 on a Mac, installed via Homebrew.

From a discussion on the kops-users Slack channel I learned that the flag indeed does not exist. Perhaps the corresponding section should be removed from the document?

lifecycle/rotten

Most helpful comment

Looks like this section needs to be corrected: https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md#shared-subnets

All 26 comments

kops create cluster doesn't have a --subnets flag.

kops create cluster --help will confirm

Some other commands (kops create ig, for example) do have the flag.
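
A quick way to check this yourself (behavior as described above, as of kops 1.8):

kops create cluster --help 2>&1 | grep -i subnet   # should print nothing on 1.8.0
kops create ig --help 2>&1 | grep -i subnet        # should list the --subnet flag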

If you want to attach existing subnets you have to edit the cluster config after the create cluster command is complete.
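
As a minimal sketch of that workflow (flags trimmed; ${VPCID} and ${NAME} are the placeholders from the original report):

# 1. Write only the cluster *spec* to the state store (no --yes yet).
kops create cluster \
  --cloud aws \
  --vpc ${VPCID} \
  --zones eu-central-1a \
  --name ${NAME}

# 2. Open the spec in $EDITOR and put the existing subnet IDs under subnets:.
kops edit cluster ${NAME}

# 3. Only then create the actual AWS resources.
kops update cluster ${NAME} --yes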

Try checking this out: it's a blog post I did on attaching a new cluster to an existing VPC and subnets.

https://blenderfox.com/2018/01/05/guide-to-creating-a-kubernetes-cluster-in-existing-subnets-vpc-on-aws-with-kops/

Looks like this section needs to be corrected: https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md#shared-subnets

@chrislovecnm

I just hit this as well.

Here's the commit where it was added: https://github.com/kubernetes/kops/commit/48d4a7cb1ad5c800c692289023431f2017d2b4dc (committed Dec 14, whereas 1.8.0 was released Dec 4).

Is this functionality coming in a new release of kops? It's pretty handy.

@joelittlejohn the problem is that we are not versioning our documentation; if we did, the 1.8 docs would not include those instructions. An issue to version our documentation has been open for a long time.

So is this option no longer available?

I need to deploy a cluster on shared vpc and subnets for topology design reasons.

Cheers

@marranz this option is only available in master and has not been released. You can accomplish the same steps covered here https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md

Thanks for the info @chrislovecnm .

I will try it, but I'd like to use my own subnets, which were already created by Terraform.

If I compile from source, can I use that feature?

@marranz Yes, you can use it from source, but you need the full source tree: build and upload with S3_BUCKET=s3://my-public-bucket make upload, then export KOPS_BASE_URL=https://s3.amazonaws.com/$MY_BUCKET/kops/1.8.0.

We have had some challenges with a recent change in nodeup.
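
Laid out step by step, the workflow above is roughly the following (the bucket name is a placeholder):

# Build kops from source and upload the artifacts to your own S3 bucket.
S3_BUCKET=s3://my-public-bucket make upload

# Point kops at those artifacts instead of the released binaries.
export KOPS_BASE_URL=https://s3.amazonaws.com/my-public-bucket/kops/1.8.0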

Thanks, I'll have a look. :)

https://github.com/kubernetes/kops/blob/master/hack/dev-build.sh has more than you need, but it covers the env variables well

@huang-jy I followed the blog you shared, but when I change the subnets it complains that the master node is associated with a subnet in the list, so it won't let me replace them.

Did you encounter this?

Also any other advice on how to fix this?

@darrenhaken Did you make the changes on an existing cluster or on one you haven't yet created (and the error is coming up on the first kops update)?

On the first creation. I managed to get it to deploy by adding my subnets (with IDs) but keeping the existing names in place, e.g. eu-west-1. I also had to remove the utility subnets, as they didn't have IDs; I'm not sure what they are used for.

I'm currently having problems with Weave, so I can't confirm it's all working yet.


@darrenhaken Utility subnets are used, IIRC, for things like bastions or mixed public/private topologies. What was the exact error you were getting?

The error happened on editing a brand new cluster

Yes, but the exact error you got?

@huang-jy validation failed: InstanceGroup "master-eu-west-1a" is configured in "eu-west-1a", but this is not configured as a Subnet in the cluster

I can work on getting a dump of the kops config with a diff if needed (will need to wait until tomorrow)

Looks like in your master-eu-west-1a instance group definition (kops edit ig master-eu-west-1a), you're saying you want to use subnet eu-west-1a, but that isn't defined in the main cluster definition (kops edit cluster cluster-name).
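
For reference, the instance group side of that mapping looks roughly like this (relevant field only):

# kops edit ig master-eu-west-1a
spec:
  subnets:
  - eu-west-1a   # must match a subnet *name* defined in the cluster spec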

Try adding this under subnets in the cluster definition:

- cidr: [your CIDR range for the subnet]
  id: [your subnet ID for eu-west-1a]
  name: eu-west-1a
  type: Private  [change this if you need to]
  zone: eu-west-1a
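
Put together, a shared-subnet block in the cluster spec might look like this (the IDs and CIDRs below are placeholders; the Utility subnet is the public one used for the bastion and load balancers):

subnets:
- cidr: 172.20.32.0/19
  id: subnet-0123456789abcdef0   # existing subnet, created outside kops
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: 172.20.0.0/22
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a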

Keeping the name eu-west-1a seems to have done the trick 👍

If I rename it, validation fails; the same happens when renaming the master's instance group definition.

Seems to be working now though :)

👍

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Looks like this one can be closed now that kops 1.9 is out.

Yeah, I'm using this in 1.9 and can confirm that it works.

A related issue, though, is that the --subnets parameter does not handle a mix of Public and Private subnets, and just complains if multiple subnets are in the same zone: https://github.com/kubernetes/kops/issues/5171

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
