Kops: problem to deploy my cluster in AWS Paris zone (eu-west-3c)

Created on 10 Mar 2018  ·  6 comments  ·  Source: kubernetes/kops

Hello! I have a problem deploying my cluster in the AWS Paris zone (eu-west-3c).

:/home/ubuntu# kops version
Version 1.8.1 (git-94ef202)
:/home/ubuntu# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

I use this command line:

:/home/ubuntu# kops create cluster --name=cluster01 --state=s3://data-xxx-xxx --zones=eu-west-3c --node-count=2 --node-size=t2.micro --master-size=t2.micro

unable to infer CloudProvider from Zones (is there a typo in --zones?)

Is there any mistake in my command?


All 6 comments

You need to set `--cloud aws`. The Paris zones are new and are not hard-coded yet, so you need to tell kops which cloud you are using.
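For illustration, here is the command from the question with the `--cloud aws` flag added (the bucket name and instance sizes are the original poster's; adjust them to your setup):

```shell
# Explicitly name the cloud provider so kops does not have to
# infer it from the zone name (eu-west-3 was not yet hard-coded
# in kops 1.8.1).
kops create cluster \
  --name=cluster01 \
  --state=s3://data-xxx-xxx \
  --cloud aws \
  --zones=eu-west-3c \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro
```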

This is fixed in master https://github.com/kubernetes/kops/blob/9e471fe1ddabb6d5c67e1d4c8fdc65df60b4b100/upup/pkg/fi/cloud.go#L104

But we should do a better error message.

Any recommendations on a better error message that would have helped you find the `--cloud aws` flag?

Thank you very much!!!! Is there any documentation where we can find all the available zones with the right syntax?
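Not from this thread, but one way to enumerate the valid zone names for a region yourself is the AWS CLI (requires configured AWS credentials):

```shell
# List the availability-zone names for the Paris region; these are
# the values accepted by the kops --zones flag.
aws ec2 describe-availability-zones \
  --region eu-west-3 \
  --query 'AvailabilityZones[].ZoneName' \
  --output text
```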

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
