Kops: Unable to change master instance type

Created on 14 Dec 2016 · 19 comments · Source: kubernetes/kops

Easy to reproduce using the following set of commands:

  1. create a cluster
    kops create cluster --cloud=aws --dns-zone=k8s.mycluster.com --master-size=t2.nano --node-size=t2.nano --zones=us-east-1a,us-east-1c,us-east-1e --master-zones=us-east-1a --node-count=1 --kubernetes-version=v1.4.6 k8s.mycluster.com

  2. launch cluster
    kops update cluster k8s.mycluster.com --yes

  3. find and update the master node (I am not sure if this step is relevant; see the sketch after this list)
    kops get instancegroups
    kops edit - now edit the instance type to m3.medium

  4. update the cluster again. I know I should do a rolling update, but I mistakenly issued this command and it was hanging forever.
    kops rolling-update cluster : it shows "No rolling-update required"
    kops rolling-update cluster --yes : it does not change the master instance size
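
As a hedged illustration of step 3, assuming the default instance group name that kops generates for a single master in us-east-1a (master-us-east-1a, which matches the output shown later in this thread), the edit step looks roughly like this:

    # List the instance groups for the cluster
    kops get instancegroups --name k8s.mycluster.com

    # Opens the instance group spec in $EDITOR; change spec.machineType to m3.medium
    kops edit ig master-us-east-1a --name k8s.mycluster.com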

P1 area/documentation

Most helpful comment

@vinayagg I have just tested this with 1.5.0-alpha4 and I got the same result, but this is actually the correct and expected behavior.

Kops is telling you that no rolling update is required because the master launch configuration has not been changed yet. That part is done by the update command.

Here are the steps that you need to follow:

  1. kops edit ig - this will adjust the instance group configuration (in the kops state store) but not the launch configuration at AWS.

  2. kops update cluster --yes to apply these changes on AWS.

  3. After that, kops rolling-update cluster should show you that an update can be performed.

  4. Execute rolling-update with the --yes parameter and your master instance size should be changed.

I found some information about this in doc/

All 19 comments

A few more details, please.

Can you just give me the command line options?

kops create cluster --cloud=aws --dns-zone=k8s.mycluster.com --master-size=t2.nano --node-size=t2.nano --zones=us-east-1a,us-east-1c,us-east-1e --master-zones=us-east-1a --node-count=1 --kubernetes-version=v1.4.6 k8s.mycluster.com

kops update cluster k8s.mycluster.com --yes

kops get instancegroups
kops edit master-xxx

kops rolling-update cluster
kops rolling-update cluster --yes

So first, did the cluster even start? I don't think it even would. A valid use case of having a running cluster is where I would start first ;)

To be clear, t2.nano is probably too small and you would not have a running cluster.

My cluster started just fine.
I will test this with m3.medium instances sometime during this week and report back.

I tested again. After I update the configuration, it does not even detect the change.
I started a cluster with an m3.medium master and an m3.medium node.
Then I updated the master to m3.large.
Then I tried the following:

root@ip-172-31-27-134:~# kops rolling-update cluster
Using cluster from kubectl context: vinay.fstest.info

NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-us-east-1a Ready 0 1 1 1 1
nodes Ready 0 1 1 1 1

No rolling-update required
root@ip-172-31-27-134:~# kops rolling-update cluster --yes
Using cluster from kubectl context: vinay.fstest.info

NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-us-east-1a Ready 0 1 1 1 1
nodes Ready 0 1 1 1 1

No rolling-update required

@vinayagg which version of kops??

kops Version 1.4.3

I ended up issuing a bunch of commands (update, upgrade, rolling-update) and finally it did work.
I am not sure what the right set of commands, or the right order, is.

It may just be a documentation issue.

I can't reproduce this. Your original commands worked for me.

OK, let me know what you want to do; it's pretty consistent here.
It's working for me with update + upgrade + rolling-update.
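
For context, a rough sketch of what an update + upgrade + rolling-update sequence looks like, assuming the cluster name vinay.fstest.info from the output above (the exact flags the reporter used are not recorded):

    # Push any pending spec changes to AWS
    kops update cluster vinay.fstest.info --yes
    # Upgrade the cluster spec to the recommended kubernetes version (if any)
    kops upgrade cluster vinay.fstest.info --yes
    # Replace instances so the new launch configuration takes effect
    kops rolling-update cluster --name vinay.fstest.info --yes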

@vinayagg I have just tested this with 1.5.0-alpha4 and I got the same result, but this is actually the correct and expected behavior.

Kops is telling you that no rolling update is required because the master launch configuration has not been changed yet. That part is done by the update command.

Here are the steps that you need to follow:

  1. kops edit ig - this will adjust the instance group configuration (in the kops state store) but not the launch configuration at AWS.

  2. kops update cluster --yes to apply these changes on AWS.

  3. After that, kops rolling-update cluster should show you that an update can be performed.

  4. Execute rolling-update with the --yes parameter and your master instance size should be changed.

I found some information about this in doc/
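
Putting those steps together, a minimal command sketch, assuming the cluster name k8s.mycluster.com from the original report and its default master instance group name master-us-east-1a:

    # 1. Edit the instance group spec; change spec.machineType to the new size
    kops edit ig master-us-east-1a --name k8s.mycluster.com

    # 2. Apply the change to AWS (this updates the launch configuration)
    kops update cluster k8s.mycluster.com --yes

    # 3. Preview: this should now report that the master needs an update
    kops rolling-update cluster --name k8s.mycluster.com

    # 4. Apply: replaces the master instance with the new size
    kops rolling-update cluster --name k8s.mycluster.com --yes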

Thanks for the example @kamilhristov!

Does anyone think we need to take an action item out of this and improve the documentation? If so, does anyone want to volunteer for the PR?

If not, can we close the issue?

@kris-nova, I have included long descriptions in #1656

From the sounds of it, functionality seems OK (?). Marking as documentation.

To be clear, t2.nano is probably too small and you would not have a running cluster.

@chrislovecnm What would you suggest as the minimum instance type for the master?

I currently use the default one, m3.medium. However, in the Kubernetes UI I notice that the total memory usage is only < 200 MiB:
(screenshot: Kubernetes dashboard showing total memory usage, taken 2017-02-11)

I won't go with t2.nano, but if I want to keep the cost to a minimum (for a production cluster with ~20 worker nodes), what would be the recommended instance type?

I can confirm that some people on my team tripped over this, but also that if you follow the process everything works as expected. I agree that better documentation always helps 😄

In this case, I think the problem is the workflow: people do not run kops update cluster --yes after editing.

Perhaps making update part of the rolling-update would solve the issue.

@fikriauliya answering your question is tricky. T2 instances run on CPU credits, which is a bit dangerous depending on what you do. The less-than-great network performance doesn't help either.

Faced the same issue: I couldn't change the instance size of one of my masters to m3.large. I tried several times; I could change the size to t2.medium, t2.large, or t2.xlarge. Then I looked at the activity history of the corresponding Auto Scaling Group in AWS, and there was an error launching the new instance simply because eu-central-1c doesn't support m3.large instances.

(screenshot: Auto Scaling Group activity history showing the failed launch of an m3.large instance in eu-central-1c)
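
As an aside, one way to confirm whether an availability zone offers a given instance type is the EC2 instance-type-offerings API (added to the AWS CLI well after this issue was filed); this is only an illustrative check, not something kops runs itself:

    # Lists the zones in eu-central-1 that offer m3.large; if eu-central-1c is
    # missing from the output, the Auto Scaling Group cannot launch that type there.
    aws ec2 describe-instance-type-offerings \
      --location-type availability-zone \
      --filters Name=instance-type,Values=m3.large \
      --region eu-central-1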

@pavelhritonenko we should ping AWS and determine if we are going to default to m4s or c4s. Unfortunately, you are not the first person to run into this.

@vinayagg can we close out this issue? I believe the main problem is that people attempt to use m3s, which are not always available.

@chrislovecnm just wanted to share the possible cause of the issue, because it wasn't clear what was going on.
