Kops: Change name of cluster

Created on 14 Aug 2016 · 26 comments · Source: kubernetes/kops

Is it possible to change the name of a running cluster?


Most helpful comment

I wanted to add our use case for changing cluster names:

We do cluster upgrades by creating a new cluster, moving traffic to it and after validating the new cluster works as expected we delete the old cluster.

For example:

  1. We have a running cluster at prod.k8s.com
  2. We create a new prod cluster with an upgraded k8s version on prod-new.k8s.com
  3. We direct traffic to prod-new
  4. After cluster validation we delete prod.k8s.com
  5. Now we're left with prod-new.k8s.com. It would be nice to be able to rename this to prod.k8s.com.
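Sketched with kops commands, the flow above might look like the following. The state store bucket, zone, and version flags are assumptions, and `DRY_RUN=echo` makes the script print each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of the blue/green rollover; names, bucket and zone are
# illustrative. Remove DRY_RUN=echo to run for real (needs kops + credentials).
set -eu
DRY_RUN=echo
STATE=s3://example-kops-state     # assumed kops state store
OLD=prod.k8s.com
NEW=prod-new.k8s.com

# Steps 1-2: create the upgraded cluster alongside the running one
$DRY_RUN kops create cluster --name "$NEW" --state "$STATE" \
  --zones us-east-1a --yes

# Step 3 happens outside kops: repoint your DNS/service records at prod-new.

# Step 4: once the new cluster validates, delete the old one
$DRY_RUN kops validate cluster --name "$NEW" --state "$STATE"
$DRY_RUN kops delete cluster --name "$OLD" --state "$STATE" --yes

# Step 5: there is no "kops rename", so the survivor keeps the prod-new name.
```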

All 26 comments

Practically speaking no - the name serves as an identifier (which is fixable), but also serves for discovery (so this would require near-simultaneous changing across the cluster).

I guess we could work towards allowing multiple names, which would then let you add and remove names smoothly.

What's the reason for changing the name - just changing your mind? I can also see a company rebrand or something.

Note to self: also check if federation enables smooth transitions here.

Actually, I'm thinking this might be possible, though not with a live update. kops upgrade from kube-up does something similar.

@justinsb: To supply another reason, I'll try to explain the situation we are in currently:

We accidentally set up a cluster using an IP range which wasn't private, we only discovered this mistake way too late. We obviously want to move the cluster to a "safe" private IP range. Due to the way Amazon handles VPCs it's not really possible to fix the IP ranges without moving to a new VPC.

If it was possible to rename a cluster, it would be possible for us to create a copy of the cluster, swap all the DNS records for services, shutdown the old cluster, and "rename" the new cluster to the location of the old cluster.

I think it would still be possible, but it would be a tricky task: I'd need to rename the cluster throughout the kops state store, and then also rename all the AWS resources created by kops.
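Sketching what such an unsupported, offline rename might involve under the hood (bucket and cluster names are made up, this is not an endorsed procedure, and `DRY_RUN=echo` only prints the commands):

```shell
#!/bin/sh
# Hypothetical, unsupported sketch of a state-store-level rename.
set -eu
DRY_RUN=echo
STATE_BUCKET=example-kops-state    # assumed state store bucket
OLD=prod-new.k8s.com
NEW=prod.k8s.com

# kops keeps each cluster's spec, instance groups, secrets and PKI under
# s3://<bucket>/<cluster-name>/, so every object has to move to a new prefix:
$DRY_RUN aws s3 sync "s3://$STATE_BUCKET/$OLD" "s3://$STATE_BUCKET/$NEW"

# The specs embed the cluster name, so they need rewriting and re-importing:
$DRY_RUN sh -c "kops get cluster $OLD -o yaml | sed s/$OLD/$NEW/g > renamed.yaml"
$DRY_RUN kops create -f renamed.yaml --state "s3://$STATE_BUCKET"

# ...and that still leaves AWS-side renames: the KubernetesCluster and
# kubernetes.io/cluster/<name> tags, plus certificates issued for the old name.
```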

It would also provide a lot of flexibility in general. Like you mention, a company might change its name. I could also imagine a company setting up a cluster to be general purpose, then over time finding a need for multiple clusters (one per team, for example), and wanting to rename the original cluster to reflect a team name while creating more clusters.

I'm currently trying out federation.
I set up one cluster to act as the host cluster, and created one more cluster on a single VM.
However, both clusters have the same cluster name (kubernetes, by default).
How can those clusters have the same name?
I tried to change it, but I can't rename a cluster with kubectl.

My questions are:
1. Is there really no way to change a cluster's name?
2. Do the clusters interfere with each other because they share the same name?

My working environment is below:

  • OS: ubuntu 16.04
  • Docker: 1.11
  • K8S: 1.7.1
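For what it's worth, the kubernetes name seen here usually comes from the local kubeconfig rather than from the cluster itself, so the local entries can be relabelled even though there is no server-side rename. A minimal sketch, with made-up context names and server address (`DRY_RUN=echo` just prints the commands):

```shell
#!/bin/sh
# Sketch of relabelling kubeconfig entries so two clusters can coexist
# in one kubeconfig. Context names and the API server address are examples.
set -eu
DRY_RUN=echo
CTX_OLD=kubernetes
CTX_NEW=host-cluster

# Newer kubectl versions can rename a context directly:
$DRY_RUN kubectl config rename-context "$CTX_OLD" "$CTX_NEW"

# Cluster/user entries can also be re-created under new names and wired up:
$DRY_RUN kubectl config set-cluster "$CTX_NEW" --server=https://10.0.0.1:6443
$DRY_RUN kubectl config set-context "$CTX_NEW" --cluster="$CTX_NEW" --user=admin
```

This only relabels your client-side config; the federation control plane still registers clusters by whatever name you give it there.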


Interesting!
I'm curious... @erez-rabih How do you handle persistent volumes? Do you migrate the
content somehow before starting the pods in the new cluster?

@eedugon We use external means to keep our cluster state - mostly RDS and DynamoDB
We use persistent volumes for textual logs (elasticsearch) and numeric data (graphite) which we consider volatile so we're ok with starting a clean sheet with each cluster.

So, is there a way to change the name of a running cluster? I have created two on-prem clusters, but by default they both get the name 'kubernetes', which might be the reason I am not able to create a federation with one of them as the host cluster.

on-prem? @samanthakem I am guessing that you did not use kops ;) You probably want to talk to the team that built the installer you are using.

This may be possible with kops, but it is not supported or documented at this point. Not trivial and much testing is required.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle rotten

This comment from @justinsb suggests that there might be a way to implement this.

I think performing an in-place upgrade of a kops cluster node by node is not a great idea. If things go wrong, a complex roll-back is required. Bringing up a new cluster and switching over to it is far safer, particularly as it allows you to instantly switch back to the existing cluster at the first sign of trouble.

We've chosen a good naming strategy for our clusters, so it would be really good if we could keep this naming strategy once we come to upgrade. To upgrade I'd like to introduce a new cluster, do a Blue/Green deployment, then finally (once we're happy with everything and the new cluster has been running successfully for a while) we would destroy the old cluster and rename the new cluster so that it takes the same name as the original.

I can see how this might be difficult, given how much of the infrastructure depends on the cluster name. Is it remotely likely that this will be possible with kops or is this a crazy dream?

I'd like to have this feature implemented as well, since upgrading a live cluster is somewhat scary and it's better to deploy a new one and switch. I am also currently in a situation where I'd like to rename a cluster: the previous devops named one particular cluster something like dev.example.com, whereas it's actually used as a staging environment, so I'd rather have it named staging.example.com.



I know this is not a trivial enhancement. I'm interested to know from the core team whether there could be a realistic path forward for this one.

I think the only upgrade path we would ever consider is creating a new cluster and doing a blue/green switch over. I don't think we would entertain the idea of an in-place cluster upgrade. So once we upgrade a cluster, renaming it to match the original would be _super_ handy.

/remove-lifecycle rotten

One option for the cluster-upgrade use case would be to do the upgrade twice: the cluster at foo.cluster.com needs an upgrade, so create a new cluster tmp.cluster.com, do a blue/green switch, and once all is good destroy the one at foo.cluster.com. Then create a new cluster again, this time with the same name as the old foo.cluster.com, using the exact same setup files as used for tmp; do blue/green again (though it probably doesn't need to be as thorough if everything is identical), and finally retire tmp.

Another option might be to create a "clone" cluster foo2.cluster.com using the exact same setup as done for foo.cluster.com; blue/green just to confirm that everything is indeed identical; destroy foo.cluster.com; recreate it with upgraded kubernetes; thorough blue/green; switch the rest of traffic over to it; tear down foo2.
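The "upgrade twice" option could be sketched roughly as follows. The state store, zone and cluster names are assumptions, and `DRY_RUN=echo` prints each command instead of running it:

```shell
#!/bin/sh
# Dry-run sketch of the double rollover; entirely illustrative.
set -eu
DRY_RUN=echo
STATE=s3://example-kops-state   # assumed state store
ORIG=foo.cluster.com
TMP=tmp.cluster.com

# Round 1: bring up the upgraded tmp cluster, blue/green, retire the original
$DRY_RUN kops create cluster --name "$TMP" --state "$STATE" --zones us-east-1a --yes
$DRY_RUN kops delete cluster --name "$ORIG" --state "$STATE" --yes  # after checks pass

# Round 2: recreate the original name from tmp's exact spec
# (instance groups would need the same treatment via "kops get ig")
$DRY_RUN sh -c "kops get cluster $TMP -o yaml | sed s/$TMP/$ORIG/g > clone.yaml"
$DRY_RUN kops create -f clone.yaml --state "$STATE"
$DRY_RUN kops update cluster --name "$ORIG" --state "$STATE" --yes

# Finally, after the second blue/green, retire tmp
$DRY_RUN kops delete cluster --name "$TMP" --state "$STATE" --yes
```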

@schollii and it's another chance to verify the completeness and repeatability of the upgrade process, I like it! :)

Added a second option, similar idea, although I think the first would show failure faster. If however you always have multiple instances of a cluster for HA, the second option is trivial.


/remove-lifecycle stale

I would like to see this also. We have production environments that need large upgrades where we can't tolerate any failures, and it would be much easier if we could just rename the original cluster and bring up a new cluster with our naming convention.



Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
