Kops: Upgrading public to private topology

Created on 8 Nov 2016 · 13 comments · Source: kubernetes/kops

Based on some excellent feedback from @MrTrustor on https://github.com/kubernetes/kops/pull/694#issuecomment-258684636:

We will need to tighten up our support for the upgrade path from public to private topologies.

This is probably a high-visibility and heavily used feature that will need to be expedited.
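For context, the target state in the kops cluster spec would look roughly like the sketch below. This is a hedged illustration only; the subnet names, zones, and bastion name are placeholders, not values taken from this issue.

```yaml
# Illustrative fragment of a kops cluster spec with private topology.
# Subnet names, zones, and the bastion name are placeholders.
spec:
  topology:
    masters: private
    nodes: private
    bastion:
      bastionPublicName: bastion.example.com
  subnets:
  - name: us-east-1a          # masters and nodes live here, behind NAT
    type: Private
    zone: us-east-1a
  - name: utility-us-east-1a  # ELBs, NAT gateway, and bastion live here
    type: Utility
    zone: us-east-1a
```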

Labels: good first issue, lifecycle/rotten


All 13 comments

So the elephant in the room is: can we migrate from public to private networking? I would say that initially we do not support it, and we scope out if and how it will work. The premise we envision is that we don't want downtime, if we can help it.

I'm not saying that it cannot be done; I am saying that we want to do it well, if we have a use case for it.

Wondering if this is still a priority for users? Anyone?

It looks like we closed this simply because we weren't going to do it in v1. I'm reopening and marking it for consideration in 1.5.1

@justinsb is this supported now?

I did a little writeup of my progress here:
https://gist.github.com/olvesh/c63b3490a885b4f2054847abe9c41bd2

Not without downtime, though, and I still have a few snags left that I am unsure are a consequence of the switch. But if it helps in some way, please use it for whatever you want.
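The broad strokes of that approach look roughly like the sketch below. This is a rough outline, not a verbatim copy of the gist, and the cluster name is a placeholder.

```sh
# Rough sketch of the public -> private switch (see the gist above for full details).
export CLUSTER_NAME=my.cluster.example.com   # placeholder

# 1. Edit the cluster spec: set spec.topology.masters/nodes to "private",
#    add Private subnets (plus matching Utility subnets), and optionally a bastion.
kops edit cluster $CLUSTER_NAME

# 2. Apply the changes to the cloud resources (new subnets, NAT gateway, route tables).
kops update cluster $CLUSTER_NAME --yes

# 3. Roll the instance groups so masters and nodes are recreated in the private subnets.
kops rolling-update cluster $CLUSTER_NAME --yes
```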

I can confirm that @olvesh's solution works. The only difference in my setup is that I use calico instead of weave networking.
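(For reference, that difference amounts to the networking stanza in the cluster spec; the snippet below is illustrative, and only one provider appears in a given spec.)

```yaml
# kops cluster spec networking stanza (illustrative)
networking:
  calico: {}
# networking:
#   weave: {}
```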

Can someone write this up?

Aka drop this into documentation?

I guess it needs to be reformatted somewhat to fit the documentation format? Or would a copy-paste with small adjustments be enough?

I don't know if I will have the time to do a writeup in the near future, but I can try to make something of it. Where would it be best to put it?

@olvesh let's iterate; it's best to get it in first and then make it pretty.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
