kubeadm upgrade
Allows you to automatically upgrade a cluster created by kubeadm.
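For readers skimming this issue: the command surface this feature tracks is the `kubeadm upgrade` subcommand family. A minimal sketch of the intended flow (the target version below is a placeholder, not a commitment):

```sh
# Verify the cluster is upgradeable and list the versions available to upgrade to
kubeadm upgrade plan

# Upgrade the control plane to a specific version (placeholder version)
kubeadm upgrade apply v1.8.0
```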
@lukemarsden the feature freeze for 1.7 has already passed (on Monday, May 1).
@idvoretskyi yes, sorry – I've been slammed after DockerCon. I've been chatting with @calebamiles, who said it might be possible to make an exception for this, please 🙏
I think this work was already scheduled by the SIG, @idvoretskyi the issue was simply filed late which I don't think is a huge deal.
Yeah, this has been planned for a while, thanks @calebamiles!
@lukemarsden @luxas sounds good to me, thanks.
Making a call on this – we're not going to make it for 1.7. Let's bump this to 1.8. Sorry.
cc @idvoretskyi
@lukemarsden is there a WIP fork or issue on this somewhere? Just wondering what the current state is.
@timothysc nope, no code yet.
For subscribers of this thread, I've now written up a new proposal for how to achieve this; see here: https://docs.google.com/document/d/1PRrC2tvB-p7sotIA5rnHy5WAOGdJJOIXPPv23hUFGrY
Feedback welcome, coding is expected to start next week-ish.
This will be beta in v1.8; our alpha was the kubeadm v1.6-to-v1.7 upgrade route: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm-upgrade-1-7/
The upgrades proposal in markdown format: https://github.com/kubernetes/community/pull/825
I also updated the first comment to correctly reflect the status of this feature.
@lukemarsden @luxas @timothysc who is the actual assignee here?
@idvoretskyi myself and @timothysc. I have the proposal in markdown format in https://github.com/kubernetes/community/pull/825 and have some PRs up for implementing this.
@timothysc is doing some work on kubelet Pod checkpointing and possibly also helping with a new updateStrategy for DaemonSets etc. (dependencies for getting this to work smoothly)
This is going well, I think.
@aaronlevy mentioned that @diegs would be helping with the DaemonSet upgrade strategy.
Docs are here: https://github.com/kubernetes/kubernetes.github.io/pull/4770
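To illustrate the DaemonSet updateStrategy dependency mentioned above, here is a sketch of switching a DaemonSet to rolling updates so an upgrade can roll its pods automatically. It assumes the default kube-proxy DaemonSet that kubeadm deploys; treat it as illustrative, not as the actual implementation:

```sh
# Make the kube-proxy DaemonSet roll its pods when its spec changes,
# so a control-plane upgrade can bump its image without manual pod deletion.
# Assumes the default kube-system/kube-proxy DaemonSet from kubeadm.
kubectl -n kube-system patch daemonset kube-proxy \
  --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
```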
A slightly improved version of kubeadm upgrade will ship in v1.9, but it will still be beta.
@lukemarsden :wave: Please indicate in the 1.9 feature tracking board whether this feature needs documentation. If yes, please open a PR and add a link to the tracking spreadsheet. Thanks in advance!
@lukemarsden is out, so this should be assigned to @luxas
@luxas Bumping for a docs PR. ☝️ Incoming boilerplate:
Please open a documentation PR and add a link to the 1.9 tracking spreadsheet. Thanks in advance!
UPDATE: Sorry, I missed the comment about https://github.com/kubernetes/website/pull/4770. I updated the feature tracker.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
@timothysc @luxas
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate:
stage/{alpha,beta,stable}
sig/*
kind/feature
cc @idvoretskyi
@justaugustus this feature won't move to GA yet; we're improving it incrementally in the beta state.
/lifecycle frozen
/remove-lifecycle frozen
This will be graduated at the same time kubeadm itself goes GA: https://github.com/kubernetes/features/issues/11
This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information, and set the appropriate stage/{alpha,beta,stable}, sig/*, and kind/feature labels.
Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
P.S. This was sent via automation
/kind feature
/stage stable
Tracking as GA for 1.12
Hey there! @lukemarsden I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
@lukemarsden, @luxas @timothysc --
Any update on docs status for this feature? Are we still planning to land it for 1.12?
At this point, code freeze is upon us, and docs are due on 9/7 (2 days).
If we don't hear anything back regarding this feature ASAP, we'll need to remove it from the milestone.
cc: @zparnold @jimangel @tfogo
For this one, upgrades are already useful and the docs are in place for 1.11:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/
There are still some question marks in terms of implementation, and we have some tracking issues in k/kubeadm.
The KEP for this still seems to be in flight:
https://github.com/kubernetes/community/pull/825
Also, we don't have placeholder docs for 1.12 yet.
We're aiming for GA in 1.13 at this point.
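For anyone following along, the 1.11 docs linked above boil down to roughly this flow. This is a sketch assuming a deb-based install; package versions and node names are placeholders:

```sh
# On the control-plane node: upgrade the kubeadm binary, then drive the upgrade
apt-get update && apt-get install -y kubeadm=1.11.0-00   # placeholder version
kubeadm upgrade plan
kubeadm upgrade apply v1.11.0

# For each node: drain it, upgrade and restart the kubelet, then bring it back
kubectl drain <node-name> --ignore-daemonsets
apt-get update && apt-get install -y kubelet=1.11.0-00   # placeholder version
systemctl restart kubelet
kubectl uncordon <node-name>
```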
Thanks for the update, @timothysc !
@timothysc I'm curious if this should just be rolled into #11 and tracked there, since it's all going GA at the same time. Or is it better to track kubeadm features separately? Are kubeadm upgrades still targeting 1.13?
Closing in favor of #11.
We are in the final stages now of moving towards GA.