kubeadm version (use kubeadm version): 1.8.0
Environment:
Kubernetes version (use kubectl version): 1.7.6

While watching TGIK, @jbeda ran sudo kubeadm upgrade apply v1.8.0. The upgrade ran successfully, but the previous manifests backed up to /etc/kubernetes/tmp were deleted.
The general consensus was that the previous manifests should be kept around even after a successful upgrade.
To reproduce:

```
sudo kubeadm upgrade apply v1.8.0
ls -alh /etc/kubernetes/tmp
```

This looks to be the expected behavior per https://github.com/kubernetes/kubernetes/blob/73d1b38604d6f39ec77a94f0f411a46c765d1c4f/cmd/kubeadm/app/phases/upgrade/staticpods.go#L172, so the question is whether the current behavior needs to be altered.
Thanks for filing the issue! It was on my list to follow up.
@luxas -- I think you wrote this code. What do you think about leaving the old manifests there so that users can see what changed? It also enables a (dangerous) rollback path.
If you think the command should "leak" directories for the user -- feel free to change.
I thought that if the user actually wants to do that, he/she would just run cp -r /etc/kubernetes{,-preupgrade} before kubeadm upgrade.
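For concreteness, a minimal sketch of that manual approach (the post-upgrade diff step is added here for illustration and wasn't part of the original suggestion):

```bash
# Snapshot the control-plane config before upgrading; brace expansion
# copies /etc/kubernetes to /etc/kubernetes-preupgrade.
sudo cp -r /etc/kubernetes{,-preupgrade}
sudo kubeadm upgrade apply v1.8.0

# Afterwards, compare the new static-pod manifests to the snapshot.
diff -ru /etc/kubernetes-preupgrade/manifests /etc/kubernetes/manifests
```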
Documentation improvement or code change?
@luxas @jbeda I submitted https://github.com/kubernetes/kubernetes/pull/57116 as an attempt to address the concern here. If y'all think it's better to go the route of adding verbiage to the upgrade docs, I'm happy to go that route instead.
@jipperinbham Thanks! I think more documentation would be better :+1:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/assign @liztio
The manifests are a pretty insignificant amount of data.
I think we gain more by keeping a long-term backup directory than by deleting a temp dir.
@timothysc mentioned it would also be useful to store the old kubeadm-config as a file in the backup dir.
We'd probably store the etcd backup in the same directory?
This is significantly more data to store.
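As a rough sketch of what such a long-term backup directory might hold (the directory name, layout, and commands below are illustrative, not kubeadm's actual behavior; a real etcd snapshot would typically also need --endpoints and TLS flags):

```bash
# Hypothetical backup layout; the naming scheme is illustrative.
BACKUP_DIR=/etc/kubernetes/backup-$(date +%Y%m%dT%H%M%S)
sudo mkdir -p "$BACKUP_DIR"

# The pre-upgrade static-pod manifests -- an insignificant amount of data.
sudo cp -r /etc/kubernetes/manifests "$BACKUP_DIR/manifests"

# The old kubeadm config, saved from the kubeadm-config ConfigMap.
kubectl -n kube-system get configmap kubeadm-config -o yaml \
  | sudo tee "$BACKUP_DIR/kubeadm-config.yaml" > /dev/null

# An etcd snapshot -- this is where the size grows significantly.
sudo ETCDCTL_API=3 etcdctl snapshot save "$BACKUP_DIR/etcd-snapshot.db"
```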
related #651
We should definitely outline best practices as part of an upgrade, but as we mentioned on the call, rollback will become unsupportable for HA with the current methods, and I think kubeadm overstepped here.
Sorry for the lack of feedback here. I think some of the original thinking behind this issue revolved around giving the user some way of previewing the changes prior to actually running the upgrade.
Something I've enjoyed when using kops is the delta it provides when running kops update cluster ... as can be seen in this gist.
I know that providing some sort of preview delta is a much bigger change for kubeadm. If a current goal is outlining best practices and providing good feedback to the user, though, it may be worth pursuing.
@jipperinbham it's a considerable change for UX, but the technical parts are already there to support what you're talking about.
We already render the upgrade manifests beforehand. We would just have to call a diff.
It should definitely be split out into more phases.
Then it would just be a matter of moving it into the plan vs. upgrade.
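Since the upgraded manifests are already rendered to disk before they are moved into place, the preview could be little more than a diff against the live manifests. A sketch, with illustrative directory names (kubeadm stages the rendered manifests under /etc/kubernetes/tmp, but the exact path here is an assumption):

```bash
# Compare live static-pod manifests against the staged upgraded ones.
for f in kube-apiserver kube-controller-manager kube-scheduler; do
  diff -u "/etc/kubernetes/manifests/$f.yaml" \
          "/etc/kubernetes/tmp/kubeadm-upgraded-manifests/$f.yaml"
done
```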
FWIW, we kind of already support this "preview" feature with upgrade --dry-run.
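For example, something like this shows the rendered changes without touching the cluster (exact output format aside):

```bash
sudo kubeadm upgrade apply v1.8.0 --dry-run
```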
So the following makes a lot of sense to me:
upgrade plan --diff

I'm not going to have time to get to 2 or 3 before I leave for holiday.
here's where the manifests are deleted
here's where the directory names are created
going to resume this now that we have more time until freeze