It'd be great if clusters that weren't created by kubeadm could later use kubeadm upgrade functionality. For example, imagine I provision a cluster with bootkube or another tool, I might later want to use the kubeadm CLI to perform upgrades. Re-provisioning with kubeadm might be non-trivial for these users whose dev process is tied up in other tools.
Upgrading bootkube clusters is currently non-trivial because:

- bootkube uses Deployments for the controller-manager and scheduler, whereas kubeadm upgrade assumes every control plane component is a DaemonSet. Here is a dump of the YAML manifests for a bootkube cluster.
- The ConfigMap which kubeadm uses to persist control plane state does not really support easy creation by non-kubeadm clusters.

I think the question we need to ask is: how compatible do we want kubeadm upgrade to be with non-kubeadm clusters? Some solutions to this problem might be:
1. Mandate that only clusters with DaemonSet control planes are compatible; we will not support Deployments. This limits our reach to a potentially small subset of clusters, and we would still need to figure out how kubeadm recognizes custom-named DaemonSets. @aaronlevy Is there a reason why bootkube uses Deployments for the controller-manager and scheduler? How would you feel about changing these to DaemonSets, or at least making it configurable? (I know bootkube is opinionated rather than extensible, so I just wanted to check feasibility here.)
2. Support Deployments in upgrades. The user would then be responsible for specifying exactly what their control plane looks like via configuration flags, e.g. --apiserver-resource-type=DaemonSet, --apiserver-resource-name=apiserver, --scheduler-resource-type=Deployment, --scheduler-resource-name=custom-scheduler-name (a hypothetical invocation is sketched below).
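To make option 2 concrete, here is a rough sketch of what such an invocation could look like. None of these --*-resource-* flags exist in kubeadm today; the flag names, resource names, target version, and the controller-manager flags (extrapolated from the same pattern) are illustrative only.

```
# Hypothetical invocation for option 2 -- these resource-type/name flags do not
# exist in kubeadm; they are illustrative. The idea is that the flags describe how
# each control plane component was deployed, instead of assuming DaemonSets.
kubeadm upgrade apply v1.9.0 \
  --apiserver-resource-type=DaemonSet \
  --apiserver-resource-name=apiserver \
  --controller-manager-resource-type=Deployment \
  --controller-manager-resource-name=kube-controller-manager \
  --scheduler-resource-type=Deployment \
  --scheduler-resource-name=custom-scheduler-name
```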
Would like to hear feedback from other people here.
/cc @luxas
I think a more generalized solution would be to have a kubeadm adopt command that adopts any non-kubeadm-provisioned cluster into a kubeadm-provisioned cluster. After that, any other kubeadm commands should work. I expect this adoption process to be a one-way journey.
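As a rough sketch of what that flow could look like (purely hypothetical; kubeadm has no adopt subcommand, and the flag shown is illustrative), adoption would be a one-time step after which the normal upgrade workflow applies:

```
# Hypothetical flow -- "kubeadm adopt" does not exist; the subcommand name and
# behaviour are illustrative only. The upgrade subcommands are the standard
# kubeadm ones that would become usable once the cluster has been adopted.
kubeadm adopt --kubeconfig=/etc/kubernetes/admin.conf  # one-way: record cluster state the way kubeadm expects
kubeadm upgrade plan                                   # normal kubeadm workflow from here on
kubeadm upgrade apply v1.9.0                           # target version is illustrative
```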
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Although I'd like this to be a possibility, it seems like a stretch goal.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close