Currently, kind executes a series of implicit steps as part of operations such as create. The purpose of this issue is to make those steps explicit and well defined, roughly analogous to kubeadm phases, so that both developers and consumers can execute different steps as needed for their testing.
/cc @kubernetes/sig-cluster-lifecycle
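For concreteness, here is a minimal Go sketch of what explicit, named creation phases could look like. This is illustrative only; the Phase type, names, and signatures are invented for this sketch and are not kind's actual internals.

```go
// Hypothetical sketch: types and names are illustrative, not kind's
// actual internals.
package create

import "fmt"

// Phase is one explicit, independently runnable step of cluster
// creation, analogous to a kubeadm phase.
type Phase struct {
	Name string       // identifier a user could select, e.g. "provision-nodes"
	Run  func() error // executes the step
}

// RunPhases executes phases in order, stopping at the first failure.
func RunPhases(phases []Phase) error {
	for _, p := range phases {
		if err := p.Run(); err != nil {
			return fmt.Errorf("phase %q failed: %w", p.Name, err)
		}
	}
	return nil
}
```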
Making them well defined is the tricky part; as we've added features like HA and proxy support, the required steps have changed.
An immediate alternative offering even more control for iterating on kubeadm phases was suggested at the kind meeting with @fabriziopandini and @neolit123: support creating the nodes but stopping short of any kubeadm provisioning.
FWIW, not all phases need to be at the same level; they can go through promotion, so supportability is also explicit.
Right. Thinking more in terms of how to actually break it down internally at this point.
We will need some more refactoring to clean up interdependencies between the creation steps (which we absolutely should do).
We should start by identifying what steps would be helpful.
As far as I can tell, the most helpful thing would be to have control over the kubeadm steps, for which a way to provision but stop before kubeadm init / kubeadm join would probably be a quicker win in the short term.
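A short sketch of how that short-term option could hang off the same structure, reusing the hypothetical Phase type from the earlier sketch; the "kubeadm-init" phase name is likewise invented for illustration:

```go
// RunUntil executes phases in order but stops before the named phase,
// so RunUntil(phases, "kubeadm-init") would provision the node
// containers without running kubeadm init / kubeadm join.
func RunUntil(phases []Phase, stopBefore string) error {
	for _, p := range phases {
		if p.Name == stopBefore {
			return nil // stop short, leaving the remaining phases unrun
		}
		if err := p.Run(); err != nil {
			return fmt.Errorf("phase %q failed: %w", p.Name, err)
		}
	}
	return nil
}
```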
@BenTheElder I agree. Stopping before init/join is already a well-defined use case, and having a short-term solution in place would be great.
I'm going to send a PR for this
0.3 is overdue, so I'm punting this to 0.4, but I've spent some time experimenting with options for this and with what the phases should be (we need to make a few of them more granular).
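To make "more granular" concrete, one hypothetical ordering might look like the following; these phase names are invented for illustration and were not the decided set:

```go
// Hypothetical phase ordering for illustration only; the actual
// breakdown was still being worked out at this point.
var phaseOrder = []string{
	"create-nodes",    // start the node containers
	"configure-nodes", // write kubeadm configs, load images
	"kubeadm-init",    // initialize the first control-plane node
	"kubeadm-join",    // join the remaining nodes
	"install-cni",     // install the default CNI
}
```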
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.