Describe the solution you'd like
As an old-timer to Kubernetes, I find clusterctl to be weird... It performs operations which "could be preconditions", and also operations which IMO should be part of the .spec of the objects. This issue tries to break down some of the details; feedback is solicited.
- Building a bootstrap cluster...
  - Why? We can / should have it as a precondition for folks to run `kubectl apply`
- Create / Delete (CRDs, Cluster, Machines, ...)
  - `kubectl apply` OR `delete`... possibly with a kubectl plugin to make it feel more 1st class
- Pivot
  - IMO, this should be a field in the cluster.spec that is part of a state machine of the cluster object
- Kubeconfig
  - There are many ways to drop an encrypted secret config to access, and I think creating a workflow for this might better serve the community
What I'm really struggling with is... do we really need this tool? There are portions of clusterctl that could move into a client library: aggregate utility functions for common operations that providers could leverage. I also think a kubectl plugin might be generally useful to treat Cluster API objects as 1st-class resources, but other than that...? In a v1alpha2 world, what workflows are missing that clusterctl provides?
/kind feature
/cc @ncdc @vincepri @detiber
Agree on the Clusters and Machines; CRDs are a little different, though.
There are a few high-level features that I think clusterctl could serve:
1) A generic install
e.g. `capdctl crds | kubectl apply -f -` from https://github.com/kubernetes-sigs/cluster-api-provider-docker.
Install could probably be solved by hosting a concatenated YAML if it were the only use case.
2) `clusterctl logs` for streaming logs from machines (not sure if `kubectl logs` would be extensible for this, but a `kubectl machine-logs` plugin would work as well)
3) `clusterctl status` - provide more real-time status, e.g. node readiness, conditions, top, node events
1 & 3 could be solved with a kubectl plugin. Or possibly an AddOn operator pattern.
2 is a non-starter IMO. If you want that, you should set up your own logging service.
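On the plugin option for 1 & 3: a kubectl plugin is just an executable named `kubectl-<name>` somewhere on `PATH`, so this could be prototyped cheaply. A minimal sketch; the subcommands and the components URL are purely illustrative:

```shell
# A kubectl plugin is any executable named kubectl-<name> on PATH.
# Hypothetical "kubectl capi" plugin; subcommands and URL are made up.
cat > kubectl-capi <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
case "${1:-}" in
  install) kubectl apply -f https://example.invalid/cluster-api-components.yaml ;;
  status)  kubectl get clusters,machines --all-namespaces ;;
  *)       echo "usage: kubectl capi [install|status]" >&2; exit 2 ;;
esac
EOF
chmod +x kubectl-capi
./kubectl-capi 2>&1 || true   # no subcommand: prints the usage line
```

Once the script is on `PATH`, `kubectl capi status` dispatches to it automatically, which is what makes it feel 1st class.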
- Pivot
- IMO, this should be a field in the cluster.spec that is part of a state machine of the cluster object.
Pivot is complicated for a few reasons; I'm not sure it could be simplified outside of an external tool.
- Building a bootstrap cluster...
- Why? We can / should have it as a precondition for folks to run kubectl apply
The big reason for this was to simplify the user experience and avoid needing to run manual pre-steps.
- Kubeconfig
- There are many ways to drop an encrypted secret config to access, and I think creating a workflow for this might better serve the community.
+1 to this approach going forward, it didn't exist for the first iteration
Overall, I think kubectl plugins are a great way for us to move forward. When clusterctl was started, kubectl plugins either did not exist yet or had just been implemented with little to no documentation.
Perhaps we need to reframe the question:
a) Should CAPI and CAPI providers depend on / require clusterctl? I think not
b) Would the community benefit from a tool that can provide higher-order functionality for a better UX? I think yes, but it could be a sub-project or a 3rd project entirely.
- Building a bootstrap cluster...
- Why? We can / should have it as a precondition for folks to run kubectl apply
The big reason for this was to simplify the user experience and avoid needing to run manual pre-steps.
I think the UX with kind is good enough, if not great
a) Should CAPI and CAPI providers depend on / require clusterctl? I think not
I agree here 100%. There should definitely be workflows that exist without clusterctl; that said, certain functionality, such as Pivot, will require some explicit documentation on how to do it without causing issues.
b) Would the community benefit from a tool that can provide higher-order functionality for a better UX? I think yes, but it could be a sub-project or a 3rd project entirely.
I think there may be multiple levels here. The project benefits from having a bootstrapping tool to go from 0 -> cluster-api, but how friendly a UX we can accomplish while keeping the relatively un-opinionated nature of cluster-api proper is debatable.
For a proper user-friendly installer UX, I definitely think that is a separate project.
Pivot is complicated for a few reasons; I'm not sure it could be simplified outside of an external tool.
I'm thinking of a Cluster API Operator to manage the lifecycle across all providers. This component can install the CRDs and controllers and turn down components if needed. It could be the one to do the final teardown. The operator also solves a part of our distribution problem.
I definitely like the idea of an operator, but we also need to remember that deploying the operator presents a chicken/egg situation where an existing cluster needs to be present.
I'm totally cool with having a bootstrap cluster be a precondition.
Same - kind is so easy to set up these days, that should be sufficient as a minimum requirement.
The problem with kind as the bootstrap cluster is that you still need something to handle pivoting. Or would we expect the operator to be able to solve the pivoting problem for us?
My thought is that the operator would do the pivot.
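For concreteness, the sequence the operator (or a human, today) would have to perform for a pivot looks roughly like the following. The kubeconfig names, namespace, and deployment name are illustrative, and the commands are only printed here, not executed:

```shell
# Sketch of a manual pivot; names below are assumptions, not real defaults.
cat > pivot-steps.sh <<'EOF'
# 1. Stop the controllers in the bootstrap cluster so nothing reconciles mid-move
kubectl --kubeconfig bootstrap.kubeconfig -n capi-system scale deployment capi-controller-manager --replicas=0
# 2. Copy the Cluster API objects into the target cluster
kubectl --kubeconfig bootstrap.kubeconfig get clusters,machines --all-namespaces -o yaml > objects.yaml
kubectl --kubeconfig target.kubeconfig apply -f objects.yaml
# 3. Tear down the bootstrap cluster
kind delete cluster --name bootstrap
EOF
cat pivot-steps.sh
```

The ordering is the hard part an operator would have to get right: controllers must be quiesced before the objects move, or both clusters may reconcile the same machines.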
Here's my POV:
- kind. I understand the _chicken/egg problem_ argument, but there's no such thing as a free lunch: kind cannot be _the_ solution. However, I'd argue that kind could be the reference implementation, and the user should be able to opt to use any Kubernetes cluster, e.g. k3s or GKE.
- clusterctl, or any other CLI, is not something I appreciate; as @timothysc put it, kubectl can do the same things, and a plugin for it would still make things feel 1st class.
- @detiber I don't want to hijack the discussion here, but since you mention the concurrency problems of moving from bootstrapper to pivot: I think that, for as long as one can reach the other (say the bootstrapper can reach the pivot), the controller(s) may be smart and rely on the same locks to guarantee there are no concurrency issues. Something along the following lines:
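As a local illustration of the shared-lock idea, here `flock` on a file stands in for a cluster-scoped lock or lease object, and the controller names are hypothetical: whichever controller holds the lock reconciles, and the other backs off.

```shell
# Illustrative only: the bootstrap-side and pivoted controllers contend on one
# shared lock, so only the holder reconciles at any given time.
lock=/tmp/capi-handoff.lock
controller() {
  if flock -n 9; then
    echo "$1: acquired the lock, reconciling"
    sleep 1   # hold the lock while "reconciling"
  else
    echo "$1: lock held elsewhere, backing off"
  fi
} 9>"$lock"

controller bootstrap > handoff.log &   # bootstrap-side controller grabs the lock first
sleep 0.2
controller pivoted >> handoff.log      # pivoted controller finds it held and backs off
wait
cat handoff.log
```

In a real cluster the same pattern would use a shared API object rather than a file, but the invariant is identical: reconcile only while holding the lock.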
Does it make any sense? If yes, then let's take it out of this issue, maybe?
One thing to keep in mind in this discussion: kubectl currently does not allow setting foreground propagation during delete, which is needed to ensure that resources are cleaned up properly; see #985. Providing something to safely delete resources might be a good thing, whether that is a standalone tool a la clusterctl or a kubectl plugin.
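As an aside on mechanics: even where the kubectl flag is missing, the API server itself accepts a foreground policy via a `DeleteOptions` body. A sketch, with the final `curl` shown but not run, and the resource path purely illustrative:

```shell
# DeleteOptions payload asking the API server for foreground cascading deletion.
cat > delete-options.json <<'EOF'
{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}
EOF
cat delete-options.json
# With `kubectl proxy` running on :8001 (not run here; path is illustrative):
# curl -X DELETE -H 'Content-Type: application/json' -d @delete-options.json \
#   localhost:8001/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/clusters/my-cluster
```

A plugin wrapping this would give the "safe delete" UX without a whole standalone tool.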
I like the idea of a kubectl cluster bootstrap plugin. `kubectl create cluster` would be nice, but I don't think you can override `create` like that (?).
I don't see a need for clusterctl. I've been using CAPA entirely without clusterctl. I create a management cluster with kind (one command) and then use kubectl to deploy the CAPI/CAPA bits, and the workload cluster(s).
I do use a CAPA-maintained helper shell script to generate the manifests, though. With the infra/bootstrap provider split, I can see each infra and bootstrap provider shipping its own tool to help generate manifests. There might be room for a CAPI-maintained tool that helps generate the now provider-agnostic Cluster and Machine manifests.
Haven't thought this through, but I suspect kubectl plugins using kustomize would do the job nicely.
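Hedging heavily here, but a sketch of what that might look like: a provider-agnostic base of Cluster/Machine manifests that a plugin renders with kustomize. All file and directory names below are made up:

```shell
# Hypothetical layout: provider-agnostic base, per-provider overlays.
mkdir -p base
cat > base/kustomization.yaml <<'EOF'
resources:
  - cluster.yaml
  - machine.yaml
EOF
cat base/kustomization.yaml
# A per-provider overlay would patch in infrastructure specifics, then:
#   kubectl apply -k overlays/aws/
```

Each infra/bootstrap provider would only ship an overlay, keeping the CAPI-maintained part generic.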
@timothysc @ncdc
There is a CAEP in flight defining the way forward for clusterctl, so IMO we can close this issue.
In the meantime, I'm moving the pivoting bits to a separate issue.
/close
@liztio: Closing this issue.
In response to this:
/close