/kind bug
What steps did you take and what happened:
The current behavior is that if I use an existing cluster as the bootstrap cluster, then after the Kubernetes cluster is provisioned on the remote cloud, the CRDs and CRs are deleted from the existing cluster.
The problem is that if I run two clusterctl create commands one after another: once the second run has just finished creating the CRDs, the first run may delete those CRDs at that moment, which breaks the second clusterctl create.
What did you expect to happen:
We should delete all of the CRs but leave the CRDs for the cloud providers in place for future use.
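For illustration, a minimal sketch of what "delete the CRs but keep the CRDs" could look like using a dynamic client (hypothetical: deleteCRsKeepCRD is not clusterctl code, and the caller would loop over the provider GVRs):

package cleanup

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
)

// deleteCRsKeepCRD removes every custom resource of the given type across all
// namespaces but never touches the CRD object itself, so a later
// clusterctl create run can still create resources of that type.
func deleteCRsKeepCRD(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource) error {
    list, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, item := range list.Items {
        err := client.Resource(gvr).Namespace(item.GetNamespace()).
            Delete(ctx, item.GetName(), metav1.DeleteOptions{})
        if err != nil {
            return err
        }
    }
    return nil
}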
/cc @vincepri @detiber
FYI @qiujian16 @xunpan @jichenjc @hchenxa @clyang82
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
Or at least we should add a parameter to clusterctl to let the end user decide whether they want to delete the CRDs or not.
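For illustration only, such a parameter could be a boolean flag on the create command. This is a hypothetical sketch; neither --keep-crds nor keepCRDs exists in clusterctl:

package cmd

import "github.com/spf13/cobra"

// Hypothetical: --keep-crds does not exist in clusterctl today; this only
// sketches what an opt-out flag for CRD deletion could look like.
var keepCRDs bool

func addKeepCRDsFlag(cmd *cobra.Command) {
    cmd.Flags().BoolVar(&keepCRDs, "keep-crds", false,
        "keep provider CRDs on the bootstrap cluster after pivoting; delete only the CRs")
}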
Personally I'm torn on this. I do think this is a great use case; however, I think the scope and goals of clusterctl are already overloaded and confusing.
I would like to see clusterctl's scope reduced to just being a tool to bootstrap a cluster-api management cluster and potentially introduce a new tool, say clusterdeploy, for deploying a cluster using an existing management cluster.
/assign @timothysc
/area clusterctl
I agree with the general idea of reducing the scope of clusterctl.
Internally we have discussed various ways to prevent CRDs from being deleted on a management cluster while the controllers are still running and there are still managed clusters; however, I am not convinced anything more is necessary than setting the right RBAC permissions. Maybe a webhook could prevent it.
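To make the webhook idea concrete, here is a rough sketch (hypothetical throughout: the handler, the clustersRemaining helper, and the policy are assumptions, not an existing component) of a validating admission webhook that rejects CRD deletion while managed clusters still exist:

package webhook

import (
    "encoding/json"
    "net/http"

    admissionv1 "k8s.io/api/admission/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clustersRemaining is a stand-in; a real implementation would list Cluster
// objects on the management cluster.
func clustersRemaining() bool { return true }

// denyCRDDelete rejects DELETE requests while there are still managed
// clusters; everything else is allowed.
func denyCRDDelete(w http.ResponseWriter, r *http.Request) {
    var review admissionv1.AdmissionReview
    if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
    if review.Request.Operation == admissionv1.Delete && clustersRemaining() {
        resp.Allowed = false
        resp.Result = &metav1.Status{
            Message: "provider CRDs cannot be deleted while managed clusters exist",
        }
    }
    review.Response = resp
    _ = json.NewEncoder(w).Encode(&review)
}

A ValidatingWebhookConfiguration scoped to DELETE operations on customresourcedefinitions would route only those requests to this handler.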
At the very least we should document this behavior (or the fact that clusterctl is not intended to be used to manage management clusters).
@davidewatson @detiber This will be difficult to control if there are multiple end users working on the same existing cluster, and I would expect only the cluster admin to have the ability to create clusters via clusterctl. As a short-term solution, how about adding an option to clusterctl to fix this?
I haven't looked at clusterctl in some time but I believe the --bootstrap-cluster-cleanup option may help you avoid cleaning up the bootstrap cluster.
@xunpan can you help check if --bootstrap-cluster-cleanup can help this case?
My understanding is that this flag only controls whether the whole bootstrap cluster gets deleted, which seems different from the question here...
if cleanupBootstrapCluster {
    cleanupFn = func() {
        klog.Info("Cleaning up bootstrap cluster.")
        provisioner.Delete()
    }
}
Thanks @jichenjc. Yes, it seems we still need a parameter to control this.
@davidewatson comments?
Currently, cleanupBootstrapCluster does nothing when an existing cluster is used.
The CRDs are deleted when the Cluster API resources are pivoted to the target cluster. So if we want this feature, coding work is needed.
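To sketch what that coding work might look like (hypothetical names throughout; deleteProviderCRs and deleteProviderCRDs are assumed helpers, not real clusterctl functions):

package cleanup

import "k8s.io/klog"

// Assumed helpers; a real implementation would act on the bootstrap cluster.
func deleteProviderCRs() error  { return nil }
func deleteProviderCRDs() error { return nil }

// cleanupAfterPivot gates the post-pivot cleanup on keepCRDs so the CRDs can
// survive for the next clusterctl create run.
func cleanupAfterPivot(keepCRDs bool) error {
    if keepCRDs {
        klog.Info("Keeping provider CRDs on the bootstrap cluster; deleting CRs only.")
        return deleteProviderCRs()
    }
    klog.Info("Deleting provider CRDs and CRs from the bootstrap cluster.")
    return deleteProviderCRDs()
}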
@xunpan: You are correct. I have mostly been using kubectl to manage my clusters so I misunderstood the original problem.
More code would be necessary. I'll defer to @detiber on whether a PR implementing this would be accepted. IMO it should be. clusterctl is already too complicated. We can't save it by not merging features...
I'm going to defer this to @timothysc, who is attempting to wrangle and make sense of the various clusterctl issues that we have outstanding.
Closing; we are heading into a heavy refactor for v1alpha3 regarding the CLI UX.