Cluster-api: clusterctl delete command is leaving webhooks behind

Created on 30 Sep 2020 · 13 Comments · Source: kubernetes-sigs/cluster-api

What steps did you take and what happened:

clusterctl delete --all leaves behind the webhook deployments and pods running.

What did you expect to happen:
It should remove/delete webhooks also.

  • Cluster-api version: v0.3.9
    /kind bug
area/clusterctl kind/bug lifecycle/active priority/important-soon

All 13 comments

cc @wfernandes

/priority important-soon

/area clusterctl

/assign

/milestone v0.3.11

/lifecycle active

@kashifest After looking into this issue, I realized that this is actually expected behavior. As per our documentation, we remove the webhook configurations as part of the --include-crd flag.

That is, if you'd like to delete the webhook configuration as well then you'll need to run clusterctl delete --all --include-crd. Or if it is for a specific provider, clusterctl delete --infrastructure aws --include-crd.

The reason deleting webhooks is separated out into --include-crd is that, in a multi-tenant scenario, it would impact all the management clusters. For more info, see here.
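For reference, the delete invocations discussed above can be sketched as follows (assuming clusterctl v0.3.x is installed and your kubeconfig points at the management cluster):

```shell
# Delete all installed providers but keep CRDs and webhook configurations
# (safe default in multi-tenant setups):
clusterctl delete --all

# Delete all providers AND their CRDs/webhook configurations:
clusterctl delete --all --include-crd

# Delete a single provider (here the AWS infrastructure provider)
# together with its CRDs and webhook configurations:
clusterctl delete --infrastructure aws --include-crd
```

Note that --include-crd affects every management cluster sharing those CRDs, so it should be used deliberately.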

@wfernandes Thanks a lot, indeed you are right; I stopped thinking beyond --all, but it works with the --include-crd flag. A follow-up question: I can still see that the cert-manager installed by clusterctl is left behind. Is there another flag, or a plan, to delete it as well? My motivation is to leave the Kubernetes cluster in the same state after delete as it was before init.

Yes, this point was raised before IIRC.
clusterctl installs cert-manager as a convenience feature to improve Day 0 experience. However, clusterctl technically does not manage the cert-manager deployment. I believe the main point was to improve the initial user experience as much as possible without having to create a full-fledged operator to manage the cert-manager.
The most recent change we made was upgrading the installed cert-manager version from v0.11.0 to v0.16.1 due to some security vulnerabilities in v0.11.0.
This specific discussion isn't captured there, but for reference see this issue: https://github.com/kubernetes-sigs/cluster-api/issues/2635

Technically, cert-manager could be installed separately, for example via its Helm chart. clusterctl init verifies that cert-manager is installed by creating a test certificate; if this succeeds, it moves on with the installation. See here for more info.
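A sketch of installing cert-manager separately via its official Helm chart (chart values and the `--set installCRDs=true` flag assume a cert-manager release that supports CRD installation through the chart; check the cert-manager docs for the version you target):

```shell
# Add the official cert-manager chart repository and refresh the index
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager into its own namespace, letting the chart
# install the CRDs as well
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```

With cert-manager pre-installed this way, `clusterctl init` detects the existing installation (via its test-certificate check) and skips installing its own copy, which also keeps cert-manager's lifecycle out of clusterctl's hands entirely.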

Hope this helps!

Cleaning up cert-manager could be problematic, because other components in the cluster (outside of Cluster API) could be relying on it.
If we go down this path, IMO the user should explicitly opt in to cert-manager deletion.

Oh yeah.. and for that reason ⬆️ as well 😄

Maybe we have a separate flag like --include-cert-manager as part of clusterctl delete.

Currently, clusterctl doesn't have any confirmation prompts in its UX but I think if we had a flag available then the user is explicitly stating that their intention is to delete cert-manager.

Thanks guys, I agree with you both @fabriziopandini and @wfernandes, we can close this issue for now
/close

@kashifest: Closing this issue.

In response to this:

Thanks guys, I agree with you both @fabriziopandini and @wfernandes, we can close this issue for now
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

