Cluster-api: Leader election config maps are not deleted when provider is deleted

Created on 30 Jan 2020 · 11 Comments · Source: kubernetes-sigs/cluster-api

What steps did you take and what happened:

  1. Followed steps to install provider components on a kind cluster using clusterctl init.
    $ ./cmd/clusterctl/hack/local-overrides.py
    $ clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm-bootstrap:v0.3.0 --control-plane kubeadm-control-plane:v0.3.0 --infrastructure=aws:v0.4.8 --target-namespace default
  2. Once the components were installed, I tried to delete them:
    $ clusterctl delete --all
  3. Listed the remaining config maps:
    $ kubectl get configmaps --show-labels
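
After the delete, the provider components are gone, but the leader election config maps survive with no labels attached. The last command surfaces output along these lines (the lock name here is an assumption; each provider's manager picks its own leader election ID):

    NAME                              DATA   AGE   LABELS
    controller-leader-election-capi   0      10m   <none>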

What did you expect to happen:
Expected to see no config maps remaining after the delete.

Anything else you would like to add:
I don't believe this is a clusterctl-specific bug, but I'm tagging it with /area clusterctl because I happened to reproduce it via clusterctl. Also, I don't think this is a blocker for this release (0.3).

Environment:

  • Cluster-api version: 72e293f86fb0ba61732ba1a28ddc0fe46d43853c

/kind bug
/priority backlog
/area clusterctl

kind/bug lifecycle/stale priority/awaiting-more-evidence priority/backlog

All 11 comments

The leader election configmaps are created by client-go code (our manager -> controller-runtime -> client-go). Until/unless client-go supports labeling them on creation, we can either:

  1. recognize they'll be orphaned and ignore them
  2. work to adjust the client-go and controller-runtime code so we can label them on creation
  3. have our managers label them post-creation (a rough sketch of this option follows below)
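
For context, this is roughly how the lock ConfigMap comes to exist and what option 3 might look like. A minimal Go sketch, assuming a recent controller-runtime API; the lock name, namespace, and the clusterctl.cluster.x-k8s.io selector label are assumptions, not the actual provider configuration (and note that newer client-go versions default to Lease locks rather than ConfigMaps):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func main() {
        // Enabling leader election is what ultimately makes client-go
        // create the lock object; the ID and namespace are illustrative.
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
            LeaderElection:          true,
            LeaderElectionID:        "controller-leader-election-capi", // assumed lock name
            LeaderElectionNamespace: "default",                         // assumed namespace
        })
        if err != nil {
            panic(err)
        }

        // Option 3: once this instance wins the election, label the lock
        // ConfigMap so tooling like clusterctl can select it for deletion.
        go func() {
            <-mgr.Elected() // closes when this manager becomes leader
            ctx := context.Background()
            cm := &corev1.ConfigMap{}
            key := client.ObjectKey{Namespace: "default", Name: "controller-leader-election-capi"}
            if err := mgr.GetClient().Get(ctx, key, cm); err != nil {
                return // real code would log and retry
            }
            orig := cm.DeepCopy()
            if cm.Labels == nil {
                cm.Labels = map[string]string{}
            }
            cm.Labels["clusterctl.cluster.x-k8s.io"] = "" // assumed selector label
            _ = mgr.GetClient().Patch(ctx, cm, client.MergeFrom(orig))
        }()

        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }

The trade-off is that every provider manager would have to carry this labeling code, which is part of why option 1 wins out in the discussion below.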

/milestone Next

/remove-area clusterctl

@ncdc I'm pretty partial to option 1

If we do want additional handling, I would suggest that we not pivot the configmap and instead let the controllers create new configmaps as they come up on the new management cluster.

Yeah, same. Also, I don't think clusterctl is moving these configmaps.

+1 to keep ignoring the configmaps

/priority awaiting-more-evidence

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Small note: clusterctl is not pivoting providers; they are re-installed from scratch in the target cluster before pivot.
+1 to ignore

Given that the issue went stale and there is general consensus on keeping the current behavior, I think we can close this. What do you think, @wfernandes?

@vincepri Feel free to close.

/close

@vincepri: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

