What steps did you take and what happened:
Created a kind cluster and initialized it using clusterctl init:
$ ./cmd/clusterctl/hack/local-overrides.py
$ clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm-bootstrap:v0.3.0 --control-plane kubeadm-control-plane:v0.3.0 --infrastructure=aws:v0.4.8 --target-namespace default
$ clusterctl delete --all
$ kubectl get configmaps --show-labels

What did you expect to happen:
Expected not to see any ConfigMaps left behind by the providers.
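For context, a leader-election ConfigMap left behind after `clusterctl delete --all` looks roughly like the sketch below. The name and lock contents are illustrative assumptions; the annotation key is the one client-go's resource lock uses. The relevant point is that it carries no clusterctl labels, so label-based deletion skips it:

```yaml
# Hypothetical leftover ConfigMap; name and lock record are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-leader-election-capi   # assumed leader-election lock name
  namespace: default
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"..."}'
  # note: no clusterctl.cluster.x-k8s.io labels, so `clusterctl delete` ignores it
```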
Anything else you would like to add:
I don't believe this is a clusterctl specific bug, but I'm tagging (/area) it because I happened to reproduce it via clusterctl. Also I don't think this is a blocker for this release (0.3).
Environment:
/kind bug
/priority backlog
/area clusterctl
The leader election configmaps are created by client-go code (our manager -> controller-runtime -> client-go). Until/unless client-go supports labeling on creation, we either:
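To illustrate why these ConfigMaps survive deletion, here is a minimal sketch of the label-based ownership check that `clusterctl delete` relies on. The label key matches cluster-api's clusterctl label convention, but treat the exact key and the helper below as assumptions for illustration, not clusterctl's actual implementation:

```go
package main

import "fmt"

// providerLabel is the label clusterctl stamps on resources it installs
// (assumed key, shown for illustration).
const providerLabel = "clusterctl.cluster.x-k8s.io"

// ownedByClusterctl reports whether a resource's labels mark it as
// clusterctl-managed. Leader-election ConfigMaps are created by client-go
// without this label, so a label-based delete never selects them.
func ownedByClusterctl(labels map[string]string) bool {
	_, ok := labels[providerLabel]
	return ok
}

func main() {
	providerCM := map[string]string{providerLabel: ""} // installed by clusterctl
	leaderCM := map[string]string{}                    // created by client-go, unlabeled
	fmt.Println(ownedByClusterctl(providerCM), ownedByClusterctl(leaderCM))
}
```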
/milestone Next
/remove-area clusterctl
@ncdc I'm pretty partial to option 1
If we do want additional handling, I would suggest that we not pivot the configmaps and instead let the controllers create new ones as they come up on the new management cluster.
Yeah, same. Also, I don't think clusterctl is moving these configmaps.
+1 to ignoring the configmaps
/priority awaiting-more-evidence
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Small note: clusterctl is not pivoting providers; they are re-installed from scratch in the target cluster before the pivot.
+1 to ignore
Given that the issue went stale and there is general consensus on keeping the current behavior, I think we can close this. What do you think, @wfernandes?
@vincepri Feel free to close.
/close
@vincepri: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.