Choose one: BUG REPORT
I was trying out kubeadm 1.7.2 and noticed that whenever I run kubeadm reset, the cluster is shut down and the containers disappear (as expected).
I have an external etcd cluster (3.1.0) running, and kubeadm connects to it. The problem occurs when I init a new kubeadm instance on the same machine: the old pods start reappearing, creating conflicts with the current ones.
```
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE
kube-system   kube-apiserver-k8s-1            1/1     Running             0          26s
kube-system   kube-controller-manager-k8s-1   1/1     Running             0          26s
kube-system   kube-dns-1657474582-348qn       0/3     OutOfcpu            0          8m
kube-system   kube-dns-1657474582-hp8vl       0/3     Unknown             0          13m
kube-system   kube-dns-1657474582-kcz5l       0/3     Pending             0          9s
kube-system   kube-dns-1930225767-t8hs1       0/3     ContainerCreating   13         46m
kube-system   kube-proxy-1j8km                1/1     Unknown             0          46m
kube-system   kube-proxy-gdt89                1/1     Running             0          46m
kube-system   kube-proxy-gtj81                0/1     Pending             0          8m
kube-system   kube-scheduler-k8s-1            1/1     Running             0          14m
```
That's actually kind of expected given the current code. It may not be optimal, though, and is probably something we can do better.
What would you expect? For kubeadm to clean up /registry on kubeadm reset when an external etcd is used?
Currently, kubeadm reset does nothing to etcd when an external etcd is configured.
Very open to proposals.
cc @justinsb @kris-nova as this touches kubeadm composability
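For reference, manually clearing the Kubernetes data out of an external etcd v3 cluster is a single range delete; the endpoint, certificate paths, and the default /registry key prefix below are assumptions to adjust for your own setup:

```
# Sketch only: placeholder endpoint/TLS paths, default Kubernetes key prefix (/registry).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.10:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/client.crt \
  --key=/etc/etcd/client.key \
  del /registry --prefix
```

Running something like this between kubeadm reset and the next kubeadm init should stop the stale objects from reappearing.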
> For kubeadm to clean up /registry on kubeadm reset when an external etcd is used?

No! This can be dangerous! What about a flag to be used with kubeadm reset to delete all etcd state?
@aanm Exactly, that's why we don't do it ;)!
> What about a flag to be used with kubeadm reset to delete all etcd state?

I'd be fine with that if you send a PR.
Does etcd v3 support recursive deletes?
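For context: etcd v3 drops the v2 directory tree in favor of a flat keyspace, so there is no recursive delete as such, but a range delete over a key prefix is the equivalent. A minimal sketch, reusing the placeholder endpoint from above:

```
# Range delete over a prefix: the v3 counterpart of etcdctl rm --recursive from the v2 API.
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.10:2379 del /registry --prefix
# The command prints the number of keys deleted; confirm nothing is left under the prefix:
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.10:2379 get /registry --prefix --keys-only
```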
I think we should just document this.
@jamiehannaford @aanm up for a quick PR mentioning that it's your responsibility to clean up etcd (if needed) if you're running it externally?
SGTM, I'll try to rustle up something this afternoon.
This is fixed now, thanks @jamiehannaford!