Describe the bug
Stopping k3s does not kill the running containerd containers, nor does it clean up the veth, cni0, and flannel.1 network interfaces.
To Reproduce
Run k3s server, then stop it.
Expected behavior
The containers should go away, along with the veth, cni0, and flannel.1 interfaces.
Additional context
Maybe a k3s server stop command, or something along those lines, would be nice.
This is what I currently do:
pkill containerd-shim
ip link show | grep veth | awk '{ print $2 }' | cut -d\@ -f1 | xargs -I{} ip link delete {}
ip link delete cni0
ip link delete flannel.1
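The manual steps above can be wrapped in a single script. A minimal sketch (the DRY_RUN switch is hypothetical and not part of k3s; actually deleting links requires root):

```shell
#!/bin/sh
# Hypothetical cleanup script combining the manual steps above.
# DRY_RUN defaults to 1 and only prints the commands; run with
# DRY_RUN=0 as root to actually execute them.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Stop any leftover container shim processes.
run pkill containerd-shim

# Each pod veth shows up as e.g. "12: veth1a2b3c@if4: <...>";
# field 2 minus the "@..." suffix is the device name.
for veth in $(ip link show 2>/dev/null | grep veth | awk '{ print $2 }' | cut -d@ -f1); do
    run ip link delete "$veth"
done

# Remove the CNI bridge and the flannel VXLAN device.
run ip link delete cni0
run ip link delete flannel.1
```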
Having those continue after the server is stopped is a sensible default. For example, if you are upgrading the k3s binary and restart the server process, you don't necessarily want all the pods etc. to come down too (particularly when multi-node, as killing the master process definitely shouldn't tear everything down on every node). Essentially, when you stop the process you are running without a master (or a kubelet in the agent-only case), so new changes can't be made, but all existing resources continue until the master returns.
As you mentioned, perhaps an option to dismantle everything set up by the local agent would be useful, without uninstalling k3s, but I don't think that should be the default. The uninstall script appears to do this cleanup properly, except for a few leftover /pause processes, which is a different issue.
This is a general containerd issue. I honestly don't have a great solution for it yet, but it does bother me quite a bit too. Basically I'd like to see a k3s cleanup command.
Yep, something like a cleanup. I'll try my luck this weekend. The good thing is that all the containers are within the same containerd namespace, so it would be kind of easy to pinpoint them.
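Since everything lives in one containerd namespace, the containers could be enumerated and killed with ctr. A rough sketch, assuming the socket path and k8s.io namespace that k3s's embedded containerd uses by default (both may differ across versions):

```shell
#!/bin/sh
# Sketch: kill every task in k3s's containerd namespace.
# SOCK and NS are assumptions based on k3s defaults and may differ.
SOCK=/run/k3s/containerd/containerd.sock
NS=k8s.io

# Skip gracefully if ctr is not on the PATH.
if command -v ctr >/dev/null 2>&1; then
    # -q prints container IDs only; a task's ID matches its container ID.
    for id in $(ctr -a "$SOCK" -n "$NS" containers list -q); do
        ctr -a "$SOCK" -n "$NS" tasks kill -s SIGKILL "$id"
    done
fi
```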
As a temporary solution, if you are using systemd, the new install script we are testing for rancher/k3s/issues/65 should provide a better uninstall script for cleaning up.