Please provide the following details:
Environment: linux
Minikube version (use minikube version): v0.22.3
What happened:
I want to delete minikube and stop all k8s containers, but I'm unable to do it.
I've run minikube delete, but it didn't help. Every time I run docker stop $(docker ps -q --filter name=k8s); docker rm $(docker ps -aq --filter name=k8s), it spins up new containers.
What you expected to happen:
To be able to stop all containers, clean up the images from my local Docker, and remove minikube.
Can you check the status of the systemd unit running localkube? It seems this process is still running the kubelet and re-initializing those containers:
systemctl status localkube
friendly ping @stychu
@aaron-prindle That's right. localkube is active and looks like it keeps the containers alive. I'm new to minikube, but I expected minikube stop to actually stop everything.
minikube version: v0.24.1
docker version: 17.09.1-ce
I was wrong: after minikube stop, ps aux | grep localkube returns nothing. But all the containers are still up and running.
Had the same problem. The containers are started by the _localkube_ service. Running
systemctl stop localkube
docker rm -f $(docker ps -aq --filter name=k8s)
just worked fine for me
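For anyone on the old VM/localkube setup, the steps above can be collected into one hedged sketch. This is not an official minikube procedure; the cleanup_localkube name is mine, and it assumes a systemd-managed localkube unit as described in this thread:

```shell
# Sketch only: assumes systemd and a unit named "localkube" (per this thread).
cleanup_localkube() {
  # Stop the process that keeps re-creating the pods.
  sudo systemctl stop localkube
  # Optionally keep it from coming back at boot.
  sudo systemctl disable localkube
  # With localkube stopped, the k8s containers now stay gone.
  docker rm -f $(docker ps -aq --filter name=k8s)
  # Finally remove the minikube VM/profile itself.
  minikube delete
}
```

Defining it as a function lets you review the steps before running them; call cleanup_localkube when you're ready.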
Stopping localkube and removing the images works, but after restarting localkube it spits out errors like this:
Apr 06 15:52:49 redacted.hostname localkube[30493]: E0406 15:52:49.381347 30493 kubelet_volumes.go:128] Orphaned pod "5fe08a97-39c5-11e8-a0ee-080027c9d234" found, but volume paths are still present on disk : There were a total of 3 errors similar to this. Turn up verbosity to see them.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/close
@stychu: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Found a hack:
Enter the minikube shell first, then use the poweroff command :)
$ minikube ssh
$ sudo poweroff
hi, having same issue with minikube v1.9.0
minikube start --driver=docker
minikube status
minikube stop
...
docker ps
>>> here are k8s containers
docker kill $(docker ps -q)
>>> here are k8s containers AGAIN
This seems like a separate bug. You need to run minikube delete to make the containers disappear.
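For the docker-driver case above, the full teardown can be sketched as follows. This is an assumption based on the reported behavior in minikube v1.9+ (kill alone doesn't help because the node container restarts things); docker_driver_teardown is a hypothetical helper name, not a minikube command:

```shell
# Sketch only: docker driver, minikube v1.9+ assumed.
docker_driver_teardown() {
  # Stops the cluster inside the node container.
  minikube stop
  # Removes the node container and the profile, so nothing
  # restarts the k8s containers afterwards.
  minikube delete
  # Verify: this should now list no k8s containers.
  docker ps --filter name=k8s
}
```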