Minikube 0.26.0 installed successfully, but there is a serious problem. On my Ubuntu 16.04 VirtualBox VM, I had pulled the k8s images for the minikube dashboard. After running `sudo -E minikube start --vm-driver=none`, all of my Docker images, including private images such as Hyperledger Fabric, were deleted without any notification!
Correction: the Docker images are deleted when running `minikube dashboard`, not `minikube start`.
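Until the root cause is fixed, a defensive workaround (my own approach, not an official fix; the image names below are just examples) is to export anything important with `docker save` before starting minikube and re-import it with `docker load` if it gets reaped:

```sh
# Export images that must survive a minikube run
# (the image names here are placeholders; use your own).
docker save -o fabric-images.tar \
  hyperledger/fabric-peer:x86_64-1.1.0 \
  hyperledger/fabric-orderer:x86_64-1.1.0

# ...run minikube start, minikube dashboard, etc...

# Restore anything that was garbage-collected
docker load -i fabric-images.tar
```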
Yeah, I had something like this as well, except in my case it didn't happen when starting the dashboard or starting minikube.
I reverted to 0.25 and everything went back to normal.
I am using 0.25.0. Yesterday, all my local Docker images were deleted too. Strange.
I am also seeing this, although I'm using minikube v0.28.0. Any update on this issue?
My guess is that this is the kubelet image garbage collector. It's probably unavoidable when using the none driver - the kubelet assumes full control over the machine and docker daemon.
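If that theory is right, one mitigation to try (an untested sketch; it assumes minikube's `--extra-config=kubelet.<flag>` passthrough accepts the kubelet's `--image-gc-high-threshold` flag) is to raise the image GC trigger so it effectively never fires:

```sh
# The kubelet starts reaping images at 85% disk usage by default.
# Setting the high threshold to 100 effectively disables image GC.
sudo -E minikube start --vm-driver=none \
  --extra-config=kubelet.image-gc-high-threshold=100
```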
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
It would be nice if kubeadm, or at least ourselves, could detect this possibility and prompt the user rather than quietly deleting everything.
Can we coordinate with https://github.com/kubernetes/kubernetes/issues/68930 to find a solution?
> It would be nice if kubeadm, or at least ourselves, could detect this possibility and prompt the user rather than quietly deleting everything.
We added some code to the kubelet so that a user can specify a list of images the garbage collector will ignore; maybe that can fix this:
https://github.com/kubernetes/kubernetes/pull/68549
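In the meantime, anyone who wants to confirm the GC theory locally can check how close the Docker storage filesystem is to the kubelet's default thresholds (85% usage to start reaping, reclaiming down to 80%); the path below assumes the default Docker data root:

```sh
# Image GC triggers at --image-gc-high-threshold (default 85%)
# and frees space down to --image-gc-low-threshold (default 80%).
df -h /var/lib/docker
```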
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Don't stale this
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
FWIW, if this was indeed caused by the kubelet reaping disk space, I believe this was fixed by #3671, which is part of minikube v0.34.0. If this is not the case, please re-open. Thanks!