minikube 0.31.0
driver: kvm2
No idea how the VM got into the "Paused" state, but minikube hasn't been able to start or stop it since:
$ minikube status
minikube: Paused
cluster:
kubectl:
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E1219 14:48:14.263048 106732 start.go:168] Error starting host: Error starting stopped host: Error creating VM: virError(Code=55, Domain=10, Message='Requested operation is not valid: domain is already running').
Retrying.
E1219 14:48:14.263711 106732 start.go:174] Error starting host: Error starting stopped host: Error creating VM: virError(Code=55, Domain=10, Message='Requested operation is not valid: domain is already running')
$ minikube status
minikube: Paused
cluster:
kubectl:
$ minikube stop
Stopping local Kubernetes cluster...
Error stopping machine: Error stopping host: minikube: stopping vm: virError(Code=55, Domain=10, Message='Requested operation is not valid: domain is not running')
$ minikube status
minikube: Paused
cluster:
kubectl:
It's likely your host ran out of disk space and QEMU automatically paused/suspended the guest.
See https://qemu.weilnetz.de/doc/qemu-doc.html
werror=action,rerror=action
Specify which action to take on write and read errors. Valid actions are: "ignore" (ignore the error and try to continue), "stop" (pause QEMU), "report" (report the error to the guest), "enospc" (pause QEMU only if the host disk is full; report the error to the guest otherwise). The default setting is werror=enospc and rerror=report.
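On the qemu command line these map onto sub-options of -drive, e.g. (illustrative values only, not minikube's actual invocation):
$ qemu-system-x86_64 ... -drive file=disk.img,format=raw,werror=enospc,rerror=report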
Once you've freed up space you can do:
$ virsh -c qemu:///system resume minikube
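To confirm that's what happened before resuming, you can check the pause reason and how much space is left on the filesystem holding the disk image (the path below is the kvm2 driver's usual machine directory and may differ on your setup):
$ virsh -c qemu:///system domstate --reason minikube
# should report something like: paused (ioerror)
$ df -h ~/.minikube/machines/minikube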
Depending on the behavior of other drivers, I guess this issue should either be closed or re-titled to "minikube stop should handle a paused/suspended KVM VM". A rough sketch of what that could mean is below.
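For what it's worth, a hypothetical shell equivalent of such handling (not minikube's actual code): resume a paused domain before asking it to shut down, assuming the kvm2 domain is named minikube:
$ if [ "$(virsh -c qemu:///system domstate minikube)" = "paused" ]; then
>   virsh -c qemu:///system resume minikube
> fi
$ virsh -c qemu:///system shutdown minikube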
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.