Is this a BUG REPORT or FEATURE REQUEST? (choose one):
This is a bug.
Minikube version (use minikube version):
$ minikube version
minikube version: v0.16.0
Environment:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName):
$ cat ~/.minikube/machines/minikube/config.json | grep DriverName
"DriverName": "kvm",
ISO version (cat ~/.minikube/machines/minikube/config.json | grep ISO):
$ cat ~/.minikube/machines/minikube/config.json | grep ISO
"ISO": "/home/lehins/.minikube/machines/minikube/boot2docker.iso",
What happened:
When using the KVM driver, minikube starts fine and works as expected, but it cannot be started again after it has been stopped.
What you expected to happen:
I'd expect it to start back up. The same scenario was tested with the VirtualBox driver, which works as expected: start -> stop -> start back up again.
How to reproduce it (as minimally and precisely as possible):
$ minikube start --vm-driver=kvm
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
lehins@lehins-HP:~/test-minikube$ minikube dashboard
Opening kubernetes dashboard in default browser...
lehins@lehins-HP:~/test-minikube$ Created new window in existing browser session.
lehins@lehins-HP:~/test-minikube$ minikube status
minikubeVM: Running
localkube: Running
lehins@lehins-HP:~/test-minikube$ minikube stop --v=7 --logtostderr
W0209 00:28:47.878670 19776 root.go:149] Error reading config file at /home/lehins/.minikube/config/config.json: open /home/lehins/.minikube/config/config.json: no such file or directory
Stopping local Kubernetes cluster...
Found binary path at /usr/local/bin/docker-machine-driver-kvm
Launching plugin server for driver kvm
Plugin server listening at address 127.0.0.1:35525
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
Stopping "minikube"...
(minikube) Calling .GetState
(minikube) DBG | Getting current state...
(minikube) DBG | Fetching VM...
(minikube) Calling .Stop
(minikube) DBG | Stopping VM minikube
(minikube) DBG | Getting current state...
(minikube) DBG | Getting current state...
(minikube) DBG | VM state: Stopped
(minikube) Calling .GetState
(minikube) DBG | Getting current state...
Machine "minikube" was stopped.
Machine stopped.
Making call to close driver server
(minikube) Calling .Close
Successfully made call to close driver server
Making call to close connection to plugin binary
(minikube) DBG | Closing plugin on server side
Up to this point everything works as expected, but starting it back up is a problem:
lehins@lehins-HP:~/test-minikube$ minikube start --vm-driver=kvm --v=7 --logtostderr
W0209 00:31:46.841489 20550 root.go:149] Error reading config file at /home/lehins/.minikube/config/config.json: open /home/lehins/.minikube/config/config.json: no such file or directory
Starting local Kubernetes cluster...
I0209 00:31:46.841735 20550 cluster.go:78] Machine exists!
Found binary path at /usr/local/bin/docker-machine-driver-kvm
Launching plugin server for driver kvm
Plugin server listening at address 127.0.0.1:37943
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetState
(minikube) DBG | Getting current state...
(minikube) DBG | Fetching VM...
I0209 00:31:46.856169 20550 cluster.go:85] Machine state: Stopped
(minikube) Calling .Start
(minikube) DBG | Starting VM minikube
(minikube) DBG | GetIP called for minikube
(minikube) DBG | Failed to retrieve dnsmasq leases from /var/lib/libvirt/dnsmasq/docker-machines.leases
(minikube) DBG | IP address: 192.168.42.196
(minikube) DBG | Unable to locate IP address for MAC 52:54:00:86:6c:b8
(minikube) Calling .GetConfigRaw
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) DBG | GetIP called for minikube
(minikube) DBG | Failed to retrieve dnsmasq leases from /var/lib/libvirt/dnsmasq/docker-machines.leases
(minikube) DBG | IP address: 192.168.42.196
(minikube) DBG | Unable to locate IP address for MAC 52:54:00:86:6c:b8
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) DBG | AK: resolvestorepath: /home/lehins/.minikube
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
(minikube) DBG | AK: resolvestorepath: /home/lehins/.minikube
Using SSH client type: external
Using SSH private key: /home/lehins/.minikube/machines/minikube/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@192.168.42.196 -o IdentitiesOnly=yes -i /home/lehins/.minikube/machines/minikube/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : Something went wrong running an SSH command!
command : exit 0
err : exit status 255
output :
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) DBG | GetIP called for minikube
(minikube) DBG | Failed to retrieve dnsmasq leases from /var/lib/libvirt/dnsmasq/docker-machines.leases
(minikube) DBG | IP address: 192.168.42.196
(minikube) DBG | Unable to locate IP address for MAC 52:54:00:86:6c:b8
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) DBG | AK: resolvestorepath: /home/lehins/.minikube
(minikube) Calling .GetSSHUsername
(minikube) DBG | AK: resolvestorepath: /home/lehins/.minikube
Using SSH client type: external
Using SSH private key: /home/lehins/.minikube/machines/minikube/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@192.168.42.196 -o IdentitiesOnly=yes -i /home/lehins/.minikube/machines/minikube/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : Something went wrong running an SSH command!
command : exit 0
err : exit status 255
output :
... (repeats indefinitely)
A screenshot of the VM is also attached, just in case:

Anything else do we need to know:
I'm pretty sure this issue is fixed by a PR I just sent: https://github.com/dhiltgen/docker-machine-kvm/pull/33
The issue was that after a stop, the minikube VM comes back up with a different IP, so there are now two entries for its MAC address in the dnsmasq leases file. The driver incorrectly picks the first entry it finds, even though the leases file should be parsed as a log, i.e., the last matching entry is the correct one.
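To illustrate the intended behavior (just a sketch of the logic, not the actual driver code; it assumes the standard dnsmasq lease layout of expiry, MAC, IP, hostname per line, and reuses the MAC and leases path from the log above):
# print the IP of the most recent lease for the VM's MAC: the last matching line wins
awk -v mac='52:54:00:86:6c:b8' '$2 == mac { ip = $3 } END { print ip }' /var/lib/libvirt/dnsmasq/docker-machines.leases
The fix amounts to the same thing: keep scanning and take the last lease recorded for the VM's MAC instead of returning on the first match.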
I patched this issue in the vendored version of the driver at minikube HEAD. You should be able to start/stop after downloading the CI build of minikube and running:
minikube config set use-vendored-driver true
You can grab the minikube build you need here: https://storage.googleapis.com/minikube-builds/1050/minikube-linux-amd64
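Putting it together, something like the following should work (a rough sketch; the install path and whether you overwrite your existing minikube binary are up to you):
# rough sketch: grab the CI build, make it executable, and enable the vendored KVM driver
curl -Lo minikube https://storage.googleapis.com/minikube-builds/1050/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/minikube
minikube config set use-vendored-driver true
minikube start --vm-driver=kvm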
Alternatively, you can wait for a docker-machine-driver-kvm release that includes this fix.
Let me know if that fixes the issue for you.
Damn, that's quite some timing for a bug report and a fix. Using CI build 1050 with use-vendored-driver worked like a charm. I hate to ask, but it looks like it will be a while before this fix gets into a new release of docker-machine-kvm, considering it's been almost a year since the last release. Am I wrong about that?
Yeah - that's part of the reason why we are starting to vendor in the drivers. This will allow us to patch or vendor in a fork, and more closely control the behavior of these drivers.
If all goes well, eventually --use-vendored-drivers will disappear and become the default. That way, you won't need to download or maintain a separate binary.
@r2d4 I can also confirm that your fix b9a115b3 resolves the issue of the VM becoming unreachable after stopping and starting it, as mentioned in https://github.com/kubernetes/minikube/issues/951#issuecomment-275350365. I had to run "minikube config set use-vendored-driver true" to explicitly make use of the vendored KVM driver. Thanks a lot!
One question: why does libvirt assign a different IP address after each restart? AFAIK its "custom lease file" (*.status) records a different client-id every time, which causes the IP address to change as well, but I couldn't figure out why.
I still see this issue using minikube v0.17.1, released 3 March. I assume it already includes the patch that fixes this issue? I also tried the build from @r2d4 with "use-vendored-driver true" set; however, with both methods I still get the hang on stop/start.
I found that build 1050 does eventually work: it tries to SSH a few times and succeeds at some point. With the latest release version, SSH still fails indefinitely for me. I would use 1050 for now, but it does not have the minikube mount command, which I require. Is there any chance you can build the latest release with the vendored KVM driver? That would fix my hanging issue and give me minikube mount support.
So the latest release of minikube removed the built-in KVM driver, but the latest release of docker-machine-kvm does have the patch merged, so in theory this issue should be fixed with the latest releases of both components, but it is not. I still see this issue using the latest releases of both.
Experiencing this issue now as well, including the prolonged SSH retry cycle, which does eventually succeed.
Arch Linux
minikube v0.18.0
docker-machine-kvm 0.7.0-2
both installed from the AUR
@lukeab this issue is fixed in the latest KVM driver.
I think this issue can be closed. Feel free to open a new issue if you're still getting this with the latest version.
What commit is meant to have fixed this? This one?
https://github.com/dhiltgen/docker-machine-kvm/commit/cdad5d3227d21c844ef87d96f46e51e3cc08ba8d
I still had the issue with version 0.8.2, and I cannot see what in 0.10.0 is supposed to have resolved it.
I switched away from minikube, and I'm not going to go back to using it until SSH and mounting work reliably.
For me, upgrading docker-machine-driver-kvm from the previously tested 0.7.0 to 0.8.2 solved the issue.
OS Fedora 25
Minikube 0.18.0
sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.8.2/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
It should be fixed by commit https://github.com/dhiltgen/docker-machine-kvm/commit/e5724f214cdd24b53cd0e135422f2af04ea50d46, which is included in both v0.8.2 and v0.10.0.
If it doesn't work even with the fix, there must be another cause. Double-check which version of docker-machine-driver-kvm you are actually running.
Hello,
Minikube v0.19.0
Ubuntu 16.04
Docker-machine-driver-kvm
I still need to delete the minikube cluster between restarts.