Minikube: kvm2: Machine didn't return an IP after 120 seconds

Created on 21 Jan 2019  ·  19 Comments  ·  Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Please provide the following details:

Environment: VMware Workstation 11

Minikube version: v0.33.1

  • OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
  • VM Driver: "DriverName": "kvm2",
  • ISO version:
"Boot2DockerURL": "file:///home/cmdpwnd/.minikube/cache/iso/minikube-v0.33.1.iso",
        "ISO": "/home/cmdpwnd/.minikube/machines/minikube/boot2docker.iso",
  • Install tools: N/A
  • Others:
VMware Workstation 11:
    Enable VM Settings/CPU/Virtualize Intel VT-x/EPT or AMD-V/RVI
    Enable VM Settings/CPU/Virtualize CPU performance counters

What happened:
E0121 11:39:38.385150 1862 start.go:205] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds.
What you expected to happen:
Success??
How to reproduce it: Copy/Paste will do

sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system
newgrp libvirtd
sudo adduser $(whoami) libvirtd
sudo adduser $(whoami) libvirt
sudo adduser $(whoami) libvirt-qemu
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
sudo install docker-machine-driver-kvm2 /usr/local/bin/
sudo chown -R $(whoami):libvirtd /var/run/libvirt
sudo systemctl restart libvirtd
virsh --connect qemu:///system net-start default
minikube start -v9 --vm-driver kvm2
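One pitfall in the steps above: adduser changes only take effect in a new login session, and newgrp only affects the shell it spawns. A quick sanity sketch (it only reports and never fails; all three group names from the steps above are probed, since names vary by distro) can confirm the membership actually took before the final minikube start:

```shell
# Probe current-session membership in the groups added above.
for grp in libvirtd libvirt libvirt-qemu; do
  if id -nG | tr ' ' '\n' | grep -qx "$grp"; then
    echo "member of $grp"
  else
    echo "NOT yet a member of $grp (re-login after adduser)"
  fi
done
```

If any group shows as missing, log out and back in (or reboot) before running minikube start.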

Anything else we need to know?: If you don't clear the minikube state after the initial failure, expect this on rerun:

Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
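The handshake error above comes from reusing stale machine files and SSH keys from the failed create. A minimal cleanup sketch before retrying (assuming the default ~/.minikube location; adjust if MINIKUBE_HOME is set):

```shell
# Remove the failed machine so the next run generates fresh SSH keys.
# "|| true" keeps this safe to run even when no cluster exists.
minikube delete || true
rm -rf "$HOME/.minikube/machines/minikube"
```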
Labels: cause/firewall-or-proxy co/kvm2-driver kind/support lifecycle/frozen

All 19 comments

I'm pretty confident this is a kvm/libvirt issue that we should be able to detect, but don't know how to yet. Probably also related to the use of nested VM's. Have you tried running minikube outside of VMware workstation?

https://fedoraproject.org/wiki/How_to_debug_Virtualization_problems has some guidance on debugging kvm/libvirt issues, but I am especially curious what this command emits:

virt-host-validate

along with:

virsh net-list --all

Running nested virtualization causes, in general, more headaches than needed. For minishift we had several of these reports, and most of them failed to work properly.

--

Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]

@tstromberg I'd read through those related minishift issues before opening an issue here. I've replicated my scenario with stretch installed on bare metal using the info in this issue, and have already ruled out VMware as the root cause, hence the initial title change. This is hardware-agnostic and could be specific to Debian 9. To my knowledge (not sure), kvm2 is the only driver that works with Debian 9 and minikube.

For additional info though: (same output regardless of virtualization on Intel)

I'm not worried about IOMMU because there's no need for a passthrough device.

cmdpwnd@debian:~$ virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
cmdpwnd@debian:~$
cmdpwnd@debian:~$ cat /etc/default/grub | grep intel
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
cmdpwnd@debian:~$
cmdpwnd@debian:~$ virsh --connect qemu:///system net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes

cmdpwnd@debian:~$

@cmdpwnd - thanks for the update! Do you mind running a few commands to help narrow this issue down a bit further?

I use kvm2 on Debian every day, but I suspect we have some environmental differences. First, let's get the virsh version:

virsh --version
// my output: 4.10.0

We can roughly emulate the path the kvm driver uses to determine the IP address by first finding the name of the bridge for minikube-net; it's probably virbr1:

virsh --connect qemu:///system dumpxml minikube | grep minikube-net
// my output: <source network='minikube-net' bridge='virbr1'/>

From there, the kvm driver (even libvirt upstream!) parses dnsmasq status (?!?!?) to get the IP address of the interface from the bridge name we just discovered:

grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
// my output: "ip-address": "192.168.39.150",

It seems like there should be a more straightforward way to do this with more recent releases of libvirt, since virsh has no problem with displaying the IP address here:

sudo virsh domifaddr minikube

// my output:

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      70:16:ec:d1:8c:51    ipv4         192.168.122.182/24
 vnet1      a0:3d:49:b1:84:02    ipv4         192.168.39.150/24

If you don't mind repeating the same commands, I think I can figure out how to improve the kvm driver to do the correct thing here.
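The grep-based lookup described above can be exercised against a sample status file, which makes the driver's parsing step visible without a live VM. The file contents here are a fabricated example shaped like libvirt's per-bridge dnsmasq status output; the IP and MAC match the values shown earlier in this comment:

```shell
# Sample of the JSON status file libvirt's dnsmasq writes per bridge.
cat > /tmp/virbr1.status <<'EOF'
[
  {
    "ip-address": "192.168.39.150",
    "mac-address": "a0:3d:49:b1:84:02",
    "hostname": "minikube"
  }
]
EOF

# Extract the lease IP the same way as above, then isolate the value.
grep -o '"ip-address": "[^"]*"' /tmp/virbr1.status | cut -d'"' -f4
# prints: 192.168.39.150
```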

@tstromberg alright, I think we're getting somewhere now 👍 . My networking is totally dead.

cmdpwnd@debian:~$ virsh --version
3.0.0
cmdpwnd@debian:~$ virsh --connect qemu:///system dumpxml minikube | grep minikube-net
      <source network='minikube-net' bridge='virbr1'/>

cmdpwnd@debian:~$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
#The file is empty, same for virbr0 (virsh network "default")

cmdpwnd@debian:~$ sudo virsh domifaddr minikube
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
cmdpwnd@debian:~$

cmdpwnd@debian:~$ sudo cat /var/lib/libvirt/dnsmasq/minikube-net.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit minikube-net
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
port=0
pid-file=/var/run/libvirt/network/minikube-net.pid
except-interface=lo
bind-dynamic
interface=virbr1
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=192.168.39.2,192.168.39.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/minikube-net.hostsfile
cmdpwnd@debian:~$

cmdpwnd@debian:~$ sudo cat /var/lib/libvirt/dnsmasq/default.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.2,192.168.122.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
cmdpwnd@debian:~$
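Since virbr1.status is empty here, dnsmasq never recorded a DHCP lease for the VM. A diagnostic sketch for this state (run on the libvirt host; the file paths are taken from the conf dumps above) checks whether the dnsmasq instance for minikube-net is even alive and whether any lease data was ever written:

```shell
# Is a dnsmasq process serving minikube-net at all?
pgrep -af 'dnsmasq.*minikube-net' || echo "no dnsmasq running for minikube-net"

# Did it ever write lease/host data? (paths from the conf files above)
for f in /var/lib/libvirt/dnsmasq/virbr1.status \
         /var/lib/libvirt/dnsmasq/minikube-net.hostsfile; do
  [ -s "$f" ] && echo "$f has data" || echo "$f is empty or missing"
done
```

If no dnsmasq process exists for the network, restarting libvirtd and the network (virsh net-destroy/net-start minikube-net) is the usual next step.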

Hi, I have the same problem. This is my output:

$ virsh --version
4.6.0
$ virsh --connect qemu:///system dumpxml minikube | grep minikube-net
     <source network='minikube-net' bridge='virbr2'/>
$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
#The file is empty
$ sudo virsh domifaddr minikube
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 #empty

@cmdpwnd Did you resolve this problem? If yes, can you tell me how?

My logs for command:

$ minikube start --vm-driver kvm2 -v 8 --alsologtostderr
logs.txt

I'm running into this same issue on Ubuntu 18.10. I'm going to dump all my information to compare against what everyone else is experiencing.

Environment:

Distributor ID: Ubuntu
Description:    Ubuntu 18.10
Release:    18.10
Codename:   cosmic

Minikube Version: 0.34.1

VM Driver: kvm2

virt-host-validate:

  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI IVRS table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

virsh net-list --all:

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes

virsh --version: 4.6.0

virsh --connect qemu:///system dumpxml minikube | grep minikube-net:

virsh --connect qemu:///system dumpxml minikube | grep minikube-net

grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status: 192.168.39.230

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      28:82:76:da:78:91    ipv4         192.168.122.31/24
 vnet1      6c:a2:16:03:e1:54    ipv4         192.168.39.230/24

Hopefully this adds some more datapoints to help us figure out what's going on here

To add another datapoint, I was running into the same issue (how I got here), but in the process of reproducing, it magically fixed itself (the worst kind of fix!).

Environment: VMware Fusion 10.1.5 running on macOS 10.14.3

Minikube version: v0.35.0

OS:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

VM Driver: "DriverName": "kvm2"

Others:

VMware Workstation 11:
    Enable VM Settings/CPU/Virtualize Intel VT-x/EPT or AMD-V/RVI
    Enable VM Settings/CPU/Virtualize CPU performance counters
    Enable IO MMU

Initial failure was exactly the same as @cmdpwnd 's.

Additional info:

spatel@vm-yelp:~$ virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
spatel@vm-yelp:~$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes
spatel@vm-yelp:~$ virsh --version
4.0.0
spatel@vm-yelp:~$ virsh --connect qemu:///system dumpxml minikube | grep minikube-net
      <source network='minikube-net' bridge='virbr1'/>
spatel@vm-yelp:~$ grep ip-address /var/lib/libvirt/dnsmasq/virbr1.status
    "ip-address": "192.168.39.242",
spatel@vm-yelp:~$ sudo virsh domifaddr minikube
[sudo] password for spatel: 
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      [redacted]        ipv4         192.168.122.146/24
 vnet1      [redacted]        ipv4         192.168.39.242/24
spatel@vm-yelp:~$ sudo cat /var/lib/libvirt/dnsmasq/minikube-net.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit minikube-net
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
user=libvirt-dnsmasq
port=0
pid-file=/var/run/libvirt/network/minikube-net.pid
except-interface=lo
bind-dynamic
interface=virbr1
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=192.168.39.2,192.168.39.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/minikube-net.hostsfile
spatel@vm-yelp:~$ sudo cat /var/lib/libvirt/dnsmasq/default.conf
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
user=libvirt-dnsmasq
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.2,192.168.122.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts

So how did things magically start working? After a fresh reboot of the VM, a minikube delete, and then the start:

$ minikube start --vm-driver kvm2 -v 8 --alsologtostderr

output.log

spatel@vm-yelp:~$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.242

Another reproduction here; I checked all the virsh outputs above and got similar results:

[...]
(minikube) DBG | Waiting for machine to come up 19/40
(minikube) DBG | Waiting for machine to come up 20/40
[... identical lines through 39/40 ...]
(minikube) DBG | Waiting for machine to come up 40/40
I0424 17:06:53.580893   28159 start.go:384] StartHost: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds
I0424 17:06:53.580971   28159 utils.go:122] non-retriable error: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds
W0424 17:06:53.581310   28159 exit.go:99] Unable to start VM: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds

💣  Unable to start VM: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
Command exited with non-zero status 70
0.12user 0.02system 2:07.16elapsed 0%CPU (0avgtext+0avgdata 29468maxresident)k
16inputs+16outputs (0major+2903minor)pagefaults 0swaps

real    2m7.165s
user    0m0.128s
sys 0m0.031s

tstromberg commented on Jan 30:
> @cmdpwnd - thanks for the update! Do you mind running a few commands to help narrow this issue down a bit further?

To @tstromberg: I don't mind running more debugging commands, but do you (or any other developer) have an update on this?

If you run into this, please try upgrading to the most recent kvm2 machine driver and report back:

curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/

Then run minikube delete to remove the old state.

Updating to 1.1.0 did not resolve the issue for me. I wish I could add something useful to what's already been described here, but my situation is pretty much exactly the same as what's described above.

Edit: The root cause was conflicting nftables rules, so the quick fix was simply to run nft flush ruleset. Minikube came up with no problems after doing so.

Also having this issue as described with v1.2.0 of the driver.

I was having the same issue (even with kvm driver v1.2.0). It turns out that for me, too, conflicting nftables rules were at fault, and sudo nft flush ruleset fixed the issue.

Now I just have to figure out what rules to add to /etc/nftables.conf to solve this properly.

I found out what was causing problems in my config. The rules in my nftables input chain were dropping packets coming from the minikube network interfaces. I had to add

iifname "virbr1" counter return
iifname "virbr0" counter return

to fix that.

Here's the full set of rules in my input chain, if anyone's interested:

table inet filter {
  chain input {
    type filter hook input priority 0;

    # allow established/related connections
    ct state {established, related} accept

    # early drop of invalid connections
    ct state invalid counter drop

    # allow from loopback
    iifname lo accept

    # allow icmp
    ip protocol icmp accept
    ip6 nexthdr icmpv6 accept

    # allow ssh
    tcp dport ssh accept

    # don't clash with minikube
    iifname "virbr1" counter return
    iifname "virbr0" counter return

    # everything else
    counter
    reject with icmpx type port-unreachable
  }
}

I also made sure to use iptables-nft instead of iptables-legacy (I installed iptables-nft on Arch Linux, which replaces iptables) to rule out conflicts between iptables and nftables.
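To check for this class of conflict without flushing everything, one can look for rules touching the libvirt bridges first. A read-only sketch (listing the ruleset needs root or CAP_NET_ADMIN, so it degrades to a notice otherwise):

```shell
# List any nftables rules that mention the libvirt bridges. A drop/reject
# rule matching virbr0/virbr1 input is the failure mode described above.
nft list ruleset 2>/dev/null | grep -n 'virbr' \
  || echo "no virbr rules visible (no rules, not root, or nft missing)"
```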

It seems to be an issue with VMware nested virtualization. I got it working on VMware Player 15 on Windows 10 1903 with these settings in the .vmx file:

vhv.enable = "TRUE"
vpmc.enable = "TRUE"
vvtd.enable = "TRUE"

But if the vpmc.enable = "TRUE" option is removed from the .vmx, or the hypervisor.cpuid.v0 = "FALSE" option is present, minikube fails to start.

I also tested the minikube ISO on libvirt using this command, and it shows the same behaviour:

virt-install --virt-type=kvm --name=test --ram 2048 --vcpus=1 --virt-type=kvm --hvm --cdrom  ~/.minikube/cache/iso/minikube-v1.3.0.iso --network network=default --disk pool=default,size=20,bus=virtio,format=qcow2 

Reference:
https://fabianlee.org/2018/08/27/kvm-bare-metal-virtualization-on-ubuntu-with-kvm/
https://communities.vmware.com/docs/DOC-8970

minikube v1.4 now gives a little more documentation around this, but I'll leave this open for others who run into this support issue.

@cmdpwnd I will close this issue due to no updates; sorry to hear you had this problem. Please feel free to reopen.

Meanwhile, I recommend giving our newest driver a try in the latest release:

minikube start --vm-driver=docker

That might solve your issue!

Just FYI, I was able to work around this by restarting the libvirtd service.

sudo systemctl restart libvirtd.service

Without changing anything else, minikube started perfectly.
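Putting the thread's workarounds together, a recovery sketch (restart_and_retry is just a name for this sketch; it assumes systemd and the network names seen earlier in the thread, uses sudo -n to avoid hanging on a password prompt, and skips each step where the tool is absent):

```shell
# Restart libvirtd, make sure both libvirt networks are up, then retry.
restart_and_retry() {
  command -v systemctl >/dev/null 2>&1 && \
    sudo -n systemctl restart libvirtd.service 2>/dev/null
  if command -v virsh >/dev/null 2>&1; then
    virsh --connect qemu:///system net-start default      2>/dev/null
    virsh --connect qemu:///system net-start minikube-net 2>/dev/null
  fi
  command -v minikube >/dev/null 2>&1 && minikube start --vm-driver kvm2
  return 0
}
restart_and_retry
```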
