Minikube: kvm2 1.3.0 regression: Error dialing tcp via ssh client: dial tcp :22: connect: connection refused

Created on 7 Aug 2019 · 20 comments · Source: kubernetes/minikube

The exact command to reproduce the issue:

minikube start --vm-driver kvm2

The full output of the command that failed:

😄  minikube v1.3.0 on Ubuntu 18.04
🔥  Creating kvm2 VM (CPUs=8, Memory=12288MB, Disk=20000MB) ...
E0807 14:34:18.549366   23184 start.go:723] StartHost: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds

💣  Unable to start VM: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

💣  disable failed: [command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp :22: connect: connection refused]

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
ssh: dial tcp :22: connect: connection refused

The output of the minikube logs command:

💣  command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp :22: connect: connection refused

The operating system version:

Ubuntu 18.04, running kernel 5.0.0-23-generic.

Labels: kvm2, priority/backlog

All 20 comments

@andrebraitc first of all thank you for taking the time to create this issue.

I suspect there might be left over VMs.
do you mind providing me with the following outputs:

virsh -c qemu:///system list --all
and also

sudo virsh -c qemu:///system list --all

also virsh net-list

also sudo virsh net-list

and, most importantly, could you please provide:

docker-machine-driver-kvm2 version

I also suspect you might have an older kvm2 driver, since the new driver does a better job handling leftovers.
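The checks above can be bundled into one cleanup helper. This is a sketch of my own (not from the minikube docs), assuming the default domain/network names `minikube` and `minikube-net`:

```shell
# Sketch: clear leftover libvirt state so a fresh "minikube start" is
# not confused by a stale domain or network. Assumes the default names
# "minikube" and "minikube-net".
cleanup_leftovers() {
  local uri="qemu:///system"
  command -v virsh >/dev/null || { echo "virsh not installed"; return 0; }
  if sudo virsh -c "$uri" list --all --name | grep -qx minikube; then
    sudo virsh -c "$uri" destroy minikube 2>/dev/null || true  # ok if stopped
    sudo virsh -c "$uri" undefine minikube
  fi
  if sudo virsh -c "$uri" net-list --all | grep -q minikube-net; then
    sudo virsh -c "$uri" net-destroy minikube-net 2>/dev/null || true
    sudo virsh -c "$uri" net-undefine minikube-net
  fi
}
```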

@medyagh No problem :) I'm one of the parties interested in seeing it fixed, after all.

I will be able to provide those tomorrow. Meanwhile, I may have two questions that might be relevant here:

  1. Minikube is the only thing I have that uses libvirt, at the moment, and I always make sure I run minikube delete after I'm finished. Is a leftover VM something 1.2.0 can deal with but 1.3.0 can't? I tested multiple times and the result was always that 1.2.0 ran just fine and 1.3.0 would produce the output I sent.

  2. Our script always fetches both the minikube binary and the kvm2 driver binary every time we switch versions, so we guarantee both are in sync. I kept an eye on the process and I remember that the driver that was fetched was for 1.3.0 as well.

Good question! In fact, we have integration tests that ensure upgrading from one version to another runs smoothly. The only thing I was referring to was the docker-machine-driver-kvm2.

Unfortunately, the driver had issues that made deleting or recreating machines fail; that was fixed in the new driver (in minikube we ship our own kvm2 driver). When users upgrade minikube they often don't upgrade the driver, since the driver doesn't change often.

(We are in the process of adding more automation for installing drivers automatically.)

Got it. I will provide the requested data tomorrow.

Thanks!

Here we go:

  • virsh -c qemu:///system list --all
Id    Name                           State
----------------------------------------------------
  • sudo virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------
  • sudo virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
  • docker-machine-driver-kvm2 version
version: v1.3.0
commit: 43969594266d77b555a207b0f3e9b3fa1dc92b1f

And it fails to start up with the mentioned output. After that, the commands gave me the following output:

  • virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------
 2     minikube                       running
  • sudo virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------
 2     minikube                       running
  • sudo virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 minikube-net         active     yes           yes

We're fetching the binaries from:
https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-linux-amd64
https://storage.googleapis.com/minikube/releases/v1.3.0/docker-machine-driver-kvm2

I hope this helps
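For completeness, fetching a matched pair from those URLs can be scripted so the binary and driver can never drift apart. A sketch of mine (the `install_pair` helper is not part of minikube; the URLs are the release bucket quoted above):

```shell
# Sketch: fetch the minikube binary and kvm2 driver for a single
# release tag, then install both to /usr/local/bin.
VERSION="v1.3.0"
BASE="https://storage.googleapis.com/minikube/releases/${VERSION}"
install_pair() {
  curl -fsSLo minikube "${BASE}/minikube-linux-amd64"
  curl -fsSLo docker-machine-driver-kvm2 "${BASE}/docker-machine-driver-kvm2"
  chmod +x minikube docker-machine-driver-kvm2
  sudo install minikube docker-machine-driver-kvm2 /usr/local/bin/
}
```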

I just tested using the following combinations:

  • minikube 1.3.0 & docker-machine-driver-kvm2 1.2.0
    Result: FAIL

  • minikube 1.2.0 & docker-machine-driver-kvm2 1.3.0
    Result: OK

I think it's definitely something with minikube itself.

Try adding -v8 --alsologtostderr for more debugging output... ?
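That invocation, wrapped as a small helper that keeps a copy of the output for attaching to the issue (the helper name and log file are my own choices, not minikube conventions):

```shell
# Sketch: run minikube with verbose driver logging and save the output.
debug_start() {
  minikube start --vm-driver kvm2 -v=8 --alsologtostderr 2>&1 \
    | tee minikube-debug.log
}
```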

@afbjorklund here it is

I0809 14:47:18.807866   23597 start.go:223] hostinfo: {"hostname":"andre-wks","uptime":104232,"bootTime":1565250606,"procs":513,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.0.0-23-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"5d87d2ca-912e-4b7f-9dee-dfe499ef48d0"}
I0809 14:47:18.808676   23597 start.go:233] virtualization: kvm host
😄  minikube v1.3.0 on Ubuntu 18.04
I0809 14:47:18.809032   23597 downloader.go:59] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.3.0.iso
I0809 14:47:18.809270   23597 start.go:922] Saving config:
{
    "MachineConfig": {
        "KeepContext": false,
        "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.3.0.iso",
        "Memory": 12288,
        "CPUs": 8,
        "DiskSize": 20000,
        "VMDriver": "kvm2",
        "ContainerRuntime": "docker",
        "HyperkitVpnKitSock": "",
        "HyperkitVSockPorts": [],
        "DockerEnv": null,
        "InsecureRegistry": null,
        "RegistryMirror": null,
        "HostOnlyCIDR": "192.168.99.1/24",
        "HypervVirtualSwitch": "",
        "KVMNetwork": "default",
        "KVMQemuURI": "qemu:///system",
        "KVMGPU": false,
        "KVMHidden": false,
        "DockerOpt": null,
        "DisableDriverMounts": false,
        "NFSShare": [],
        "NFSSharesRoot": "/nfsshares",
        "UUID": "",
        "NoVTXCheck": false,
        "DNSProxy": false,
        "HostDNSResolver": true
    },
    "KubernetesConfig": {
        "KubernetesVersion": "v1.10.8",
        "NodeIP": "",
        "NodePort": 8443,
        "NodeName": "minikube",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "docker",
        "CRISocket": "",
        "NetworkPlugin": "",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "ExtraOptions": null,
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": false
    }
}
I0809 14:47:18.809464   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
I0809 14:47:18.809489   23597 cluster.go:93] Machine does not exist... provisioning new machine
I0809 14:47:18.809530   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v9.0 at /home/andre/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0
I0809 14:47:18.809529   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.1.12 at /home/andre/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.1.12
I0809 14:47:18.809544   23597 cluster.go:94] Provisioning machine with config: {KeepContext:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.3.0.iso Memory:12288 CPUs:8 DiskSize:20000 VMDriver:kvm2 ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true}
I0809 14:47:18.809564   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
I0809 14:47:18.809462   23597 cache_images.go:286] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/andre/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I0809 14:47:18.809462   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.10.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.10.8
🔥  Creating kvm2 VM (CPUs=8, Memory=12288MB, Disk=20000MB) ...
I0809 14:47:18.809500   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0809 14:47:18.809513   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/andre/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
I0809 14:47:18.809505   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.10.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.10.8
I0809 14:47:18.809497   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.10.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.10.8
I0809 14:47:18.809520   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.10.8 at /home/andre/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.10.8
I0809 14:47:18.809526   23597 cache_images.go:286] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/andre/.minikube/cache/images/k8s.gcr.io/pause_3.1
Found binary path at /usr/local/bin/docker-machine-driver-kvm2
I0809 14:47:18.809856   23597 cache_images.go:83] Successfully cached all images.
Launching plugin server for driver kvm2
Plugin server listening at address 127.0.0.1:43567
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetMachineName
(minikube) Calling .DriverName
Reading certificate data from /home/andre/.minikube/certs/ca.pem
Decoding PEM data...
Parsing certificate...
Reading certificate data from /home/andre/.minikube/certs/cert.pem
Decoding PEM data...
Parsing certificate...
Running pre-create checks...
(minikube) Calling .PreCreateCheck
(minikube) Calling .GetConfigRaw
Creating machine...
(minikube) Calling .Create
(minikube) Creating KVM machine...
(minikube) Setting up store path in /home/andre/.minikube/machines/minikube ...
(minikube) Building disk image from file:///home/andre/.minikube/cache/iso/minikube-v1.3.0.iso
(minikube) DBG | ERROR: logging before flag.Parse: I0809 14:47:18.943950   23615 drivers.go:96] Making disk image using store path: /home/andre/.minikube
(minikube) Downloading /home/andre/.minikube/cache/boot2docker.iso from file:///home/andre/.minikube/cache/iso/minikube-v1.3.0.iso...
(minikube) DBG | ERROR: logging before flag.Parse: I0809 14:47:19.061130   23615 drivers.go:103] Creating ssh key: /home/andre/.minikube/machines/minikube/id_rsa...
(minikube) DBG | ERROR: logging before flag.Parse: I0809 14:47:19.154823   23615 drivers.go:109] Creating raw disk image: /home/andre/.minikube/machines/minikube/minikube.rawdisk...
(minikube) DBG | Writing magic tar header
(minikube) DBG | Writing SSH key tar header
(minikube) DBG | ERROR: logging before flag.Parse: I0809 14:47:19.154903   23615 drivers.go:123] Fixing permissions on /home/andre/.minikube/machines/minikube ...
(minikube) DBG | Checking permissions on dir: /home/andre/.minikube/machines/minikube
(minikube) Setting executable bit set on /home/andre/.minikube/machines/minikube (perms=drwx------)
(minikube) DBG | Checking permissions on dir: /home/andre/.minikube/machines
(minikube) Setting executable bit set on /home/andre/.minikube/machines (perms=drwxr-xr-x)
(minikube) DBG | Checking permissions on dir: /home/andre/.minikube
(minikube) Setting executable bit set on /home/andre/.minikube (perms=drwxr-xr-x)
(minikube) DBG | Checking permissions on dir: /home/andre
(minikube) Setting executable bit set on /home/andre (perms=drwxr-xr-x)
(minikube) Creating domain...
(minikube) DBG | Checking permissions on dir: /home
(minikube) DBG | Skipping /home - not owner
(minikube) Creating network...
(minikube) Ensuring networks are active...
(minikube) Ensuring network default is active
(minikube) Ensuring network minikube-net is active
(minikube) Getting domain xml...
(minikube) Creating domain...
(minikube) Waiting to get IP...
(minikube) DBG | Waiting for machine to come up 0/40
(minikube) DBG | Waiting for machine to come up 1/40
(minikube) DBG | Waiting for machine to come up 2/40
(minikube) DBG | Waiting for machine to come up 3/40
(minikube) DBG | Waiting for machine to come up 4/40
(minikube) DBG | Waiting for machine to come up 5/40
(minikube) DBG | Waiting for machine to come up 6/40
(minikube) DBG | Waiting for machine to come up 7/40
(minikube) DBG | Waiting for machine to come up 8/40
(minikube) DBG | Waiting for machine to come up 9/40
(minikube) DBG | Waiting for machine to come up 10/40
(minikube) DBG | Waiting for machine to come up 11/40
(minikube) DBG | Waiting for machine to come up 12/40
(minikube) DBG | Waiting for machine to come up 13/40
(minikube) DBG | Waiting for machine to come up 14/40
(minikube) DBG | Waiting for machine to come up 15/40
(minikube) DBG | Waiting for machine to come up 16/40
(minikube) DBG | Waiting for machine to come up 17/40
(minikube) DBG | Waiting for machine to come up 18/40
(minikube) DBG | Waiting for machine to come up 19/40
(minikube) DBG | Waiting for machine to come up 20/40
(minikube) DBG | Waiting for machine to come up 21/40
(minikube) DBG | Waiting for machine to come up 22/40
(minikube) DBG | Waiting for machine to come up 23/40
(minikube) DBG | Waiting for machine to come up 24/40
(minikube) DBG | Waiting for machine to come up 25/40
(minikube) DBG | Waiting for machine to come up 26/40
(minikube) DBG | Waiting for machine to come up 27/40
(minikube) DBG | Waiting for machine to come up 28/40
(minikube) DBG | Waiting for machine to come up 29/40
(minikube) DBG | Waiting for machine to come up 30/40
(minikube) DBG | Waiting for machine to come up 31/40
(minikube) DBG | Waiting for machine to come up 32/40
(minikube) DBG | Waiting for machine to come up 33/40
(minikube) DBG | Waiting for machine to come up 34/40
(minikube) DBG | Waiting for machine to come up 35/40
(minikube) DBG | Waiting for machine to come up 36/40
(minikube) DBG | Waiting for machine to come up 37/40
(minikube) DBG | Waiting for machine to come up 38/40
(minikube) DBG | Waiting for machine to come up 39/40
(minikube) DBG | Waiting for machine to come up 40/40
(minikube) KVM machine creation complete!
E0809 14:49:25.136662   23597 start.go:723] StartHost: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds
I0809 14:49:25.137132   23597 utils.go:127] non-retriable error: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds
W0809 14:49:25.137195   23597 exit.go:99] Unable to start VM: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds

💣  Unable to start VM: create: Error creating machine: Error in driver during machine creation: machine didn't return an IP after 120 seconds

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

💣  disable failed: [command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp :22: connect: connection refused]

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
ssh: dial tcp :22: connect: connection refused

@josedonizetti - any clues here?

@andrebraitc does docker-machine-driver-kvm2 version work for you when using the 1.2.0?

on my machine I get for version 1.2.0:

docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt-lxc.so.0: version `LIBVIRT_LXC_2.0.0' not found (required by docker-machine-driver-kvm2)
docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.2.0' not found (required by docker-machine-driver-kvm2)
docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_3.0.0' not found (required by docker-machine-driver-kvm2)
docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_1.3.3' not found (required by docker-machine-driver-kvm2)
docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.0.0' not found (required by docker-machine-driver-kvm2)

but at the same time minikube/kvm2 1.3.0 works without any problem.

@afbjorklund could it be related to the linked libvirt version?

@josedonizetti It does not. I get the same output.

EDIT: I remember getting an error output like that, but I don't recall if it's exactly that. I'll confirm tomorrow.

Notice, however, that the docker-machine-driver-kvm2 driver version does not matter in this particular issue. Only the minikube version seems to make any difference. Crashes with 1.3.0, works fine with 1.2.0 (I haven't tested with 1.3.1).

@andrebraitc I know that this happened after upgrading; however, I still believe it is worth trying with the latest driver version. We had some bugs in older driver versions around "minikube stop" and "start".

here is link to install latest driver:
https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/#driver-installation

@andrebraitc if possible, can you provide some other info?

can you run a tail -f /var/log/libvirt/qemu/minikube.log when starting the version that errors out, and check if anything pops up?

also, it does seem the VM is getting created, yet the driver is not able to get its IP within 120 seconds. I wonder what virsh will show for the VM's IP address; can you run:

virsh domifaddr minikube

thanks :)
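The 40-attempt "Waiting for machine to come up" loop in the driver log is, roughly, polling libvirt for the guest's DHCP lease. A hand-run equivalent sketch (the helper name and the awk parsing are my own):

```shell
# Sketch: poll "virsh domifaddr" until the guest reports an IPv4
# address, mirroring the driver's bounded wait loop.
wait_for_ip() {
  local dom="${1:-minikube}" tries="${2:-40}" ip
  command -v virsh >/dev/null || { echo "virsh not installed"; return 0; }
  for _ in $(seq 1 "$tries"); do
    ip=$(virsh -c qemu:///system domifaddr "$dom" 2>/dev/null \
           | awk '/ipv4/ {split($4, a, "/"); print a[1]}')
    [ -n "$ip" ] && { echo "$ip"; return 0; }
    sleep 3
  done
  echo "no IP after $tries attempts" >&2
  return 1
}
```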

Update: the bug persists on minikube 1.3.1 and docker-machine-driver-kvm2 1.3.1

  • docker-machine-driver-kvm2 version
version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
  • minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631

Exact same results.

@medyagh

libvirt version is 4.0.0-1ubuntu8.12
qemu-kvm version is 2.11+dfsg-1ubuntu7.17

And the docker-machine-driver-kvm2 has been tested with versions 1.2.0, 1.3.0 and 1.3.1, and it seems to be irrelevant. Only the minikube binary seems to make any difference. But I found something interesting:

  • virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : FAIL (Check /dev/kvm is world writable or you are in a group that is allowed to access it)
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

@josedonizetti

  • tail -f /var/log/libvirt/qemu/minikube.log during minikube start and minikube stop
2019-08-18 08:15:11.427+0000: starting up libvirt version: 4.0.0, package: 1ubuntu8.12 (Marc Deslauriers <[email protected]> Tue, 02 Jul 2019 09:19:33 -0400), qemu version: 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.17), hostname: andre-wks
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name guest=minikube,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-minikube/master-key.aes -machine pc-i440fx-bionic,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 11719 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 -uuid 7fff0544-2aa7-4fba-8ffa-c94c8f26528b -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-minikube/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device lsi,id=scsi0,bus=pci.0,addr=0x4 -drive file=/home/andre/.minikube/machines/minikube/boot2docker.iso,format=raw,if=none,id=drive-scsi0-0-2,readonly=on -device scsi-cd,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2,bootindex=1 -drive file=/home/andre/.minikube/machines/minikube/minikube.rawdisk,format=raw,if=none,id=drive-virtio-disk0,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=ac:b1:52:e3:f4:11,bus=pci.0,addr=0x2 -netdev tap,fd=33,id=hostnet1,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=10:df:e9:11:f5:bc,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on
2019-08-18 08:15:11.427+0000: Domain id=3 is tainted: host-cpu
2019-08-18T08:15:11.447697Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0)
2019-08-18T08:18:16.870679Z qemu-system-x86_64: terminating on signal 15 from pid 1105 (/usr/sbin/libvirtd)
2019-08-18 08:18:17.271+0000: shutting down, reason=destroyed
  • virsh domifaddr minikube
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------

Maybe I got the wrong group for the user and it only makes any difference with recent minikube? I'll try to check that.

Update: got the virt-host-validate to pass by adding the user to group kvm. Minikube, however, still fails.
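For reference, the group change described above is a one-liner; a sketch (it takes effect on the next login):

```shell
# Sketch: give the current user access to /dev/kvm, which is what made
# the failing virt-host-validate check pass.
grant_kvm_access() {
  sudo usermod -aG kvm "$USER"
}
```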

When I have the time to do so, I will git bisect this.
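A sketch of that bisect workflow, using the known-good/bad versions from this thread. The `test-start.sh` script is hypothetical: it would build minikube at the checked-out commit and attempt a start/delete cycle, exiting nonzero on failure so the bisect can be steered automatically.

```shell
# Sketch: let "git bisect run" walk the history between v1.2.0 (good)
# and v1.3.0 (bad) using the exit code of a test script.
bisect_minikube() {
  git bisect start
  git bisect bad  v1.3.0
  git bisect good v1.2.0
  git bisect run ./test-start.sh
}
```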

That is valuable info you provided, @andrebraitc; thank you for not giving up! Once we figure out the root cause, we should make sure minikube handles it from now on.

I did git bisect today and I found this:

  • Last good commit: de84f83fa13c8453f20ff3dba6baf7a8817efa5f
  • First bad commit: 84d4874c5fdac6a5523e6cc1366e4591ada17203
    But this commit is actually unusable (because there is no image with version 1.2.1)
  • Actual first bad commit: 63db60a54360e5f0a7b454c54e1b60acfde02d53

So I guess the problem is in the minikube 1.3.0 image itself, not in the binary here.

@medyagh so, after git bisecting everything, including the disk image (thanks, contributor documentation!), I just couldn't find a bad commit!
It turns out there was something weird with the 1.3.0 ISO on my disk. Apparently, the cached ISO was corrupt. I cleaned the cache and voilà! It works again!
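A sketch of how such a corrupt cached ISO could be caught up front, assuming the release bucket publishes a `.sha256` file next to each ISO (which it appears to do; the helper itself is my own, not a minikube command). Deleting a bad ISO makes minikube re-download it on the next start:

```shell
# Sketch: compare the cached ISO against the published checksum and
# delete it on mismatch so minikube fetches a fresh copy.
verify_iso_cache() {
  local ver="v1.3.0" want have
  local iso="$HOME/.minikube/cache/iso/minikube-${ver}.iso"
  [ -f "$iso" ] || { echo "no cached ISO"; return 0; }
  want=$(curl -fsSL "https://storage.googleapis.com/minikube/iso/minikube-${ver}.iso.sha256")
  have=$(sha256sum "$iso" | awk '{print $1}')
  if [ "$want" != "$have" ]; then
    echo "cached ISO is corrupt; deleting so minikube fetches it again"
    rm -f "$iso"
  fi
}
```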

I'm closing this. Thanks, guys.
