Minikube: kvm: Error creating host: qemu-kvm: unrecognized feature kvm

Created on 18 Jan 2019 · 17 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Please provide the following details:

Environment:

Minikube version (use minikube version):

  • 0.32.0
  • 0.33.0
  • OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • VM Driver
    "DriverName": "kvm2"

  • ISO version
    "Boot2DockerURL": "file:///home/bos/mlamouri/.minikube/cache/iso/minikube-v0.32.0.iso",
    "ISO": "/home/bos/mlamouri/.minikube/machines/minikube/boot2docker.iso",

What happened:

E0118 17:59:09.066947    4355 start.go:193] Error starting host:  Error creating host: Error creating machine: Error in driver during machine creation: Error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2019-01-18 17:59:06.474+0000: Domain id=5 is tainted: host-cpu
2019-01-18T17:59:06.731964Z qemu-kvm: unrecognized feature kvm')

virsh --connect qemu:///system list --all
 Id    Name                           State
----------------------------------------------------
 -     minikube                       shut off

virsh --connect qemu:///system start minikube
error: Failed to start domain minikube
error: internal error: qemu unexpectedly closed the monitor: 2019-01-18 18:01:06.348+0000: Domain id=6 is tainted: host-cpu
2019-01-18T18:01:06.602604Z qemu-kvm: unrecognized feature kvm


What you expected to happen:

minikube starts and prints "Everything looks great. Please enjoy minikube!"

How to reproduce it (as minimally and precisely as possible):

# Pick the minikube release and the install directory (defaults shown)
MINIKUBE_VERSION=${MINIKUBE_VERSION:=0.33.0}
MINIKUBE_BIN=${MINIKUBE_BIN:=~/bin}

# Download the latest stable kubectl
curl --silent -L -o ${MINIKUBE_BIN}/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# Download minikube and the matching kvm2 driver
curl --silent -L -o ${MINIKUBE_BIN}/minikube https://storage.googleapis.com/minikube/releases/v${MINIKUBE_VERSION}/minikube-linux-amd64

curl --silent -L -o ${MINIKUBE_BIN}/docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/v${MINIKUBE_VERSION}/docker-machine-driver-kvm2

chmod a+x ${MINIKUBE_BIN}/*

minikube start --vm-driver kvm2

Output of minikube logs (if applicable):

Anything else do we need to know:

This appeared in v0.32.0 and continues in v0.33.0. The start process leaves an unstartable KVM domain behind in libvirt; it must be removed with virsh undefine minikube.
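
For reference, cleaning up the leftover domain looks something like this (a minimal sketch, assuming the domain is named minikube as above):

# Stop and remove the unstartable domain that the failed start left behind
virsh --connect qemu:///system destroy minikube 2>/dev/null || true
virsh --connect qemu:///system undefine minikube
# Also clear minikube's own local state so the next start is clean
minikube delete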

Labels: cause/nested-vm-config  co/kvm2-driver  help wanted  os/linux  priority/awaiting-more-evidence

Most helpful comment

I've just tested it by removing those lines and building the docker-machine-driver-kvm2 binary, and it works on CentOS 7. I can confirm it is not related to nested virtualization; it is just that the "hidden state" feature is not supported by the CentOS qemu-kvm.

A temporary solution that also works is executing the following command:

curl -LO https://github.com/kubernetes/minikube/releases/download/v0.30.0/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/

This downloads and installs an older version of docker-machine-driver-kvm2 that doesn't use the hidden feature.

All 17 comments

This happens with nested virt enabled:

cat /etc/modprobe.d/kvm.conf 
options kvm_intel nested=1

cat /sys/module/kvm_intel/parameters/nested
Y
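
For anyone reproducing this, nested virtualization can also be toggled and checked on the fly (a sketch, assuming an Intel host with no VMs currently running):

# Reload the module with nesting enabled and confirm the setting took effect
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1
cat /sys/module/kvm_intel/parameters/nested   # should print Y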

I suspect this is due to virsh/libvirtd not being happy with the nested virtualization environment, but minikube fails to do the necessary pre-flight checks to tell you what's going on. https://fedoraproject.org/wiki/How_to_debug_Virtualization_problems is a great page for debugging virt issues, but it doesn't cover much about nesting, so https://www.redhat.com/en/blog/inception-how-usable-are-nested-kvm-guests might be more useful.

Can you see what happens if you run:

virt-host-validate && echo happy

You can also check whether qemu-kvm works on its own. If it doesn't, minikube is going to have a bad time with KVM as well. What is the host OS virtualization layer?
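
One way to try qemu-kvm directly, outside of libvirt (a sketch; on CentOS 7 the emulator binary is /usr/libexec/qemu-kvm and is usually not on $PATH):

/usr/libexec/qemu-kvm -version
# The failing domain uses -cpu host with the KVM signature hidden, which libvirt passes
# to qemu as "-cpu host,kvm=off"; trying that option directly shows whether this qemu
# build understands it. An older qemu-kvm is expected to fail with the same
# "unrecognized feature kvm" error seen above (Ctrl-C to quit if it does start).
/usr/libexec/qemu-kvm -machine accel=kvm -m 256 -display none -cpu host,kvm=off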

  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

I'm adding the intel_iommu switch to the kernel boot line on that host and rebooting per:
https://serverfault.com/questions/743256/activate-intel-vt-d-in-the-kernel-on-centos7
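
Roughly, on CentOS 7 with grub2 that amounts to the following (a sketch; the grub.cfg path differs on EFI systems):

# Add intel_iommu=on to the kernel command line and regenerate the grub config
sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="intel_iommu=on /' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot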

Same result:
E0121 14:04:19.581817 19073 start.go:211] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Error creating VM: virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2019-01-21 14:04:16.231+0000: Domain id=1 is tainted: host-cpu
2019-01-21T14:04:16.855361Z qemu-kvm: unrecognized feature kvm')

I can start 0.32.0 and 0.33.0 on Fedora 29, but not CentOS7

Reconfirmed. Identically configured CentOS 7 and Fedora 29: minikube starts correctly on Fedora; CentOS complains that the image is corrupt.

I think the issue is that the feature is not supported in CentOS 7's qemu-kvm. I've managed to at least boot the instance by:

  • minikube start --disk-size 40g --memory 2048 --v 99
  • Fails
  • Edit the domain XML with sudo virsh edit minikube and remove the hidden-state feature element (see the sketch after this list)
  • Run minikube start again
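
The element in question is the hidden-state feature that the newer driver adds under <features>; a sketch of what to look for and remove, assuming it is the <kvm>/<hidden state='on'/> block introduced by the commit below:

sudo virsh dumpxml minikube | grep -B 1 -A 2 '<hidden'
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
sudo virsh edit minikube   # delete that block, save, then run minikube start again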

I think it is because of https://github.com/kubernetes/minikube/commit/7ba01b40a9fa911f05fa0141cb8c564953ec3dfd

And it seems the feature won't be included in RHEL: https://bugzilla.redhat.com/show_bug.cgi?id=1492173

I've just tested it by removing those lines and building the docker-machine-driver-kvm2 binary, and it works on CentOS 7. I can confirm it is not related to nested virtualization; it is just that the "hidden state" feature is not supported by the CentOS qemu-kvm.
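
Building the patched driver is roughly the following (a sketch; it assumes a Go toolchain and libvirt development headers, and the make target name may differ between minikube versions):

git clone https://github.com/kubernetes/minikube.git
cd minikube
# remove the hidden-state lines from the kvm2 driver's domain template, then:
make out/docker-machine-driver-kvm2
sudo install out/docker-machine-driver-kvm2 /usr/local/bin/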

A temporary solution that also works is executing the following command:

curl -LO https://github.com/kubernetes/minikube/releases/download/v0.30.0/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/

This downloads and installs an older version of docker-machine-driver-kvm2 that doesn't use the hidden feature.

I can confirm as well - still present.

Using:

minikube version: v0.33.1
3.10.0-957.5.1.el7.x86_64
CentOS Linux release 7.6.1810 (Core)
Kubernetes v1.13.2
libvirt-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
qemu-kvm-1.5.3-160.el7_6.1.x86_64

@basvandenbrink That was my solution too. I started downloading the minikube binary and the docker-machine-driver-kvm2 binary separately and providing separate version numbers. minikube 0.32.0 and 0.33.0 work on CentOS using the docker-machine-driver-kvm2 version 0.31.0.
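
With the repro script above, that looks something like this (a sketch; KVM2_DRIVER_VERSION is just a local variable name, and it assumes the same release URL layout):

MINIKUBE_VERSION=0.33.0
KVM2_DRIVER_VERSION=0.31.0
curl --silent -L -o ${MINIKUBE_BIN}/minikube https://storage.googleapis.com/minikube/releases/v${MINIKUBE_VERSION}/minikube-linux-amd64
curl --silent -L -o ${MINIKUBE_BIN}/docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/v${KVM2_DRIVER_VERSION}/docker-machine-driver-kvm2
chmod a+x ${MINIKUBE_BIN}/minikube ${MINIKUBE_BIN}/docker-machine-driver-kvm2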

I installed VirtualBox on CentOS 7, forgot to specify it as the driver, and suddenly kvm2 worked as the driver.

@RynDgl I'd be really interested in confirmation of that. I have VirtualBox on CentOS 7 alongside KVM, and kvm2 with 0.35.0 still fails the same way. The only way I can get it to work is to use docker-machine-driver-kvm2 version 0.31.0.

afbjorklund submitted:

https://github.com/kubernetes/minikube/pull/3947/files

Included in v1.0.0. Testing on CentOS now.

Let us know how it goes? It would be nice to verify that it still works with GPU as well.

I'm closing this issue as it hasn't seen activity in a while, and it's unclear if this issue still exists. If it does continue to exist in the most recent release of minikube, please feel free to re-open it.

Thank you for opening the issue!
