Minikube: kvm2: create host timed out in 120.000000 seconds on a slow host

Created on 12 Apr 2020 · 9 comments · Source: kubernetes/minikube


Steps to reproduce the issue:
Starting the minikube VM on a slow host causes a timeout, even though the VM starts correctly a minute later.

  1. minikube start



Full output of failed command:

Full output of minikube start command used, if not already included:

💣 Failed to start kvm2 VM. "minikube start" may fix it.: creating host: create: Error creating machine: Error in driver during machine creation: getting ip during machine start: host is not running

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Successive runs of "minikube start" cause it to try reading the configuration, but it is empty:

[centos@centos-minikube ~]$ minikube config view
[centos@centos-minikube ~]$

[centos@centos-minikube ~]$ minikube start --v=10 --alsologtostderr=true
W0412 14:56:27.766137 3740 root.go:248] Error reading config file at /home/centos/.minikube/config/config.json: open /home/centos/.minikube/config/config.json: no such file or directory
I0412 14:56:27.766890 3740 notify.go:125] Checking for updates...
I0412 14:56:28.043449 3740 start.go:262] hostinfo: {"hostname":"centos-minikube","uptime":830,"bootTime":1586702558,"procs":264,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"8.1.1911","kernelVersion":"4.18.0-147.3.1.el8_1.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"7a5ce2b9-b6fd-51aa-ae74-37e9875a49a4"}
I0412 14:56:28.044196 3740 start.go:272] virtualization: kvm host
😄 minikube v1.9.2 on Centos 8.1.1911
I0412 14:56:28.109228 3740 driver.go:245] Setting default libvirt URI to qemu:///system
I0412 14:56:28.110115 3740 main.go:110] libmachine: Found binary path at /home/centos/.minikube/bin/docker-machine-driver-kvm2
I0412 14:56:28.110341 3740 main.go:110] libmachine: Launching plugin server for driver kvm2
I0412 14:56:28.214166 3740 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:42565
I0412 14:56:28.215492 3740 main.go:110] libmachine: () Calling .GetVersion
I0412 14:56:28.216587 3740 main.go:110] libmachine: Using API Version 1
I0412 14:56:28.216655 3740 main.go:110] libmachine: () Calling .SetConfigRaw
I0412 14:56:28.217795 3740 main.go:110] libmachine: () Calling .GetMachineName
I0412 14:56:28.218073 3740 main.go:110] libmachine: (minikube) Calling .DriverName
✨ Using the kvm2 driver based on existing profile
I0412 14:56:28.553449 3740 start.go:310] selected driver: kvm2
I0412 14:56:28.553483 3740 start.go:656] validating driver "kvm2" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.9.0.iso Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0412 14:56:28.553702 3740 start.go:662] status for kvm2: {Installed:true Healthy:true Error: Fix: Doc:}
I0412 14:56:28.553771 3740 install.go:49] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0412 14:56:28.554013 3740 install.go:115] Validating docker-machine-driver-kvm2, PATH=/home/centos/.minikube/bin:/home/centos/.local/bin:/home/centos/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
I0412 14:56:28.579672 3740 start.go:1004] Using suggested 2200MB memory alloc based on sys=6318MB, container=0MB
I0412 14:56:28.579778 3740 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
I0412 14:56:28.579820 3740 iso.go:119] acquiring lock: {Name:mk01300d37e9facb9d5a0a386bcbc0f52dc30a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:}
๐Ÿ‘ Starting control plane node m01 in cluster minikube
I0412 14:56:28.592411 3740 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0412 14:56:28.592463 3740 preload.go:97] Found local preload: /home/centos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0412 14:56:28.592480 3740 cache.go:46] Caching tarball of preloaded images
I0412 14:56:28.592493 3740 preload.go:123] Found /home/centos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0412 14:56:28.592500 3740 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0412 14:56:28.592676 3740 profile.go:138] Saving config to /home/centos/.minikube/profiles/minikube/config.json ...
I0412 14:56:28.593593 3740 cache.go:117] Successfully downloaded all kic artifacts
I0412 14:56:28.593626 3740 start.go:260] acquiring machines lock for minikube: {Name:mk6fdb6fad8da03ff7832caec82395ee87589e10 Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0412 14:56:28.593747 3740 start.go:264] acquired machines lock for "minikube" in 99.399µs
I0412 14:56:28.593780 3740 start.go:90] Skipping create...Using existing machine configuration
I0412 14:56:28.593791 3740 fix.go:53] fixHost starting: m01
I0412 14:56:28.594192 3740 main.go:110] libmachine: Found binary path at /home/centos/.minikube/bin/docker-machine-driver-kvm2
I0412 14:56:28.594342 3740 main.go:110] libmachine: Launching plugin server for driver kvm2
I0412 14:56:28.723078 3740 main.go:110] libmachine: Plugin server listening at address 127.0.0.1:43313
I0412 14:56:28.724389 3740 main.go:110] libmachine: () Calling .GetVersion
I0412 14:56:28.725322 3740 main.go:110] libmachine: Using API Version 1
I0412 14:56:28.725341 3740 main.go:110] libmachine: () Calling .SetConfigRaw
I0412 14:56:28.725870 3740 main.go:110] libmachine: () Calling .GetMachineName
I0412 14:56:28.726974 3740 main.go:110] libmachine: (minikube) Calling .DriverName
I0412 14:56:28.727470 3740 main.go:110] libmachine: (minikube) Calling .GetState
I0412 14:56:28.762906 3740 fix.go:105] recreateIfNeeded on minikube: state=Running err=
W0412 14:56:28.762947 3740 fix.go:130] unexpected machine state, will restart:
๐Ÿƒ Updating the running kvm2 "minikube" VM ...
I0412 14:56:28.807877 3740 main.go:110] libmachine: (minikube) Calling .DriverName
I0412 14:56:28.808516 3740 machine.go:86] provisioning docker machine ...
I0412 14:56:28.808605 3740 main.go:110] libmachine: (minikube) Calling .DriverName
I0412 14:56:28.809174 3740 main.go:110] libmachine: (minikube) Calling .GetMachineName
I0412 14:56:28.809519 3740 buildroot.go:163] provisioning hostname "minikube"
I0412 14:56:28.809647 3740 main.go:110] libmachine: (minikube) Calling .GetMachineName
I0412 14:56:28.809830 3740 main.go:110] libmachine: (minikube) Calling .GetSSHHostname
I0412 14:56:28.866391 3740 main.go:110] libmachine: (minikube) Calling .GetSSHPort
I0412 14:56:28.866670 3740 main.go:110] libmachine: (minikube) Calling .GetSSHKeyPath
I0412 14:56:28.866863 3740 main.go:110] libmachine: (minikube) Calling .GetSSHKeyPath
I0412 14:56:28.867359 3740 main.go:110] libmachine: (minikube) Calling .GetSSHUsername
I0412 14:56:28.868060 3740 main.go:110] libmachine: Using SSH client type: native
I0412 14:56:28.868373 3740 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 22 }
I0412 14:56:28.868423 3740 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0412 14:56:28.919632 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:31.995120 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:35.037567 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:38.129529 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:41.218743 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:44.335636 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [publickey none], no supported methods remain
I0412 14:56:47.385436 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:50.477780 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:53.545354 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:56.607476 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:56:59.699377 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:57:02.761546 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:57:05.807205 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:57:08.867539 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0412 14:57:11.927056 3740 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [publicke

My suggestion is to add an option to extend the time allowed for the VM to become accessible on start.
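For illustration only, here is a minimal Go sketch of the kind of timeout handling and option this suggestion describes. All names in it (createHost, createWithTimeout, the -create-timeout flag) are hypothetical and are not minikube's actual code or CLI; the sketch only shows the general pattern of guarding a slow host-creation call with a timeout that could be raised, or exposed as an option, on slow hosts.

package main

import (
	"flag"
	"fmt"
	"time"
)

// createHost stands in for the driver call that actually creates the VM.
func createHost() error {
	time.Sleep(3 * time.Second) // pretend the hypervisor is slow
	return nil
}

// createWithTimeout runs createHost in a goroutine and gives up after the
// given timeout, reproducing the "create host timed out" style of failure.
func createWithTimeout(timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- createHost() }()

	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		return fmt.Errorf("create host timed out in %f seconds", timeout.Seconds())
	}
}

func main() {
	// A flag like this is what the issue asks for; the fix that was merged
	// instead raised the built-in timeout.
	timeout := flag.Duration("create-timeout", 120*time.Second, "how long to wait for the VM to come up")
	flag.Parse()

	if err := createWithTimeout(*timeout); err != nil {
		fmt.Println(err)
	}
}

Under these assumptions, running the sketch with a larger -create-timeout simply gives a slow host more time before the error is reported.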

area/performance co/kvm2 kind/bug priority/important-soon


All 9 comments

We were hoping that the RNG fixes in 1.9 would make it faster to boot, but it seems it is still too slow.

I just merged a PR to increase the timeout for host creation on slower systems.

@kubealex meanwhile, can you confirm whether the system you are using is slower than normal, or under PID, CPU, or memory pressure?

@medyagh

I can confirm it was a fresh install of the OS, so no pressure of any kind!


Do you mind trying minikube at HEAD to see if increasing the timeout solved it?

And also, do you mind sharing your CPU count and RAM size?

Sure thing, I will try it ASAP.

Btw, it is a 6 GB, 2 vCPU host with an SSD.

Waaaay better!
Now the VM starts very fast and manages to configure everything.
It times out on the health check, though:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

stderr:
W0417 08:21:01.752195 2469 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0417 08:21:22.819977 2469 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0417 08:21:22.826675 2469 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

🌟 Enabling addons: default-storageclass, storage-provisioner
๐Ÿ„ Done! kubectl is now configured to use "minikube"
💡 For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

But eventually everything is fine!

[alex@fedora ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4rd2j           1/1     Running   0          53m
kube-system   coredns-66bff467f8-ppgvb           1/1     Running   0          53m
kube-system   etcd-minikube                      1/1     Running   0          55m
kube-system   kube-apiserver-minikube            1/1     Running   0          55m
kube-system   kube-controller-manager-minikube   1/1     Running   3          55m
kube-system   kube-proxy-s2m4g                   1/1     Running   0          53m
kube-system   kube-scheduler-minikube            1/1     Running   1          55m
kube-system   storage-provisioner                1/1     Running   0          54m

@kubealex glad to see that it is solved; it will be included in our next release.
