Minikube: Unable to connect to the server: dial tcp <IP>: i/o timeout

Created on 26 Apr 2017  ·  10 Comments  ·  Source: kubernetes/minikube

This is a BUG REPORT.

Minikube version: v0.18.0

Environment:

  • OS: OS X El Capitan 10.11.6
  • VM Driver: virtualbox
  • ISO version: minikube-v0.18.0.iso
  • Install tools:
  • Others:

Minikube status:

minikubeVM: Running
localkube: Running

What happened:
Not able to run the minikube dashboard command; it fails with:

Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: dial tcp 192.168.99.100:8443: i/o timeout

What you expected to happen:
The dashboard opens in my default browser.

How to reproduce it (as minimally and precisely as possible):

  1. minikube start
  2. minikube dashboard

Anything else we need to know:
Minikube was working fine for me yesterday; the only thing I did between then and now was a restart.

I tried recreating the cluster from scratch (also purged ~/.minikube), but it didn't help.
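
A quick way to confirm that the timeout happens at the network layer (and not inside minikube itself) is to probe the apiserver port on the VM directly from the host. The IP and port below are taken from the error message above; if routing to the VM is broken, both commands should hang with the same i/o timeout:

  ping -c 3 192.168.99.100
  curl -k https://192.168.99.100:8443/version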

Output from minikube ssh systemctl status systemd-networkd-wait-online.service:

● systemd-networkd-wait-online.service - Wait for Network to be Configured
   Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2017-04-26 02:58:59 UTC; 21s ago
     Docs: man:systemd-networkd-wait-online.service(8)
  Process: 3350 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=1/FAILURE)
 Main PID: 3350 (code=exited, status=1/FAILURE)

Apr 26 02:56:58 minikube systemd[1]: Starting Wait for Network to be Configured...
Apr 26 02:57:00 minikube systemd-networkd-wait-online[3350]: ignoring: lo
Apr 26 02:58:59 minikube systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Apr 26 02:58:59 minikube systemd[1]: Failed to start Wait for Network to be Configured.
Apr 26 02:58:59 minikube systemd[1]: systemd-networkd-wait-online.service: Unit entered failed state.
Apr 26 02:58:59 minikube systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
E0425 19:59:19.844385   80923 ssh.go:44] Error attempting to ssh/run-ssh-command: exit status 3

Output from minikube ssh systemctl status localkube:

● localkube.service - Localkube
   Loaded: loaded (/lib/systemd/system/localkube.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-04-26 02:59:06 UTC; 8min ago
     Docs: https://github.com/kubernetes/minikube/tree/master/pkg/localkube
 Main PID: 3488 (localkube)
    Tasks: 15 (limit: 4915)
   Memory: 135.0M
      CPU: 31.116s
   CGroup: /system.slice/localkube.service
           ├─3488 /usr/local/bin/localkube --generate-certs=false --logtostderr=true --enable-dns=false --node-ip=192.168.99.100
           └─3602 journalctl -k -f

Apr 26 03:01:39 minikube localkube[3488]: I0426 03:01:39.467719    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/584854b6-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "584854b6-2a2c-11e7-8be3-0800279373c3" (UID: "584854b6-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:01:47 minikube localkube[3488]: I0426 03:01:47.543096    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5823f546-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "5823f546-2a2c-11e7-8be3-0800279373c3" (UID: "5823f546-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:02:44 minikube localkube[3488]: I0426 03:02:44.472033    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/584854b6-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "584854b6-2a2c-11e7-8be3-0800279373c3" (UID: "584854b6-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:03:00 minikube localkube[3488]: I0426 03:03:00.524800    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5823f546-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "5823f546-2a2c-11e7-8be3-0800279373c3" (UID: "5823f546-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:03:48 minikube localkube[3488]: I0426 03:03:48.483685    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/584854b6-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "584854b6-2a2c-11e7-8be3-0800279373c3" (UID: "584854b6-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:04:09 minikube localkube[3488]: I0426 03:04:09.560177    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5823f546-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "5823f546-2a2c-11e7-8be3-0800279373c3" (UID: "5823f546-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:04:53 minikube localkube[3488]: I0426 03:04:53.545020    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/584854b6-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "584854b6-2a2c-11e7-8be3-0800279373c3" (UID: "584854b6-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:05:12 minikube localkube[3488]: I0426 03:05:12.536285    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5823f546-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "5823f546-2a2c-11e7-8be3-0800279373c3" (UID: "5823f546-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:06:15 minikube localkube[3488]: I0426 03:06:15.521764    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/584854b6-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "584854b6-2a2c-11e7-8be3-0800279373c3" (UID: "584854b6-2a2c-11e7-8be3-0800279373c3").
Apr 26 03:06:34 minikube localkube[3488]: I0426 03:06:34.510598    3488 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5823f546-2a2c-11e7-8be3-0800279373c3-default-token-5qd90" (spec.Name: "default-token-5qd90") pod "5823f546-2a2c-11e7-8be3-0800279373c3" (UID: "5823f546-2a2c-11e7-8be3-0800279373c3").
Labels: kind/support, lifecycle/rotten

All 10 comments

There are a couple more people reporting the same bug in https://github.com/kubernetes/minikube/issues/1224.

Do you have the OS X firewall enabled? Some of the users in that thread were able to solve their problem by disabling it:

https://github.com/kubernetes/minikube/issues/1224#issuecomment-284263360
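
For anyone who wants to test that quickly: the OS X application firewall can be checked and toggled from a terminal. A rough sketch (remember to turn it back on afterwards):

  /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
  sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off
  minikube dashboard
  sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on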

Turning off the firewall doesn't help in my case. I would like to point out that it was working fine the day before, even with the firewall on. Is systemd-networkd-wait-online.service supposed to be down?

I just double-checked this: the routing table on my Mac is not set up correctly. All the traffic to the VM is sent to my local network gateway instead.

Is the routing table supposed to be managed by minikube?
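
For reference, the route that VirtualBox's host-only interface normally provides can be inspected, and re-added by hand if it is missing. The interface name vboxnet0 and the 192.168.99.0/24 subnet below are assumptions based on the default minikube/VirtualBox setup; adjust them to whatever ifconfig shows on your machine:

  netstat -rn | grep 192.168.99
  sudo route -n add -net 192.168.99.0/24 -interface vboxnet0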

Hi,
I get all the same symptoms as @houqp.
My routing table does not look OK.

Let me know if I can do something to help you track down the bug.

Cheers!

EDIT:
The suggestion in https://github.com/kubernetes/minikube/issues/549 fixed it for me:
There is a package missing: https://www.archlinux.org/packages/core/i686/net-tools/
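
For context, that fix boils down to installing the net-tools package on the host so the classic ifconfig/route/netstat binaries are available again; on an Arch Linux host (which is what the linked package page implies) that would be roughly:

  sudo pacman -S net-tools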

I was facing similar issues when using the docker-machine that ships with "Docker for Mac".
I overrode it with the one from "brew install docker-machine-driver-xhyve", and now it works.
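
For anyone trying the same workaround, the rough sequence (based on the Homebrew formula's install caveats at the time, which may have changed since) was:

  brew install docker-machine-driver-xhyve
  sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
  sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
  minikube start --vm-driver=xhyve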

Hope it helps

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
