Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report
Minikube version (use minikube version): v0.17.1
Environment:
VM Driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (use cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v1.0.7.iso
What happened:
$ ./minikube-linux-amd64 start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ kubectl get pods
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
What you expected to happen:
Get an empty list of pods
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Output of minikube ssh systemctl status localkube:
● localkube.service - Localkube
Loaded: loaded (/lib/systemd/system/localkube.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2017-03-06 14:09:08 UTC; 14min ago
Docs: https://github.com/kubernetes/minikube/tree/master/pkg/localkube
Main PID: 3271 (localkube)
Tasks: 16 (limit: 4915)
Memory: 141.8M
CPU: 1min 54.669s
CGroup: /system.slice/localkube.service
├─3271 /usr/local/bin/localkube --generate-certs=false --logtostderr=true --enable-dns=false --node-ip=192.168.99.100 --apiserver-name=minikubeCA
└─3350 journalctl -k -f
Mar 06 14:19:12 minikube localkube[3271]: I0306 14:19:12.296016 3271 replication_controller.go:322] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
Mar 06 14:19:12 minikube localkube[3271]: I0306 14:19:12.296150 3271 replication_controller.go:322] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
Mar 06 14:19:17 minikube localkube[3271]: I0306 14:19:17.895294 3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").
Mar 06 14:20:30 minikube localkube[3271]: I0306 14:20:30.883835 3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/836b1214-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "836b1214-0276-11e7-9885-0800272e2447" (UID: "836b1214-0276-11e7-9885-0800272e2447").
Mar 06 14:20:47 minikube localkube[3271]: I0306 14:20:47.820727 3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").
Mar 06 14:21:02 minikube localkube[3271]: apply entries took too long [11.765144ms for 1 entries]
Mar 06 14:21:02 minikube localkube[3271]: avoid queries with large range/delete range!
Mar 06 14:21:09 minikube localkube[3271]: E0306 14:21:09.826764 3271 repair.go:132] the node port 30000 for service kubernetes-dashboard/kube-system is not allocated; repairing
Mar 06 14:21:57 minikube localkube[3271]: I0306 14:21:57.895042 3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/836b1214-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "836b1214-0276-11e7-9885-0800272e2447" (UID: "836b1214-0276-11e7-9885-0800272e2447").
Mar 06 14:21:58 minikube localkube[3271]: I0306 14:21:58.899802 3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").
If I SSH in, I can see that there is one failed systemd unit; I have no idea whether it matters:
$ systemctl status systemd-networkd-wait-online.service
● systemd-networkd-wait-online.service - Wait for Network to be Configured
Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-03-06 14:09:08 UTC; 18min ago
Docs: man:systemd-networkd-wait-online.service(8)
Process: 3241 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=1/FAILURE)
Main PID: 3241 (code=exited, status=1/FAILURE)
Mar 06 14:07:08 minikube systemd[1]: Starting Wait for Network to be Configured...
Mar 06 14:07:08 minikube systemd-networkd-wait-online[3241]: ignoring: lo
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Mar 06 14:09:08 minikube systemd[1]: Failed to start Wait for Network to be Configured.
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Unit entered failed state.
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
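For reference, a quick way to look at this from inside the VM (standard systemd commands, run after minikube ssh):
$ systemctl --failed                                              # list every failed unit
$ journalctl -u systemd-networkd-wait-online.service --no-pager   # full log for this unit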
@norbert-yoimo I think I'm seeing a similar issue to yours. Is the kube API server flapping up and down? That is, are you able to run kubectl get pods at some times, and then within a few seconds it returns the dial tcp error?
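A quick way to check for flapping, assuming the VM IP and API server port from the report above (192.168.99.100:8443):
$ # probe the apiserver every 2s; prints "up" or "down" as it comes and goes
$ while true; do curl -sk -m 5 -o /dev/null https://192.168.99.100:8443/ && echo up || echo down; sleep 2; done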
I just tried a few times, but I could never get it to connect.
Can you post the output of minikube ssh journalctl and minikube logs?
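If it helps, both outputs can be captured to files like this (the --no-pager flag just makes journalctl non-interactive; filenames are arbitrary):
$ minikube ssh "journalctl --no-pager" > journalctl.txt
$ minikube logs > minikube.log.txt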
Hi, I have the same issue. Attached:
minikube.log.txt
journalctl.txt
My OS is Void Linux, with VirtualBox 5.1.14.
Oh, I checked that from the VM (via minikube ssh) I can access the outside world:
$ docker run -ti alpine /bin/sh
Both from the VM and from the freshly started Alpine container, this works:
$ curl https://www.xs4all.nl/index.html
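For anyone reproducing this check, a minimal version of the same test (8.8.8.8 and the xs4all URL are just example endpoints; the Alpine image ships busybox wget):
$ minikube ssh                                    # enter the VM
$ ping -c 1 8.8.8.8                               # raw IP connectivity from the VM
$ curl -I https://www.xs4all.nl/index.html        # DNS + HTTPS from the VM
$ docker run --rm alpine wget -qO- https://www.xs4all.nl/index.html >/dev/null && echo ok   # same check from a container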
The failed systemd-networkd-wait-online.service is a red herring for this issue. See #1277.
Tried again with 0.18.0, still the same problem, attaching the requested files:
Same problem
Same problem on Ubuntu 17.04.
@RaananHadar I found the solution for Ubuntu 17.04: the problem is the Docker version in the Ubuntu repo. You need to uninstall docker and docker-engine and install docker-ce.
Here are the steps I took:
$ sudo apt-get remove docker docker-engine
Follow the steps to install docker-ce, but change the sources from zesty to xenial, because the repo for zesty does not exist yet (the tricky bit).
$ sudo apt-get update
$ sudo apt-get install \
    linux-image-extra-$(uname -r) \
    linux-image-extra-virtual
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
The key fingerprint should be: 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.
IMPORTANT: add the repository using xenial instead of zesty:
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
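Before starting minikube again, it may be worth confirming the switch to docker-ce took effect (standard Docker and systemd commands):
$ docker --version                  # should now report a docker-ce release
$ sudo systemctl is-active docker   # should print "active"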
At this point run minikube:
$ minikube start
and check the status:
$ minikube status
minikubeVM: Running
localkube: Running
Now you should be able to run kubectl cluster-info
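As a quick end-to-end check that kubectl really reaches the cluster (all standard kubectl subcommands):
$ kubectl config current-context      # should print "minikube"
$ kubectl cluster-info
$ kubectl get pods --all-namespaces   # the kube-system pods should be listed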
@david1983 I tried the instructions above but in the end got the same network error; like @RaananHadar, I was trying this on Ubuntu 17.04.
Same problem here :-(
For others who run into this problem: https://github.com/kubernetes/minikube/issues/1224#issuecomment-316411907 solved it for me.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
Hi, I had this issue. Then I checked minikube: "minikube is not running". I restarted minikube and it started to work. I hope this will be useful.
In my case use-context was not set.
Use this if you have Kubernetes running with Docker for Desktop:
kubectl config use-context docker-for-desktop
Use this if you have Kubernetes running with minikube:
kubectl config use-context minikube
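If you're not sure which contexts exist in your kubeconfig, you can list them and check the active one before switching (standard kubectl config subcommands):
$ kubectl config get-contexts      # the active context is marked with *
$ kubectl config current-context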
If you run into problems with Minikube, the best option is to remove it and start over again:
minikube stop; minikube delete
rm /usr/local/bin/minikube
rm -rf ~/.minikube
minikube start
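Note that, depending on the minikube version, stale entries may remain in ~/.kube/config after the delete; they can be removed explicitly with standard kubectl config subcommands:
$ kubectl config delete-context minikube
$ kubectl config delete-cluster minikube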
In my case, my minikube wasn't active; I had to start it with minikube start.
Hey @editaxz, did restarting the cluster via
minikube delete
minikube start
fix it for you? I'd suggest upgrading to the latest version of minikube, v1.13.1, as well.
Closing as this is due to speaking to a stopped cluster, but opened #9410 to make this less confusing for users.