Environment:
minikube version: v0.26.1
OS:
NAME=Gentoo
ID=gentoo
PRETTY_NAME="Gentoo/Linux"
ANSI_COLOR="1;32"
HOME_URL="https://www.gentoo.org/"
SUPPORT_URL="https://www.gentoo.org/support/"
BUG_REPORT_URL="https://bugs.gentoo.org/"
VM driver:
grep: /home/g4s8/.minikube/machines/minikube/config.json: No such file or directory
ISO version
grep: /home/g4s8/.minikube/machines/minikube/config.json: No such file or directory
What happened:
After updating from 0.25.2 to 0.26.1, minikube fails to create a cluster.
What you expected to happen:
minikube creates the cluster successfully.
How to reproduce it (as minimally and precisely as possible):
minikube start
Output of minikube logs (if applicable):
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0505 11:38:47.549953 7600 start.go:281] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
Anything else we need to know:
I downgraded minikube back to 0.25.2 and it's working fine.
Workaround:
It can be fixed by running minikube delete after updating to the new version, see https://github.com/kubernetes/minikube/issues/2786#issuecomment-386836430
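For reference, the workaround from that comment as a shell sequence. Note that `minikube delete` wipes the existing local cluster state (the machine under `~/.minikube/machines`), so any data in the old cluster is lost:

```shell
# Remove the cluster and machine config left over from the previous minikube version
minikube delete

# Start a fresh cluster with the newly installed version
minikube start
```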
I have the same issue on macOS High Sierra.
I've had the same issue on Linux Mint, Ubuntu, and Windows 10 Home. This is a duplicate of https://github.com/kubernetes/minikube/issues/2765
@DickChesterwood thanks, minikube delete && minikube start solved this issue. But I think it's still a bug: IMO minikube should automatically check compatibility with the previous version and perform some kind of migration if the old config is incompatible.
I installed 0.26.1 from scratch, with no previous Minikube installation, and got this problem. minikube delete && minikube start did not solve it.
I had to revert to 0.25.2 as well, which works fine.
Windows 10 Pro, VirtualBox 5.2.12
I think this was fixed with https://github.com/kubernetes/minikube/pull/2791
EDIT: I can start v0.28.2 using the localkube workaround:
minikube start --bootstrapper=localkube --vm-driver=vmwarefusion
See: #2765, #2791
_Original comment:_
Not fixed. I cannot run any version of minikube newer than 0.25.2.
The VM is up, but kube-proxy does not start properly.
Running under VMware Fusion 10.1.2 on Mac OS X 10.11.6.
I have tried minikube versions: 0.26.1, 0.28.0, and 0.28.2.
This is from the latest version, minikube 0.28.2:
Starting cluster components...
E0729 04:39:55.127133 1355 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
Pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-minikube 1/1 Running 0 3m 172.16.139.141 minikube
kube-system kube-addon-manager-minikube 1/1 Running 0 4m 172.16.139.141 minikube
kube-system kube-apiserver-minikube 1/1 Running 0 3m 172.16.139.141 minikube
kube-system kube-controller-manager-minikube 1/1 Running 0 3m 172.16.139.141 minikube
kube-system kube-scheduler-minikube 1/1 Running 0 2m 172.16.139.141 minikube
kube-system kubernetes-dashboard-5498ccf677-rqlfr 0/1 CrashLoopBackOff 3 4m 172.17.0.2 minikube
kube-system storage-provisioner 1/1 Running 4 4m 172.16.139.141 minikube
kube-system tiller-deploy-f9b8476d-rkbdl 0/1 Running 0 12s 172.17.0.3 minikube
Docker:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3830f1e78003 4689081edb10 "/storage-provisioner" 2 seconds ago Up 2 seconds k8s_storage-provisioner_storage-provisioner_kube-system_fd503fe6-9353-11e8-abde-000c29909dc8_2
5c02d0ed225a e94d2f21bc0c "/dashboard --insecu…" 21 seconds ago Up 20 seconds k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-rqlfr_kube-system_fd3e5bf5-9353-11e8-abde-000c29909dc8_1
59a7ac87d868 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_storage-provisioner_kube-system_fd503fe6-9353-11e8-abde-000c29909dc8_0
a96e978f1ca2 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kubernetes-dashboard-5498ccf677-rqlfr_kube-system_fd3e5bf5-9353-11e8-abde-000c29909dc8_0
48a62339d58f k8s.gcr.io/etcd-amd64 "etcd --advertise-cl…" About a minute ago Up About a minute k8s_etcd_etcd-minikube_kube-system_4db1aafced3cb396e2392e93c013392f_0
367f979ab90e k8s.gcr.io/kube-apiserver-amd64 "kube-apiserver --ad…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_a0c54ff92bcc25d861bc3ebf744256fa_0
a8f245f67916 k8s.gcr.io/kube-addon-manager "/opt/kube-addons.sh" 2 minutes ago Up 2 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
d389e081387e k8s.gcr.io/kube-scheduler-amd64 "kube-scheduler --ad…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
ade31ddd00d8 k8s.gcr.io/kube-controller-manager-amd64 "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_274b8507dd36aaa98b28e15ffc7d29d5_0
8aa0e559940d k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
7befc6140cdb k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
fe09a09ed01f k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-minikube_kube-system_a0c54ff92bcc25d861bc3ebf744256fa_0
1028f0641c36 k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-minikube_kube-system_274b8507dd36aaa98b28e15ffc7d29d5_0
4952e1468573 k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-minikube_kube-system_4db1aafced3cb396e2392e93c013392f_0