Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Please provide the following details:
Environment: Homebrew / macOS Sierra 10.12.6
Minikube version (use minikube version): v0.28.0
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.25.1.iso
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver":
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json
What happened:
minikube start after the update hangs for 10 minutes, then fails.
What you expected to happen:
The cluster starts and, hopefully, restores my previous environment (including deployments).
How to reproduce it (as minimally and precisely as possible):
minikube --profile XX stop
brew cask reinstall minikube
minikube --profile XX start --log_dir /Users/me/.minikube/logs --loglevel 0 --vm-driver=hyperkit --memory 6144 --extra-config=apiserver.Authorization.Mode=RBAC --kubernetes-version v1.9.4
Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.9.4
Downloading kubelet v1.9.4
Finished Downloading kubeadm v1.9.4
Finished Downloading kubelet v1.9.4
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0629 10:48:17.561126 59862 start.go:299] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
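For anyone digging into this, the kube-proxy pods that start waits on can be inspected directly, assuming kubectl is pointed at the minikube context (a sketch):
kubectl config use-context minikube
kubectl -n kube-system get pods -l k8s-app=kube-proxy   # the label selector from the log below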
Output of minikube logs (if applicable):
Log file created at: 2018/06/29 10:37:21
Running on machine: my-MacBook-Pro-2
Binary: Built with gc go1.9.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0629 10:37:21.534775 59862 cluster.go:73] Skipping create...Using existing machine configuration
I0629 10:37:21.613741 59862 cluster.go:82] Machine state: Stopped
I0629 10:37:57.388732 59862 ssh_runner.go:57] Run: sudo rm -f /etc/docker/ca.pem
I0629 10:37:57.393259 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0629 10:37:57.402288 59862 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server.pem
I0629 10:37:57.406755 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0629 10:37:57.415557 59862 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server-key.pem
I0629 10:37:57.419806 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0629 10:38:00.119003 59862 kubeadm.go:214] Container runtime flag provided with no value, using defaults.
I0629 10:38:11.173843 59862 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubeadm
I0629 10:38:11.178797 59862 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0629 10:38:11.356665 59862 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubelet
I0629 10:38:11.376755 59862 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0629 10:38:14.020108 59862 ssh_runner.go:57] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0629 10:38:14.025607 59862 ssh_runner.go:57] Run: sudo mkdir -p /lib/systemd/system
I0629 10:38:14.034484 59862 ssh_runner.go:57] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0629 10:38:14.038851 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0629 10:38:14.048248 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/kubeadm.yaml
I0629 10:38:14.052414 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib
I0629 10:38:14.060590 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/influxGrafana-rc.yaml
I0629 10:38:14.064443 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.072901 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/grafana-svc.yaml
I0629 10:38:14.076879 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.085344 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/influxdb-svc.yaml
I0629 10:38:14.089525 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.097275 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/heapster-rc.yaml
I0629 10:38:14.101334 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.109731 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/heapster-svc.yaml
I0629 10:38:14.113421 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.121348 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/ingress-configmap.yaml
I0629 10:38:14.125062 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.133312 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/ingress-dp.yaml
I0629 10:38:14.137404 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.145839 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/ingress-svc.yaml
I0629 10:38:14.150074 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.158233 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-dp.yaml
I0629 10:38:14.162441 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.170904 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-svc.yaml
I0629 10:38:14.175031 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.183064 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0629 10:38:14.187064 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.195355 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0629 10:38:14.199127 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0629 10:38:14.207200 59862 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml
I0629 10:38:14.211194 59862 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/manifests/
I0629 10:38:14.220131 59862 ssh_runner.go:57] Run:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
I0629 10:38:14.337164 59862 certs.go:47] Setting up certificates for IP: 192.168.64.2
I0629 10:38:14.362578 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.crt
I0629 10:38:14.368629 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.378100 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.key
I0629 10:38:14.381915 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.396455 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.crt
I0629 10:38:14.400023 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.411899 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.key
I0629 10:38:14.416019 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.427430 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.crt
I0629 10:38:14.435294 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.450769 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.key
I0629 10:38:14.458054 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.467578 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.crt
I0629 10:38:14.475324 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.496392 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.key
I0629 10:38:14.500726 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0629 10:38:14.509679 59862 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/kubeconfig
I0629 10:38:14.513918 59862 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube
I0629 10:38:14.522098 59862 config.go:101] Using kubeconfig: /Users/me/.kube/config
I0629 10:38:14.533658 59862 ssh_runner.go:57] Run:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
I0629 10:38:17.574664 59862 kubernetes.go:119] error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.64.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.64.2:8443: getsockopt: connection refused]
[the line above repeats every half second for 10 minutes before the failure]
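The connection-refused errors suggest the apiserver never came up. One way to confirm, using the VM IP from the log (a sketch):
curl -sk https://192.168.64.2:8443/healthz   # should print "ok" once the apiserver is up
minikube ssh docker ps | grep -i apiserver   # is the apiserver container running at all?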
Anything else we need to know:
The workaround is to manually delete at least the previous machine directory and clear the dhcpd_leases file, but this makes updating minikube something to avoid.
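In practice that looks something like the following (a sketch; the profile name XX matches the commands above, and the paths assume the hyperkit driver on macOS):
minikube --profile XX stop
rm -rf ~/.minikube/machines/XX   # delete the previous machine directory
sudo rm /var/db/dhcpd_leases     # clear stale hyperkit DHCP leases
minikube --profile XX start --vm-driver=hyperkit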
Note that the ISO above is still v0.25.1. An updated minikube (v0.28.0) should notice this and either fail with an incompatible machine ISO/version message or (better) update my machine from the new ISO.
Could you remove the config dir (~/.minikube) and try again?
I can confirm minikube works after removing ~/.minikube
Env: minikube 0.28.0 + VirtualBox 5.2.12 on macOS 10.13.5
The issue appears to be specific to the hyperkit driver. I removed ~/.minikube and it still could not start. (I also removed /var/db/dhcpd_leases, with no effect.)
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Please provide the following details:
Environment:
Minikube version (use minikube version): v0.28.0 (https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-linux-amd64)
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver":
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json
What happened:
When logging in to the machine over SSH with a pts (an interactive terminal), minikube start --vm-driver=none works. However, the same command does not work when the SSH login has no tty (as in a CI pipeline):
E0715 21:51:24.503899 32212 start.go:252] Error updating cluster: starting kubelet: running command:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
: exit status 1
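For reference, a no-tty session like the CI pipeline's can be forced with plain ssh by disabling pseudo-terminal allocation (a sketch; user and host are placeholders):
ssh -T user@centos-host 'minikube start --vm-driver=none'   # -T: no pseudo-terminal, mimicking the ssh2/CI login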
What you expected to happen:
Both should work to set up a local Kubernetes cluster.
How to reproduce it (as minimally and precisely as possible):
Use the ssh2 library to connect to the CentOS machine and run minikube start --vm-driver=none (the session then has a no-tty login).
Output of minikube logs (if applicable):
+ minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
E0715 21:51:24.503899 32212 start.go:252] Error updating cluster: starting kubelet: running command:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
: exit status 1
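When the kubelet fails to start like this, the underlying error is usually visible on the machine itself (a sketch; run on the target host):
sudo systemctl status kubelet --no-pager      # current unit state and last failure
sudo journalctl -u kubelet --no-pager -n 50   # recent kubelet log lines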
Anything else we need to know:
@dna2github - I think your comment represents a different issue. Please open a new one. Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Marking as obsolete, since this shouldn't happen anymore in v0.33.
Still the same with v0.34.1.
I0227 18:25:12.625477 13250 kubernetes.go:121] error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.102:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.99.102:8443: connect: connection refused]
Could you remove the config dir (~/.minikube) and try again?
With v0.34.1, this seems to fix it.
And, backup & restore ~/.minikube/cache by hand, would be helpful to avoid download the caches again.