When I run it, it always hangs forever and sometimes throws a 503.
garretsidzaka@$$$$$:~$ sudo minikube dashboard
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
^C
garretsidzaka@$$$$$:~$
garretsidzaka@$$$$$$:/usr/bin$ minikube version
minikube version: v1.4.0
commit: 7969c25
Ubuntu VM 18.04
vm-driver=none
ping?
@GarretSidzaka
Thank you for sharing your experience! If you don't mind, could you please provide:
- the exact commands you ran
- the full output of the minikube dashboard command
- the output of sudo minikube logs
- your operating system and version
This will help us isolate the problem further. Thank you!
Additionally, I wonder: do you use a corporate network, VPN, or proxy?
Bullet one:
sudo minikube start
sudo minikube dashboard
Bullet two:
garretsidzaka@$$$$$:~$ sudo minikube dashboard
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
^C
garretsidzaka@$$$$$:~$
Bullet three:
garretsidzaka@cloudstack:/$ sudo minikube logs
*
X Error getting config: stat /home/garretsidzaka/.minikube/profiles/minikube/config.json: no such file or directory
*
Bullet four:
Ubuntu 18.04.3
The answer to your last question is no.
@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8
@GarretSidzaka I am curious: when you said it hangs forever, did you mean the terminal is stuck at this?
medya@~/workspace/minikube (clean_cron) $ minikube dashboard
🔌 Enabling dashboard ...
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
🎉 Opening http://127.0.0.1:65504/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
If that is the case, then that is the expected behaviour: minikube will hang there and run a webserver so that you can access the dashboard in your browser.
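(Side note for headless setups: minikube can also print just the dashboard URL and leave the proxy running, so you can hit it with curl from another terminal. A minimal sketch:)
sudo minikube dashboard --url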
@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-66-generic x86_64)
Support: https://ubuntu.com/advantage
System information as of Wed Nov 6 00:33:39 UTC 2019
System load: 0.0 Processes: 207
Usage of /: 15.0% of 72.83GB Users logged in: 0
Memory usage: 48% IP address for eth0: 66.55.156.94
Swap usage: 0% IP address for docker0: 172.17.0.1
Kata Containers are now fully integrated in Charmed Kubernetes 1.16!
Yes, charms take the Krazy out of K8s Kata Kluster Konstruction.
Canonical Livepatch is available for installation.
6 packages can be updated.
0 updates are security updates.
Last login: Mon Nov 4 23:29:43 2019 from 71.209.166.96
garretsidzaka@cloudstack:~$ sudo minikube start --alsologtostderr -v=8
[sudo] password for garretsidzaka:
I1106 00:34:13.321761 183139 notify.go:125] Checking for updates...
I1106 00:34:13.502222 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/last_update_check" with filemode -rw-r--r--
I1106 00:34:13.503829 183139 start.go:236] hostinfo: {"hostname":"cloudstack","uptime":931694,"bootTime":1572068759,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-66-generic","virtualizationSystem":"","virtualizationRole":"","hostid":"73fa3e87-061f-4111-9c29-9a2074fc4bec"}
I1106 00:34:13.504201 183139 start.go:246] virtualization:
! minikube v1.4.0 on Ubuntu 18.04
I1106 00:34:13.504790 183139 profile.go:66] Saving config to /home/garretsidzaka/.minikube/profiles/minikube/config.json ...
I1106 00:34:13.504883 183139 cache_images.go:295] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1106 00:34:13.504927 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1106 00:34:13.504952 183139 cache_images.go:297] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 74.699µs
I1106 00:34:13.505046 183139 cache_images.go:82] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1106 00:34:13.505085 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
I1106 00:34:13.505125 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
I1106 00:34:13.505139 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 completed in 66.699µs
I1106 00:34:13.505171 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
I1106 00:34:13.505210 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
I1106 00:34:13.505287 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
I1106 00:34:13.505300 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 completed in 94.698µs
I1106 00:34:13.505363 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
I1106 00:34:13.505403 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
I1106 00:34:13.505445 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
I1106 00:34:13.505460 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 completed in 60.799µs
I1106 00:34:13.505511 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
I1106 00:34:13.505550 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
I1106 00:34:13.505593 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
I1106 00:34:13.505724 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 completed in 178.797µs
I1106 00:34:13.505797 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
I1106 00:34:13.505751 183139 cache_images.go:295] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1106 00:34:13.505905 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1106 00:34:13.505950 183139 cache_images.go:297] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 204.497µs
I1106 00:34:13.505991 183139 cache_images.go:82] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
I1106 00:34:13.505796 183139 cache_images.go:295] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
I1106 00:34:13.506091 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 exists
I1106 00:34:13.506134 183139 cache_images.go:297] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 completed in 354.195µs
I1106 00:34:13.506175 183139 cache_images.go:82] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 succeeded
I1106 00:34:13.505646 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1106 00:34:13.506260 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1106 00:34:13.506301 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 654.591µs
I1106 00:34:13.505666 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1106 00:34:13.506373 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1106 00:34:13.506390 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 729.289µs
I1106 00:34:13.506406 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1106 00:34:13.505715 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1106 00:34:13.505775 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I1106 00:34:13.506562 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 exists
I1106 00:34:13.506581 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 completed in 813.489µs
I1106 00:34:13.506597 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 succeeded
I1106 00:34:13.505785 183139 cache_images.go:295] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1106 00:34:13.506642 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1106 00:34:13.506656 183139 cache_images.go:297] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 872.988µs
I1106 00:34:13.506662 183139 cache_images.go:82] CacheImage k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1106 00:34:13.505622 183139 cache_images.go:295] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1106 00:34:13.506421 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1106 00:34:13.506690 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1106 00:34:13.506719 183139 cache_images.go:297] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 1.094384ms
I1106 00:34:13.506734 183139 cache_images.go:82] CacheImage k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1106 00:34:13.506449 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1106 00:34:13.506764 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 1.054385ms
I1106 00:34:13.506773 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1106 00:34:13.506785 183139 cache_images.go:89] Successfully cached all images.
I1106 00:34:13.518345 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/profiles/minikube/config.json" with filemode -rw-------
I1106 00:34:13.518649 183139 cluster.go:93] Machine does not exist... provisioning new machine
I1106 00:34:13.518671 183139 cluster.go:94] Provisioning machine with config: {KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.4.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true}
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
I1106 00:34:19.223625 183139 exec_runner.go:40] Run: pgrep kubelet && sudo systemctl stop kubelet
W1106 00:34:19.257686 183139 kubeadm.go:615] unable to stop kubelet: running command: pgrep kubelet && sudo systemctl stop kubelet: exit status 1
I1106 00:34:19.258065 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm
I1106 00:34:19.258089 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubeadm -> /var/lib/minikube/binaries/v1.16.0/kubeadm
I1106 00:34:19.258075 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I1106 00:34:19.258211 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubelet -> /var/lib/minikube/binaries/v1.16.0/kubelet
I1106 00:34:19.862786 183139 exec_runner.go:40] Run: sudo systemctl daemon-reload && sudo systemctl start kubelet
I1106 00:34:20.089159 183139 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:
I1106 00:34:20.089310 183139 certs.go:79] Setting up /home/garretsidzaka/.minikube for IP: 66.55.156.94
I1106 00:34:20.089412 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/client.crt with IP's: []
I1106 00:34:20.099224 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/client.crt ...
I1106 00:34:20.099264 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.crt" with filemode -rw-r--r--
I1106 00:34:20.099549 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/client.key ...
I1106 00:34:20.099565 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.key" with filemode -rw-------
I1106 00:34:20.099683 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/apiserver.crt with IP's: [66.55.156.94 10.96.0.1 10.0.0.1]
I1106 00:34:20.107698 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/apiserver.crt ...
I1106 00:34:20.107730 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.crt" with filemode -rw-r--r--
I1106 00:34:20.107995 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/apiserver.key ...
I1106 00:34:20.108028 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.key" with filemode -rw-------
I1106 00:34:20.108174 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/proxy-client.crt with IP's: []
I1106 00:34:20.113461 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/proxy-client.crt ...
I1106 00:34:20.114504 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1106 00:34:20.115320 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/proxy-client.key ...
I1106 00:34:20.115358 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.key" with filemode -rw-------
I1106 00:34:20.115979 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1106 00:34:20.116130 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1106 00:34:20.116470 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1106 00:34:20.116543 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1106 00:34:20.116596 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1106 00:34:20.116649 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1106 00:34:20.116676 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1106 00:34:20.116701 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1106 00:34:20.125897 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1106 00:34:20.131832 183139 exec_runner.go:40] Run: which openssl
I1106 00:34:20.133923 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I1106 00:34:20.142240 183139 exec_runner.go:51] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I1106 00:34:20.167041 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/b5213941.0'
if that is the case, then that is the expected behaviour: minikube will hang there and run a webserver so that you can access the dashboard in your browser
I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now:
garretsidzaka@cloudstack:~$ sudo minikube dashboard
The point of the dashboard is actually a UI experience for Kubernetes; if you don't have a browser, you might not actually need the dashboard.
Do you mind sharing the output of curl for the dashboard URL in a separate terminal?
BTW, do you happen to use a VPN or proxies?
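(For reference, the curl check being asked for would look roughly like this in a second terminal while the dashboard command is still hanging; <proxy-port> stands for whatever port kubectl proxy reported in its "Starting to serve on 127.0.0.1:..." line:)
curl -v http://127.0.0.1:<proxy-port>/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/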
The output of this command would be helpful for us to help with debugging:
sudo minikube dashboard --alsologtostderr -v=1
It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.
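(One way to capture the full output for sharing, since --alsologtostderr writes to stderr; just a suggestion, not part of the original request:)
sudo minikube dashboard --alsologtostderr -v=1 2>&1 | tee dashboard.log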
No unusual network. This is a front-end bridged VM, production style. This network port has a static IP that is IANA-assigned, not NAT. There is no proxy or VPN. And yes, it's very nice to have this kind of research VM.
Yes, and when I click such a link (after obviously replacing 127.0.0.1 with the actual IP address), it gives a 503.
Attached is the log you requested:
teraterm.log
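(A note on the remote-access attempt above: the kubectl proxy that minikube dashboard launches binds to 127.0.0.1, so replacing 127.0.0.1 with the machine's public IP is not expected to reach it. To deliberately expose the proxy you would run it yourself, for example as below; this is only a sketch, and it is insecure on a public interface because the proxy is unauthenticated:)
kubectl proxy --port=8001 --address=0.0.0.0 --accept-hosts='.*'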
Ping :3
Hey @GarretSidzaka -- looks like there are a couple of related issues (#4352 and #4749).
I see you already commented on #4352, and I'm guessing none of those solutions fixed your issue?
@tstromberg
sudo minikube dashboard --alsologtostderr -v=1
I1113 18:16:48.105064 30332 none.go:257] checking for running kubelet ...
I1113 18:16:48.105097 30332 exec_runner.go:42] (ExecRunner) Run: systemctl is-active --quiet service kubelet
🤔 Verifying dashboard health ...
I1113 18:16:48.136357 30332 service.go:236] Found service: &Service{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kubernetes-dashboard,GenerateName:,Namespace:kubernetes-dashboard,SelfLink:/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard,UID:50ffb80e-1e61-41a3-8ee6-15fec08c9d0c,ResourceVersion:384,Generation:0,CreationTimestamp:2019-11-13 18:08:36 +0000 GMT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,k8s-app: kubernetes-dashboard,kubernetes.io/minikube-addons: dashboard,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ServiceSpec{Ports:[{ TCP 80 {0 9090 } 0}],Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.8.235,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[],},},}
🚀 Launching proxy ...
I1113 18:16:48.136574 30332 dashboard.go:167] Executing: /usr/bin/kubectl [/usr/bin/kubectl --context minikube proxy --port=0]
I1113 18:16:48.136855 30332 dashboard.go:172] Waiting for kubectl to output host:port ...
I1113 18:16:48.294744 30332 dashboard.go:190] proxy stdout: Starting to serve on 127.0.0.1:43299
🤔 Verifying proxy health ...
I1113 18:16:48.304599 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:48 GMT]] Body:0xc00035d840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa800 TLS:<nil>}
I1113 18:16:49.412722 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:49 GMT]] Body:0xc000297100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ed00 TLS:<nil>}
I1113 18:16:51.579128 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:51 GMT]] Body:0xc0002971c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ee00 TLS:<nil>}
I1113 18:16:54.225172 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:54 GMT]] Body:0xc0003cf940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000486300 TLS:<nil>}
I1113 18:16:57.412075 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:57 GMT]] Body:0xc00035d9c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa900 TLS:<nil>}
I1113 18:17:02.101728 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:17:02 GMT]] Body:0xc0003cfa80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aaa00 TLS:<nil>}
I1113 18:17:11.123045 30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:
And if I hit the URL which is giving 503, I get:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"http:kubernetes-dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
}
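(That "no endpoints available" message means the kubernetes-dashboard Service has no ready pods behind it. A quick way to confirm, as a sketch:)
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard
kubectl -n kubernetes-dashboard get pods -o wide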
Thanks for the update @sebinsua
Based on what you've shared, I now suspect that part of the issue may be that we are attempting to check the URL before checking if the pod is actually running. Why the dashboard service isn't healthy though, we still need to investigate. Do you mind helping us root cause this?
Once you see the dashboard hanging at "Verifying proxy health ...", can you get the output of the following commands and share it with us?
kubectl get po -n kubernetes-dashboard --show-labels
kubectl describe po -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard
kubectl logs -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard
Depending on what you share, I believe that part of the solution may be to insert a new health check that blocks until the pod is in 'Running' state, by calling client.CoreV1().Pods(ns).List() and checking that pod.Status.Phase == core.PodRunning before checking for the port here:
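(A rough command-line equivalent of that proposed check, for anyone who wants to verify by hand before a fix lands; it assumes the standard dashboard label used elsewhere in this thread:)
kubectl -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s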
What the?
These results were taken after the bug was replicated, in a separate SSH terminal. At the same time, the proxy message was hanging in the other SSH window.
This is what was eventually outputted in the main window:
X http://127.0.0.1:42337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503
garretsidzaka@cloudstack:~$
Interesting. Do you mind adding the output of 'kubectl describe node' as well? Thanks!
On Wed, Nov 13, 2019, 6:19 PM GarretSidzaka wrote:
output.txt
https://github.com/kubernetes/minikube/files/3844296/output.txt
sudo kubectl describe node
[sudo] password for garretsidzaka:
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 06 Nov 2019 00:34:43 +0000
Taints:             <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 66.55.156.94
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 76366628Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8220652Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 70379484249
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8118252Ki
pods: 110
System Info:
Machine ID: b2bbbdb30f0c427595b8a91758ac298c
System UUID: 73FA3E87-061F-4111-9C29-9A2074FC4BEC
Boot ID: 369c5c7e-06d5-46e9-87f5-b597ebadce65
Kernel Version: 4.15.0-66-generic
OS Image: Ubuntu 18.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.7
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5644d7b6d9-hpm6t 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system coredns-5644d7b6d9-m2rpm 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%) 8d
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-proxy-xcw6z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-m2sn9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard kubernetes-dashboard-57f4cb4545-vkwpj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 755m (18%) 0 (0%)
memory 190Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
garretsidzaka@cloudstack:~$
hi :heart_eyes:
Has this issue been solved? I got the same problem.
thoughts on this issue?
I have the same issue here.
minikube version
minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff
cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Reopening issue. I have given up hope of making minikube work on Ubuntu, but it seems others are still trying.
# check pods
kubectl -n kubernetes-dashboard get pods
# check pod Events, dashboard-metrics-scraper-7b64584c5c-lf82k is a pod name
kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-7b64584c5c-lf82k
If you get the error below:
Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.22:53724->192.168.64.1:53: read: connection refused
the problem may be related to https://github.com/kubernetes/minikube/issues/3036
Check whether the machine has software listening on port 53.
If so, stop that software, then restart the minikube cluster.
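(One way to check for a local listener on port 53; on Ubuntu 18.04 this is often systemd-resolved or a local dnsmasq. A sketch only, since the exact service name depends on what you find:)
sudo ss -lntup | grep -w ':53'
# if a local resolver is interfering, stop that service, then recreate the cluster:
minikube delete
minikube start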
@jk2K @GarretSidzaka sorry to hear about the bad experience; do you mind sharing the minikube version and also the output of
minikube start --wait=true --alsologtostderr
I'm a newbie.
I have the exact same symptoms - it hangs for a while on the "Verifying proxy health" step and then finally errors out in a 503:
llorllale:~$ minikube dashboard
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
💣 http://127.0.0.1:44627/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503
llorllale:~$
There are two pods under the kubernetes-dashboard namespace:
llorllale:~$ kubectl -n kubernetes-dashboard get pods
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-84bfdf55ff-scw95 0/1 Pending 0 25m
kubernetes-dashboard-bc446cc64-5dkxx 0/1 Pending 0 25m
llorllale:~$
Describing both of these reveals events with warnings:
llorllale:~$ kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-84bfdf55ff-scw95
(...removed extra bits for brevity...)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Warning FailedScheduling 24m (x8 over 26m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Describing my minikube node I see:
llorllale:~$ kubectl describe node minikube
(...removed extra bits for brevity...)
Taints: node.kubernetes.io/disk-pressure:NoSchedule
I tried removing the taint:
llorllale:~$ kubectl taint nodes minikube node.kubernetes.io/disk-pressure:NoSchedule-
node/minikube untainted
... but that didn't work. The new pods failed scheduling with the same problem. I'm still left wondering if this is the issue?
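(For anyone hitting the same taint: the kubelet keeps re-adding node.kubernetes.io/disk-pressure for as long as the node's DiskPressure condition stays true, so removing the taint by hand will not stick until disk space is freed. A quick check, as a sketch:)
kubectl get node minikube -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}'
df -h /
# freeing space (for example with docker system prune) and waiting for the condition to clear is what actually removes the taint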
docker system prune may have solved my issue?
I tried minikube dashboard immediately after docker system prune and ran into the same issue. However, a few minutes later, minikube dashboard worked.
I have the same issue, but the two pods under the kubernetes-dashboard namespace are running fine.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned kubernetes-dashboard/kubernetes-dashboard-696dbcc666-gbgwl to minikube-m03
Normal Pulled 25m kubelet, minikube-m03 Container image "kubernetesui/dashboard:v2.0.0" already present on machine
Normal Created 25m kubelet, minikube-m03 Created container kubernetes-dashboard
Normal Started 25m kubelet, minikube-m03 Started container kubernetes-dashboard
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-gxdwz to minikube-m04
Normal Pulled 26m kubelet, minikube-m04 Container image "kubernetesui/metrics-scraper:v1.0.2" already present on machine
Normal Created 26m kubelet, minikube-m04 Created container dashboard-metrics-scraper
Normal Started 26m kubelet, minikube-m04 Started container dashboard-metrics-scraper
So I tried port-forward (kubectl -n kubernetes-dashboard port-forward 9090:9090), and I can access dashboard via http://127.0.0.1:9090/
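(As written, that port-forward command is missing a resource to target; kubectl port-forward expects a pod, deployment, or service name. Assuming the standard dashboard labels seen earlier in this thread, the full form would be roughly:)
POD=$(kubectl -n kubernetes-dashboard get pod -l k8s-app=kubernetes-dashboard -o jsonpath='{.items[0].metadata.name}')
kubectl -n kubernetes-dashboard port-forward "$POD" 9090:9090
# add --address 0.0.0.0 to reach it from outside the VM, keeping in mind the dashboard is then exposed without authentication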
Changing this to a feature so that users can see why a dashboard deployment is blocked. There are many possibilities outlined here, including DiskPressure.
I support this feature being implemented in a way that, for some addons, before "opening" or "enabling" we check the node conditions so we can give better errors to users.
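(The node-condition check described above can already be approximated from the command line; a minimal sketch that shows the conditions, DiskPressure included, for every node:)
kubectl describe nodes | grep -A 8 'Conditions:'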
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale