Minikube: `minikube dashboard` failed just after `minikube start`

Created on 12 Mar 2018  ·  15 Comments  ·  Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.25.0
  • OS (e.g. from /etc/os-release): Mac OS X 10.13.3 (BuildVersion: 17D102)
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.25.1.iso
  • Install tools: brew cask install minikube
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver": 
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json

minikube version: v0.25.0

OS:
cat: /etc/os-release: No such file or directory

VM driver:
    "DriverName": "virtualbox",

ISO version
        "Boot2DockerURL": "file:///Users/ichen/.minikube/cache/iso/minikube-v0.25.1.iso",

What happened:
This is my first time using minikube; I was following the “Introduction to Kubernetes” course on edX.org.
After starting minikube, I got the error below while running minikube dashboard:

$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.102

$ minikube dashboard
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: services "kubernetes-dashboard" not found
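
A quick sanity check for this error is to confirm whether the service and pods it refers to exist at all; for example, the service name below is taken straight from the error message, and the label is the one the dashboard addon puts on its pods:

# Does the service named in the error exist?
kubectl get svc kubernetes-dashboard -n kube-system
# Are the dashboard pods present and running?
kubectl get pods -n kube-system -l app=kubernetes-dashboard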

What you expected to happen:
It would open a new tab in my web browser, displaying the Kubernetes dashboard.

How to reproduce it (as minimally and precisely as possible):
I tried the steps below again and got the same error:

  1. minikube delete
  2. minikube start
  3. minikube dashboard

Output of minikube logs (if applicable):
The log output is quite large, so I uploaded it as an attachment:
logs.txt

Anything else we need to know:

Labels: co/dashboard, kind/support, lifecycle/rotten, long-term-support, priority/awaiting-more-evidence, priority/important-longterm

Most helpful comment

I found the reason: it's the f**king GFW of mainland China!

I traveled to HK today and tried it again in the hotel; everything works now!

All 15 comments

Did you enable the dashboard addon?

> minikube addons list
# Looking for this
...
- dashboard: enabled
...

How to enable

> minikube addons enable dashboard

Then re-create the minikube VM: delete it and start it again.
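
Putting it together, the whole sequence looks like this (a sketch of the exact steps described above):

minikube addons enable dashboard
minikube delete
minikube start
minikube dashboard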

@syndbg thanks for your reply, the "dashboard" addon is already enabled. And I have deleted and started a couple of times, but still get the same error.

minikube addons list
- addon-manager: enabled
- coredns: disabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: disabled
- ingress: disabled
- kube-dns: enabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled

You need to run the following commands to check whether the dashboard addon is running correctly.

kubectl get pods -n kube-system
kubectl describe pod <dashboard-addon-name> -n kube-system
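
If you are not sure of the generated pod name, you can narrow the list first (a small convenience on top of the commands above; grep is assumed available in your shell):

# Find the dashboard pod's generated name
kubectl get pods -n kube-system | grep dashboard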

I found the reason: it's the f**king GFW of mainland China!

I traveled to HK today and tried it again in the hotel; everything works now!

I am still having issues while trying to launch the dashboard.
Below is the output of the two commands above.

Running the setup on Windows 10.

kubectl get pods -n kube-system
NAME                          READY   STATUS             RESTARTS   AGE
kube-addon-manager-minikube   1/1     Running            1          1d
kube-dns-855ff7856c-7hlwk     2/3     CrashLoopBackOff   16         1d
kubernetes-dashboard-tsvx8    0/1     CrashLoopBackOff   12         1d

kubectl describe pod kubernetes-dashboard-tsvx8 -n kube-system
Name:           kubernetes-dashboard-tsvx8
Namespace:      kube-system
Node:           minikube/192.168.99.100
Start Time:     Sat, 21 Apr 2018 13:46:02 +0530
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app=kubernetes-dashboard
                version=v1.7.0
Annotations:
Status:         Running
IP:             172.17.0.3
Controlled By:  ReplicationController/kubernetes-dashboard
Containers:
  kubernetes-dashboard:
    Container ID:   docker://8cca8fba67c981390641a5dbaf00d4575f7c34512207bb239b88f99a0e2b4267
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0
    Image ID:       docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:c94b57ce6849365033203a00ef5cfaaf92319bd5ff311a62b17cd9f6a3b69d83
    Port:           9090/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 22 Apr 2018 20:27:54 +0530
      Finished:     Sun, 22 Apr 2018 20:27:54 +0530
    Ready:          False
    Restart Count:  12
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jjtqj (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-jjtqj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jjtqj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              1d                 default-scheduler  Successfully assigned kubernetes-dashboard-tsvx8 to minikube
  Normal   SuccessfulMountVolume  1d                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-jjtqj"
  Normal   Pulling                1d                 kubelet, minikube  pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0"
  Normal   Pulled                 1d                 kubelet, minikube  Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0"
  Normal   Created                1d (x4 over 1d)    kubelet, minikube  Created container
  Normal   Started                1d (x4 over 1d)    kubelet, minikube  Started container
  Warning  BackOff                1d (x10 over 1d)   kubelet, minikube  Back-off restarting failed container
  Normal   Pulled                 1d (x4 over 1d)    kubelet, minikube  Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0" already present on machine
  Normal   SuccessfulMountVolume  14m                kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-jjtqj"
  Normal   SandboxChanged         14m                kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 13m (x4 over 14m)  kubelet, minikube  Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0" already present on machine
  Normal   Created                13m (x4 over 14m)  kubelet, minikube  Created container
  Normal   Started                13m (x4 over 14m)  kubelet, minikube  Started container
  Warning  BackOff                4m (x52 over 14m)  kubelet, minikube  Back-off restarting failed container
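
For a CrashLoopBackOff like this, the container's own logs usually say why it exits right away; for example, using the pod name from the output above:

kubectl logs kubernetes-dashboard-tsvx8 -n kube-system
# --previous shows logs from the last crashed container instance
kubectl logs kubernetes-dashboard-tsvx8 -n kube-system --previous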

Just to update that it's happening only when I start minikube with Kubernetes version 1.9.0. If I start with the default, i.e. 1.8.0, it works.
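
For anyone trying to reproduce that version dependence, minikube can pin the Kubernetes version at start time; a sketch based on the two versions mentioned above:

minikube delete
minikube start --kubernetes-version v1.9.0   # the version that fails here
minikube delete
minikube start --kubernetes-version v1.8.0   # the default, which works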

@deepforu47 I suggest you create a new issue, since this one is already closed; I think people will pay less attention to it.

For a workaround for the GFW, log in to the minikube virtual machine and set up HTTP/HTTPS proxy environment variables for Docker (https://stackoverflow.com/questions/23111631/cannot-download-docker-images-behind-a-proxy), and then everything should work fine.
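
Alternatively, minikube can pass proxy settings to the Docker daemon at start time via its --docker-env flag, which avoids editing files inside the VM by hand; a sketch, where the proxy host and port are placeholders you must replace with your own:

minikube delete
minikube start \
  --docker-env HTTP_PROXY=http://<proxy-host>:<proxy-port> \
  --docker-env HTTPS_PROXY=http://<proxy-host>:<proxy-port> \
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.99.0/24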

minikube version v1.2.0 has the same bug!!!

minikube version v1.8.1 has the same error on Ubuntu 18 on DigitalOcean :-(

minikube version v1.11.0 has the same error on macOS Catalina :/

Hey @YassineHk -- could you (or anybody seeing this issue with v1.11.0) please provide the following:

The output of:

minikube addons list
minikube dashboard --alsologtostderr

Thank you!

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

This bug has been quiet for a few months now; I'm hoping it's been resolved in newer versions of minikube. I'm going to go ahead and close it for now. Please comment if you're still experiencing this and I will reopen!
