Minikube: Cannot access minikube dashboard

Created on 10 Jul 2017 · 35 Comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? : BUG REPORT

Minikube version : v0.20.0

Environment:

  • OS (e.g. from /etc/os-release): Ubuntu 12.04.5 LTS
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "virtualbox"
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///home/nszig/.minikube/cache/iso/minikube-v0.20.0.iso"
  • Install tools:
  • Others:

What happened:
I have installed minikube and kubectl on Ubuntu. However, I cannot access the dashboard either through the CLI or through the GUI.
http://127.0.0.1:8001/ui gives the error below:
```
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
```

Running `minikube dashboard` from the CLI also fails to open the dashboard.

What you expected to happen:
I should be able to view the Kubernetes dashboard.

How to reproduce it (as minimally and precisely as possible):
Try `minikube dashboard` on Ubuntu 12.04 LTS.

Output:
```
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
.......
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
```

http://127.0.0.1:8001/ui
```
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
```
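When the dashboard returns a 503 like this, a first diagnostic step is to check whether the Service actually has endpoints and why the backing pod is not ready. This is a sketch against a live cluster; label selectors and resource names match the defaults seen in this thread but may differ on other setups.

```shell
# Does the dashboard Service have any ready endpoints? An empty ENDPOINTS
# column matches the "no endpoints available" error above.
kubectl -n kube-system get endpoints kubernetes-dashboard

# Inspect the dashboard pod's events for the underlying failure
# (image pull errors, sandbox creation failures, etc.).
kubectl -n kube-system describe pod -l k8s-app=kubernetes-dashboard

# Recent events in kube-system, oldest first
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp
```

The `describe` output's Events section usually names the root cause directly, as it does later in this thread.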

Anything else we need to know:
kubectl version:

```
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"dirty", BuildDate:"2017-06-22T04:31:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```

minikube logs also reports the errors below:
```
.....
Jul 10 08:46:12 minikube localkube[3237]: I0710 08:46:12.901880 3237 kuberuntime_manager.go:458] Container {Name:php-redis Image:gcr.io/google-samples/gb-frontend:v4 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:GET_HOSTS_FROM Value:dns ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:} s:100m Format:DecimalSI} memory:{i:{value:104857600 scale:0} d:{Dec:} s:100Mi Format:BinarySI}]} VolumeMounts:[{Name:default-token-gqtvf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 10 08:46:14 minikube localkube[3237]: E0710 08:46:14.139555 3237 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority
.....
```
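The `x509: certificate signed by unknown authority` line points at TLS interception (for example a corporate proxy re-signing certificates) between the VM and gcr.io. One way to confirm this, sketched below, is to reproduce the pull from inside the VM; this assumes your minikube version supports passing a command to `minikube ssh` and that `openssl` is available in the VM image.

```shell
# Try pulling the sandbox image directly from inside the minikube VM;
# if a proxy re-signs the certificate, the same x509 error appears here.
minikube ssh "docker pull gcr.io/google_containers/pause-amd64:3.0"

# Inspect the certificate chain gcr.io presents to the VM; an unexpected
# issuer in the output confirms interception.
minikube ssh "openssl s_client -connect gcr.io:443 -showcerts < /dev/null"
```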

Labels: kind/bug, lifecycle/rotten

All 35 comments

Can you post the output of `kubectl get pods --all-namespaces`? And possibly a `kubectl describe` on the dashboard pod if you see it?

```
spnzig@rd:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube             0/1       ContainerCreating   0          16h
kube-system   kubernetes-dashboard-2039414953-5mww0   0/1       ContainerCreating   0          16h
```

```
Name:           kubernetes-dashboard-2039414953-czptd
Namespace:      kube-system
Node:           minikube/192.168.99.102
Start Time:     Fri, 14 Jul 2017 09:31:58 +0530
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=2039414953
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-2039414953","uid":"2eb39682-6849-11e7-8...
Status:         Pending
IP:
Created By:     ReplicaSet/kubernetes-dashboard-2039414953
Controlled By:  ReplicaSet/kubernetes-dashboard-2039414953
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
    Image ID:
    Port:           9090/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-12gdj (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kubernetes-dashboard-token-12gdj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-12gdj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node-role.kubernetes.io/master:NoSchedule
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason      Message
  ---------  --------  -----  ----               -------------  ----     ------      -------
  1h         11s       443    kubelet, minikube                 Warning  FailedSync  Error syncing pod, skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-2039414953-czptd_kube-system(2eb57d9b-6849-11e7-8a56-080027206461)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-2039414953-czptd_kube-system(2eb57d9b-6849-11e7-8a56-080027206461)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority"
```

I even upgraded the Ubuntu host to 16.04 LTS, and the same issue persists.
All the pods are stuck in the ContainerCreating state, and when services are invoked I get `Waiting, endpoint for service is not ready yet...`

Are you behind a proxy? It looks like the VM isn't able to reach out to the internet and pull the images. If so, make sure you set the HTTP_PROXY and HTTPS_PROXY variables via `minikube start --docker-env`.
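For example (the proxy host and port below are placeholders to adapt to your environment):

```shell
# Pass the proxy into the Docker daemon inside the minikube VM so it can
# pull images from gcr.io. proxy.example.com:3128 is a placeholder.
minikube start \
  --docker-env HTTP_PROXY=http://proxy.example.com:3128 \
  --docker-env HTTPS_PROXY=http://proxy.example.com:3128 \
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.99.0/24
```

Including the VM's own subnet in NO_PROXY keeps in-cluster traffic from being routed through the proxy.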

Though I didn't have a specific proxy set, there were a few network accessibility issues blocking the image from being pulled. I was able to resolve this and get the dashboard up and running by using a VPN. Thanks for your help.

Using a VPN for the install actually resolved the issue. But for some reason I had to delete the minikube instance and restart it. Now, even though the kubernetes-dashboard pod has status "Running", I get the same error: "Waiting, endpoint for service is not ready yet".

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Still getting this problem.

I am getting the same issue.
In spite of setting the proxy while starting minikube, I am still getting this error. Please find the attachment showing the problem.

+1

+1

```
# loki @ Loki-iMac in ~/.minikube [18:12:24] C:146
$ minikube version
minikube version: v0.25.2

# loki @ Loki-iMac in ~/.minikube [18:13:03]
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
default       kube-nginx9999-64f48c6857-t4mbk         0/1       ContainerCreating   0          3h
kube-system   kube-addon-manager-minikube             0/1       ContainerCreating   0          4h
kube-system   kubernetes-dashboard-5bd6f767c7-bvjmp   0/1       ContainerCreating   0          6m
```

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Same problem here

```
# minikube version
minikube version: v0.27.0
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running   0          2h
kube-system   kube-addon-manager-minikube             1/1       Running   0          2h
kube-system   kube-apiserver-minikube                 1/1       Running   0          2h
kube-system   kube-controller-manager-minikube        1/1       Running   0          2h
kube-system   kube-dns-86f4d74b45-pt96w               3/3       Running   0          2h
kube-system   kube-proxy-t2j22                        1/1       Running   0          2h
kube-system   kube-scheduler-minikube                 1/1       Running   0          2h
kube-system   kubernetes-dashboard-7d5dcdb6d9-4nz6c   1/1       Running   0          1h
kube-system   storage-provisioner                     1/1       Running   0          2h
```






```
# minikube dashboard --logtostderr --v=2
I0517 16:51:54.368688   29842 notify.go:109] Checking for updates...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
```






```
# kubectl describe rc kubernetes-dashboard --namespace=kube-system
Error from server (NotFound): replicationcontrollers "kubernetes-dashboard" not found
```
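One likely explanation for the NotFound error, assuming the behavior of newer minikube releases where the dashboard is managed by a Deployment rather than a ReplicationController, is that `rc` is simply the wrong resource type here:

```shell
# In recent minikube versions the dashboard is a Deployment, not an RC:
kubectl -n kube-system describe deployment kubernetes-dashboard

# Or list everything carrying the dashboard label, whatever controls it:
kubectl -n kube-system get all -l k8s-app=kubernetes-dashboard
```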

+1

same here

+1

+1

+1

+1. Why are there so many minikube issues on Mac?

+1

+1

+1

+1

+1

+1
minikube version: v0.28.0

Dashboard and storage provisioner both remain in the Pending state. Any suggestions?
minikube version v0.28.0 / Windows

```
kube-system   kubernetes-dashboard-5498ccf677-844s7   0/1   Pending   0   8h
kube-system   storage-provisioner                     0/1   Pending   0   8h
```

+1
minikube version: v0.25.0 (windows)

+1 v0.23.0 (mac)

+1

Fixed it by deleting the cluster, deleting minikube completely, and reinstalling 0.28.2 through brew (on Mac).


+1, still getting this.

```
$ minikube version
minikube version: v0.28.2
```

Fedora 28

Same thing, Linux Mint 18.2.
Minikube version: v0.30.0

`minikube delete` followed by `minikube start --vm-driver hyperv` also worked for me under Win 10 Pro.

I fixed the issue by deleting the .kube and .minikube directories from my home directory; maybe previous testing with VirtualBox left some problems there.

Try it ;)
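For anyone trying this reset, the full sequence would look something like the following. Note this wipes all local minikube state, and removing `~/.kube` also discards kubeconfig entries for any other clusters, so use with care:

```shell
# Tear down the VM, remove all cached state and kubeconfig, start fresh.
minikube delete
rm -rf ~/.minikube ~/.kube   # WARNING: removes kubeconfig for ALL clusters
minikube start
```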
