Dashboard: Unable to open dashboard.

Created on 3 Nov 2018 · 36 Comments · Source: kubernetes/dashboard

I'm trying to access the dashboard, but getting the following error on opening http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

Error: 'dial tcp 172.17.0.2:9090: getsockopt: connection refused'
Trying to reach: 'https://172.17.0.2:9090/'

Sometimes I also get:
"no endpoints available for service "kubernetes-dashboard""

I've tried reinstalling VirtualBox, but to no avail. kubectl and minikube are correctly configured, and kubectl is pointing in the right direction.
Also, I haven't installed the Docker app; could that be a reason?
I'm still in school, so I don't really have any professional background in this sector, but any help would be appreciated. Thanks!

triage/needs-information

Most helpful comment

@hiteshsardana99 Came back from vacation and was facing the same issue right now. The problem is that with the URL http://172.19.63.33:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ you're trying to access the service kubernetes-dashboard in the kube-system namespace, but for some reason the service doesn't exist there anymore. Try http://172.19.63.33:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. This resolved it for me.

All 36 comments

Can you provide some more steps to reproduce? What is your minikube version? Which Kubernetes version are you starting?

/triage needs-information

Hi

Reproduced @chirothespearow's issue: after a fresh install on a kubeadm-installed 1.12.2 cluster, I get this error when trying to access the web UI using kubectl proxy:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

Same error on Mac, too.

"No endpoints available" leads me to believe there are no pods available for the kubernetes-dashboard service to target.
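One way to confirm that theory is to look at the Service's Endpoints object: an empty address list means no ready pod backs the Service, which is exactly what produces the 503 above. A minimal sketch, parsing sample `kubectl get endpoints` output (the values here are illustrative, not taken from this cluster):

```shell
# Sample of: kubectl -n kube-system get endpoints kubernetes-dashboard
# (illustrative values). "<none>" in the ENDPOINTS column means the Service
# has no ready pods behind it.
sample='NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      4d22h'

# Pull the ENDPOINTS field from the data row.
endpoints=$(printf '%s\n' "$sample" | awk 'NR == 2 {print $2}')
if [ "$endpoints" = "<none>" ]; then
  echo "no ready pods back the kubernetes-dashboard service"
fi
```

On a live cluster, feed the real kubectl output through the same awk instead of the sample.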

How did you install dashboard on the kubeadm-installed cluster?

Thanks!

@jeefy as documented in the README:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

@falzm You mind pasting a couple things?

kubectl -n kube-system get svc -o wide

kubectl -n kube-system describe pod <whatever the dashboard pod is>

When you do, please make sure there's no sensitive info. :) Thanks!

@jeefy there you go:

$ kubectl -n kube-system get svc -o wide
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE     SELECTOR
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   4d22h   k8s-app=kube-dns
kubernetes-dashboard   ClusterIP   10.101.226.104   <none>

$ kubectl -n kube-system describe pod kubernetes-dashboard-77fd78f978-f8bxd
Name:               kubernetes-dashboard-77fd78f978-f8bxd
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=77fd78f978
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
  kubernetes-dashboard:
    Image:      k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-x278v (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  kubernetes-dashboard-token-x278v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-x278v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                        From               Message
  ----     ------            ----                       ----               -------
  Warning  FailedScheduling  3m26s (x42452 over 4d22h)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Well, looking a little closer at the node, it looks like it's not ready to run pods:

$ kubectl describe node mynode
Name:               mynode
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=mynode
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 09 Nov 2018 17:08:59 +0100
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...

Sorry for the false alarm...

Not a problem! 😸 Sometimes we all just need a nudge in the right direction.

@chirothespearow @aaronsu Could both of you follow those directions and see if something similar is going on?

Thanks!

As this is a cluster issue we can close.
/close

@floreks: Closing this issue.

In response to this:

As this is a cluster issue we can close.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Hi all,

I am facing similar issue. Not able open Kubernetes dashboard.

Steps I used to install kubernetes-dashboard:

  1. kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml

After that, two dashboard pods are successfully created in the kubernetes-dashboard namespace.

(screenshot of the dashboard pods)

After that, I started kubectl proxy on port 9999.

Trying to access dashboard
http://172.19.63.33:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Note: 172.19.63.33 is the IP of my setup.

But I am getting the response below:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}

I am using kubernetes version v0.15.0.
My node is in the Running state.

@hiteshsardana99 Came back from vacation and was facing the same issue right now. The problem is that with the URL http://172.19.63.33:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ you're trying to access the service kubernetes-dashboard in the kube-system namespace, but for some reason the service doesn't exist there anymore. Try http://172.19.63.33:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. This resolved it for me.

Even if you use the correct URL, you still won't be able to log in. Logging in is not allowed when accessing Dashboard over an insecure connection (HTTP).

@floreks Yeah, didn't realise that. In my case I'm using kubectl proxy and after that I access the cluster via localhost:8001/api/etc. Just don't know why the path isn't correct anymore; probably because my dashboard got updated to version 2 beta 1 in the meantime? Might have a look at this if I can find time.

It's not correct because in v2 the Dashboard has been moved out of the kube-system namespace into a namespace dedicated to Dashboard only.

PS. Both localhost and 127.0.0.1 are allowed to be accessed over HTTP as they are secure by default.
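The rule @floreks describes can be sketched as a small check. This is only an illustration of the stated policy (with a hypothetical helper name), not Dashboard's actual source:

```shell
# Prints "yes" if Dashboard permits HTTP login from the given host, per the
# rule above: only localhost and 127.0.0.1 are treated as secure origins;
# any other host must use HTTPS.
http_login_allowed() {
  case "$1" in
    localhost|127.0.0.1) echo yes ;;
    *)                   echo no  ;;
  esac
}

http_login_allowed 127.0.0.1      # yes
http_login_allowed 172.19.63.33   # no
```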

Thank you for the clarification. :)

Hi all,
First of all thank you so much for replying here with useful information.

I have checked the dashboard on two systems:

  1. One system where HTTPS user-based authentication is not present, i.e. I have not added any user or policy.json, so a user outside the kubernetes system cannot send HTTPS requests to kube-apiserver.

--> In this case, I am not able to access the dashboard after using kubectl proxy.

  2. One system where HTTPS user-based authentication is working fine. I have added policy.json, known-users, and an ABAC role in the kube-apiserver.yaml file so that outside users can send HTTPS requests.

--> In this case, I am able to access the GUI at:

http://172.19.63.10:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Is this expected behavior? It means I cannot use the dashboard with the default kube-apiserver.yaml configuration that is created just after the kubernetes cluster comes up.

After that, I tried to generate a token for the dashboard, but I am not able to log in.

https://medium.com/@kanrangsan/creating-admin-user-to-access-kubernetes-dashboard-723d6c9764e4

I took reference from this website and specified the "kubernetes-dashboard" namespace instead of kube-system, because my dashboard is running inside kubernetes-dashboard.

@hiteshsardana99 I have already explained why. Please read my comments first.

Even if you use the correct URL, you still won't be able to log in. Logging in is not allowed when accessing Dashboard over an insecure connection (HTTP).

PS. Both localhost and 127.0.0.1 are allowed to be accessed over HTTP as they are secure by default.

Having the same issues. I have a 3-node cluster: a master node and 2 worker nodes. It seems as if the dashboard service isn't exactly running. I tried shutting down the node where the dashboard was supposedly running (k8s-node1), and kubernetes apparently moved the dashboard to k8s-node2, but when I run the svc command it does not show the dashboard, so I'm not sure whether the service is actually running. Sorry, I'm new to kubernetes:

master node:
[admin@k8s-master ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-7fztc 1/1 Running 2 4d17h 172.16.0.5 k8s-master
kube-system coredns-5c98db65d4-wwb4t 1/1 Running 2 4d17h 172.16.0.4 k8s-master
kube-system etcd-k8s-master 1/1 Running 1 4d17h 10.1.99.10 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 1 4d17h 10.1.99.10 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 1 4d17h 10.1.99.10 k8s-master
kube-system kube-router-74c9p 1/1 Running 0 4d16h 10.1.99.12 k8s-node2
kube-system kube-router-j8b2n 1/1 Running 1 4d16h 10.1.99.11 k8s-node1
kube-system kube-router-jh8h4 1/1 Running 1 4d16h 10.1.99.10 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 1 4d17h 10.1.99.10 k8s-master
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-l5kjl 1/1 Terminating 2 46h 172.16.1.3 k8s-node1
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-x6nbg 1/1 Running 0 6m55s 172.16.2.3 k8s-node2
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-vktcs 1/1 Running 0 46h 172.16.2.2 k8s-node2

[admin@k8s-master ~]$ kubectl -n kube-system get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 4d17h k8s-app=kube-dns

For folks coming here with this issue on v2.0.0-beta4 and other similar versions, this is likely happening because you're using an old URL without remembering to update the namespace.

http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

is not the same as

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

v2 has moved the Dashboard to the kubernetes-dashboard namespace, as mentioned several times here.
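The namespace difference described above can be captured in a tiny helper. A minimal sketch, assuming kubectl proxy's default port 8001 (the helper name is made up for illustration):

```shell
# dashboard_proxy_url NAMESPACE
# Prints the kubectl-proxy URL for the kubernetes-dashboard Service in the
# given namespace: v1.x installs put it in kube-system, v2.x uses its own
# kubernetes-dashboard namespace.
dashboard_proxy_url() {
  echo "http://localhost:8001/api/v1/namespaces/$1/services/https:kubernetes-dashboard:/proxy/"
}

dashboard_proxy_url kubernetes-dashboard   # v2.x
dashboard_proxy_url kube-system            # v1.x
```

On a live cluster, `kubectl get svc --all-namespaces` shows which namespace the Service actually lives in, so you can pick the right argument.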

I have tried using both versions, v1.10.0 and v2.0.0-beta4, with the same results:

v.1.10.0 URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

v.2.0.0-beta4 URL:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Error: 'dial tcp 10.40.0.1:8443: connect: no route to host'
Trying to reach: 'https://10.40.0.1:8443/'

In addition to the failure, the kubernetes-dashboard service is showing the following error messages in the log:

[admin@k8s-node2 ~]$ kubectl logs -n kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-x65vq | more
2019/10/01 12:56:03 Starting overwatch
2019/10/01 12:56:03 Using namespace: kubernetes-dashboard
2019/10/01 12:56:03 Using in-cluster config to connect to apiserver
2019/10/01 12:56:03 Using secret token for csrf signing
2019/10/01 12:56:03 Initializing csrf token from kubernetes-dashboard-csrf secret
2019/10/01 12:56:04 Successful initial request to the apiserver, version: v1.15.2
2019/10/01 12:56:04 Generating JWE encryption key
2019/10/01 12:56:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2019/10/01 12:56:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2019/10/01 12:56:05 Initializing JWE encryption key from synchronized object
2019/10/01 12:56:05 Creating in-cluster Sidecar client
2019/10/01 12:56:05 Auto-generating certificates
2019/10/01 12:56:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2019/10/01 12:56:05 Successfully created certificates
2019/10/01 12:56:05 Serving securely on HTTPS port: 8443
2019/10/01 12:56:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
(the same message repeats every 30 seconds through 2019/10/01 13:02:05)

The metrics scraper is showing the following log messages:

[admin@k8s-node2 ~]$ kubectl logs -n kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-5jbtv | more
{"level":"info","msg":"Kubernetes host: https://10.96.0.1:443","time":"2019-10-01T12:56:05Z"}
{"level":"info","msg":"URL: /","time":"2019-10-01T12:56:44Z"}
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2019-10-01T12:57:05Z"}
(the "URL: /" info line repeats every 10 seconds and the node-metrics error every 60 seconds through 2019-10-01T13:03:04Z)

Hi,

Following worked for me:

Check the endpoint in kubernetes-dashboard project:
# kubectl -n kubernetes-dashboard get endpoints -o wide
NAME                        ENDPOINTS        AGE
dashboard-metrics-scraper   10.32.0.6:8000   67m
kubernetes-dashboard        10.32.0.5:8443   67m

Go to 10.32.0.5:8443

That's it!

Thank you manov555. This actually worked. Without sounding like a complete idiot (and full disclosure, I am "trying" to deploy a Kubernetes cluster for the very first time): the documentation for deploying such infrastructure, while accurate at first, seems to be missing very critical information.

I'd love to seriously consider Kubernetes, but it just seems like an OpenStack solution in the early stages, where there are few resources for dealing with networking issues.

I obviously have a lot to learn, but I do appreciate the feedback.

As far as my deployment goes, I am unable to use the "proxy" in the way the kubernetes documentation has it outlined.

Thanks again, manov555.

I have:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-7mz85 1/1 Running 0 27m
kube-system coredns-5644d7b6d9-wrdbj 1/1 Running 0 27m
kube-system etcd-polyaxon1 1/1 Running 0 26m
kube-system kube-apiserver-polyaxon1 1/1 Running 0 26m
kube-system kube-controller-manager-polyaxon1 1/1 Running 0 26m
kube-system kube-flannel-ds-amd64-h2gdz 1/1 Running 0 21m
kube-system kube-flannel-ds-amd64-pkh2l 1/1 Running 0 23m
kube-system kube-proxy-4tq5k 1/1 Running 0 27m
kube-system kube-proxy-654l8 1/1 Running 0 21m
kube-system kube-scheduler-polyaxon1 1/1 Running 0 26m
kube-system kubernetes-dashboard-7c54d59f66-nvjns 1/1 Running 0 19m

and after running kubectl proxy,

I still get an error for
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

(screenshot of the error)

kubectl logs -n kube-system kubernetes-dashboard-7c54d59f66-nvjns
yields the message below:
Metric client health check failed: the server could not find the requested resource (get services heapster)

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Thanks much. I got the URL
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
but it prompts me with the below:

(screenshot of the login prompt)

Please advise

I used the step
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
from the below
https://stackoverflow.com/questions/46664104/how-to-sign-in-kubernetes-dashboard

and provided the token
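(For reference, the two awk stages in that pipeline can be exercised offline: the first picks the secret by its name prefix, the second prints only the `token:` field. The sample output below is illustrative, not from this cluster; with v2, swap kube-system for the kubernetes-dashboard namespace and use whatever ServiceAccount prefix you created.)

```shell
# Sample of: kubectl -n kubernetes-dashboard get secret (illustrative values)
secrets='NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      5m'

# Stage 1: pick the secret whose name starts with the ServiceAccount prefix.
name=$(printf '%s\n' "$secrets" | awk '/^admin-user-token-/{print $1}')

# Sample of: kubectl describe secret "$name" (token shortened)
described='Name:   admin-user-token-abc12
token:  eyJhbGciOi...'

# Stage 2: print only the token value.
token=$(printf '%s\n' "$described" | awk '$1=="token:"{print $2}')
echo "$token"
```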

But now I get the error below:

(screenshot of the error)

Appreciate some help on this

@hiteshsardana99 Came back from vacation and was facing the same issue right now. The problem is that with the URL http://172.19.63.33:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ you're trying to access the service kubernetes-dashboard in the kube-system namespace, but for some reason the service doesn't exist there anymore. Try http://172.19.63.33:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. This resolved it for me.

Solved my problem. Thanks!

@An-DJ
This issue cost me 1.5 days, and then I found your solution; it solved the problem on my side. Thanks so much.

I'm setting up v2.0.1 and having random issues with kubectl proxy. kubectl port-forward just works like a charm on the first try.

The command is kubectl port-forward services/kubernetes-dashboard 8000:443, and then access kubernetes-dashboard at https://localhost:8000. Hope it helps.

Hi,

Following worked for me:

Check the endpoint in kubernetes-dashboard project:
# kubectl -n kubernetes-dashboard get endpoints -o wide
NAME                        ENDPOINTS        AGE
dashboard-metrics-scraper   10.32.0.6:8000   67m
kubernetes-dashboard        10.32.0.5:8443   67m

Go to 10.32.0.5:8443

That's it!

I have the same issue as xancatal.
Can you please explain your line "Go to 10.32.0.5:8443"? What exactly should we do, and how exactly do we proceed?
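The ENDPOINTS column lists each pod's in-cluster IP and port, so "Go to 10.32.0.5:8443" means opening https://10.32.0.5:8443 from somewhere that can route to pod IPs (usually a cluster node). A sketch of extracting that address from the endpoints output, fed here from sample values (on a live cluster, pipe the real kubectl output into the same awk):

```shell
# Sample of: kubectl -n kubernetes-dashboard get endpoints -o wide
sample='NAME                        ENDPOINTS        AGE
dashboard-metrics-scraper   10.32.0.6:8000   67m
kubernetes-dashboard        10.32.0.5:8443   67m'

# Pick out the dashboard endpoint (pod IP:port) by row name.
ep=$(printf '%s\n' "$sample" | awk '$1 == "kubernetes-dashboard" {print $2}')
echo "open https://$ep/ from a cluster node"
```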

Well, looking a little closer at the node, it looks like it's not ready to run pods:

$ kubectl describe node mynode
Name:               mynode
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=mynode
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 09 Nov 2018 17:08:59 +0100
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 14 Nov 2018 15:23:24 +0100   Fri, 09 Nov 2018 17:08:51 +0100   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...

Sorry for the false alarm...

What did you do to fix this?
