Dashboard pod stops running
root@kube-master:~# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-ndk6g 1/1 Running 0 1h
calico-node-6z7eo 2/2 Running 0 1h
calico-node-dx6bw 2/2 Running 0 1h
calico-policy-controller-482x3 1/1 Running 0 1h
etcd-kube-master 1/1 Running 0 2h
kube-apiserver-kube-master 1/1 Running 0 2h
kube-controller-manager-kube-master 1/1 Running 0 2h
kube-discovery-982812725-3ufq7 1/1 Running 0 2h
kube-dns-2247936740-661cz 3/3 Running 0 2h
kube-proxy-amd64-ahpyz 1/1 Running 0 2h
kube-proxy-amd64-f0dai 1/1 Running 0 1h
kube-scheduler-kube-master 1/1 Running 0 2h
kubernetes-dashboard-1655269645-31e5x 1/1 Running 1 42s
info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
root@kube-master:~# kubectl logs kubernetes-dashboard-1655269645-31e5x --namespace=kube-system
Starting HTTP server on port 9090
Creating API server client for https://100.64.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://100.64.0.1:443/version: dial tcp 100.64.0.1:443: i/o timeout
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
Kubernetes version: 1.4.0
Operating system: Ubuntu 16.04
Node.js version: installed using kubeadm
Go version: installed using kubeadm
kubeadm init on the master node
kubeadm join on the worker
Starting HTTP server on port 9090
Creating API server client for https://100.64.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://100.64.0.1:443/version: dial tcp 100.64.0.1:443: i/o timeout
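A timeout reaching https://100.64.0.1:443 (the cluster-internal kubernetes service IP in this setup) usually points at broken pod-to-apiserver networking on the node rather than a dashboard bug. A quick node-side sanity check might look like the following sketch (the address 100.64.0.1 is taken from this report and will differ per cluster):

```shell
#!/bin/sh
# Node-side checks for the symptom above (a sketch, not a full diagnosis).

# 1. Is IPv4 forwarding enabled on this node? Pods on this node need the
#    host to forward their packets; "0" here means forwarding is disabled.
cat /proc/sys/net/ipv4/ip_forward

# 2. Can the node reach the apiserver service IP at all? Uncomment to run
#    against a real cluster; this times out when routing is broken.
#    (100.64.0.1 is the service IP from this report.)
# timeout 5 nc -z 100.64.0.1 443 && echo "apiserver service reachable"
```

If the first check prints 0, pod traffic destined for other nodes is silently dropped, which matches the i/o timeout in the log.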
Expected: to be able to access the dashboard.
This is a bare-metal Kubernetes install.
Thanks for your error report. It looks like your API server is not configured correctly or otherwise can't be contacted from within the cluster. Are you sure you have networking set up properly? This is most likely not an issue with the dashboard and rather an issue with cluster creation.
Hi @billcloud-me , did @IanLewis 's guidance lead to a resolution?
I had no issues with kubeadm and kube-weave network.
Can you check the trouble-shooting guide? https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
I forgot to close this issue. My problem was not having ip-forwarding enabled. Thank you @cheld and @IanLewis!
I had a similar issue, where the dashboard pod, or any pod which was running on nodes other than the master would not have any network connectivity.
Found out that IP forwarding was not enabled on the nodes, so packets sent by the pods for other nodes would not leave their hosting node.
To fix this I un-commented this line in "/etc/sysctl.conf":
#net.ipv4.ip_forward=1
Then reload the configuration:
$ sudo sysctl -p
Now IP forwarding is enabled and persists across reboots (iptables-save just prints the current iptables rules; note the FORWARD chain policy is ACCEPT):
$ sudo iptables-save
....
*filter
....
:FORWARD ACCEPT [4:1088]
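The fix described above can be sketched as a small script. This version operates on a scratch copy of the config file so it is safe to dry-run; on a real node you would point it at /etc/sysctl.conf, run as root, and finish with sudo sysctl -p:

```shell
#!/bin/sh
# Sketch of the fix: uncomment net.ipv4.ip_forward=1 and reload.
# CONF is a scratch copy here for safe testing; use /etc/sysctl.conf
# (as root) on an actual node.
CONF=./sysctl.conf.demo
printf '#net.ipv4.ip_forward=1\n' > "$CONF"   # stand-in for the commented line

# Uncomment the line, tolerating optional whitespace around '#' and '='.
sed -i 's/^#[[:space:]]*net\.ipv4\.ip_forward[[:space:]]*=[[:space:]]*1/net.ipv4.ip_forward=1/' "$CONF"
cat "$CONF"

# On a real node, apply without a reboot (requires root):
#   sysctl -p            # reload /etc/sysctl.conf
#   sysctl -w net.ipv4.ip_forward=1   # or set it directly
```

Setting it in /etc/sysctl.conf rather than only via sysctl -w is what makes the change survive reboots.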