Dashboard: Error: 'dial tcp 192.168.1.2:8443: i/o timeout'

Created on 7 May 2019 · 9 comments · Source: kubernetes/dashboard

Environment
Installation method: kubeadm with Calico CNI (--pod-network-cidr=192.168.0.0/16)
Kubernetes version: 1.14
Dashboard version:
Operating system: Ubuntu 18.04 (EC2 instances)
Node.js version ('node --version' output):
Go version ('go version' output):
Steps to reproduce
$kubeadm init --pod-network-cidr=192.168.0.0/16

$kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
$kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
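Before going further, it may help to confirm the CNI itself came up cleanly; a minimal check (the k8s-app=calico-node label comes from the Calico manifest applied above):

$kubectl get pods -n kube-system -l k8s-app=calico-node -o wide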
Observed result
$kubectl cluster-info

Kubernetes master is running at https://172.30.0.202:6443
KubeDNS is running at https://172.30.0.202:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$kubectl get nodes

NAME              STATUS   ROLES    AGE    VERSION
ip-172-30-0-202   Ready    master   121m   v1.14.1
ip-172-30-0-23    Ready    <none>   113m   v1.14.1
$kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE    IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   calico-node-mscl8                         2/2     Running   0          107m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   calico-node-qhj8d                         2/2     Running   0          100m   172.30.0.23    ip-172-30-0-23    <none>           <none>
kube-system   coredns-fb8b8dccf-h7mvp                   1/1     Running   0          108m   192.168.0.3    ip-172-30-0-202   <none>           <none>
kube-system   coredns-fb8b8dccf-qsgxr                   1/1     Running   0          108m   192.168.0.2    ip-172-30-0-202   <none>           <none>
kube-system   etcd-ip-172-30-0-202                      1/1     Running   0          107m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   kube-apiserver-ip-172-30-0-202            1/1     Running   0          107m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   kube-controller-manager-ip-172-30-0-202   1/1     Running   0          107m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   kube-proxy-6pjbr                          1/1     Running   0          108m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   kube-proxy-jcwj2                          1/1     Running   0          100m   172.30.0.23    ip-172-30-0-23    <none>           <none>
kube-system   kube-scheduler-ip-172-30-0-202            1/1     Running   0          107m   172.30.0.202   ip-172-30-0-202   <none>           <none>
kube-system   kubernetes-dashboard-5f7b999d65-vnr8x     1/1     Running   0          95m    192.168.1.2    ip-172-30-0-23    <none>           <none>






$kubectl get svc --all-namespaces -o wide

NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
default       kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP                  113m   <none>
kube-system   calico-typha           ClusterIP   10.105.77.96    <none>        5473/TCP                 113m   k8s-app=calico-typha
kube-system   kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   113m   k8s-app=kube-dns
kube-system   kubernetes-dashboard   ClusterIP   10.108.172.49   <none>        443/TCP                  101m   k8s-app=kubernetes-dashboard






$kubectl logs kubernetes-dashboard-5f7b999d65-vnr8x -n kube-system

2019/05/07 07:54:40 Starting overwatch
2019/05/07 07:54:40 Using in-cluster config to connect to apiserver
2019/05/07 07:54:40 Using service account token for csrf signing
2019/05/07 07:54:40 Successful initial request to the apiserver, version: v1.14.1
2019/05/07 07:54:40 Generating JWE encryption key
2019/05/07 07:54:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/05/07 07:54:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/05/07 07:54:40 Storing encryption key in a secret
2019/05/07 07:54:40 Creating in-cluster Heapster client
2019/05/07 07:54:40 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/07 07:54:40 Auto-generating certificates
2019/05/07 07:54:40 Successfully created certificates
2019/05/07 07:54:40 Serving securely on HTTPS port: 8443
2019/05/07 07:55:10 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/07 07:55:40 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/07 07:56:10 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.






From the master node
$kubectl proxy
Starting to serve on 127.0.0.1:8001

From another terminal on the master node
$curl 127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

Error: 'dial tcp 192.168.1.2:8443: i/o timeout'
Trying to reach: 'http://192.168.1.2:8443/'
Expected result

The expected result is the Dashboard login page HTML, but instead I get the error:

Error: 'dial tcp 192.168.1.2:8443: i/o timeout'
Trying to reach: 'http://192.168.1.2:8443/'

Comments


As you can see, all pods and services are running without any errors or exceptions, yet the Dashboard remains unreachable.

I am on EC2 t2.medium instances (one for the master and another for the worker node).
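Since the timeout is against a pod IP on the other node, one thing worth checking is whether any cross-node pod traffic works at all. A minimal sketch, assuming the curlimages/curl image (any image with curl will do; the net-test name is a placeholder):

$kubectl run net-test --image=curlimages/curl --restart=Never --rm -it -- curl -k -m 5 https://192.168.1.2:8443/

(The test pod may be scheduled on either node, so run it more than once, or pin it to the master, to be sure the cross-node path is exercised.)

Also worth noting for EC2 specifically: Calico's default IPIP encapsulation rides on IP protocol 4, which EC2 security groups drop unless a rule explicitly allows it, and that typically shows up as exactly this kind of i/o timeout between pods on different instances.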

Labels: kind/bug


All 9 comments

Another observation that shows other functions are working:

$curl 127.0.0.1:8001/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.30.0.202:6443"
    }
  ]
}
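That check only exercises the client-to-apiserver leg, though. The dashboard proxy URL additionally needs the apiserver to reach the pod IP over the pod network, and that is the leg that is timing out. Since the apiserver runs with host networking on the master, the same leg can be reproduced from the master's shell directly (a minimal sketch; -k accepts the Dashboard's self-signed certificate and -m 5 bounds the wait):

$curl -k -m 5 https://192.168.1.2:8443/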

Can you try the following command:
kubectl --namespace=kube-system port-forward <kubernetes-dashboard-podname> 8443

I got this resolution when looking at a similar GitHub issue here:
https://github.com/kubernetes/dashboard/issues/3038
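For example, with the pod name from the listing above, mapping local port 8443 to the pod's 8443 and then hitting it from a second terminal (a sketch; -k is needed because the Dashboard auto-generates its certificate):

$kubectl --namespace=kube-system port-forward kubernetes-dashboard-5f7b999d65-vnr8x 8443:8443
$curl -k https://127.0.0.1:8443/

This makes a useful differential diagnostic: port-forward tunnels through the apiserver and the kubelet rather than the pod network, so it can succeed even when pod-to-pod routing (the CNI layer) is broken.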

@mantralabs follow the steps from our README and use the correct URL to access the Dashboard.

/close

@floreks: Closing this issue.

In response to this:

@mantralabs follow the steps from our README and use the correct URL to access the Dashboard.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Yes, the dashboard URL that I was using was indeed incorrect. The correct URL should be
curl 127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Switching to Weave Net as the CNI, together with the URL fix above, made the dashboard work.
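For reference, the apiserver's service-proxy URLs follow the pattern

/api/v1/namespaces/<namespace>/services/<scheme>:<service-name>:<port-name>/proxy/

The https: prefix is what makes the proxy speak TLS to the backend. The earlier URL omitted it, which is why the error message showed the proxy dialing http://192.168.1.2:8443/ against a port that only serves HTTPS.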

Hey, I'm new to Kubernetes. I started the Dashboard and the proxy from my master node, and I'm getting the same error message when I open the URL on the master node:

Error: 'dial tcp 172.16.1.2:8443: i/o timeout'
Trying to reach: 'https://172.16.1.2:8443/'

I noticed that the Dashboard is running on one of the two worker nodes (node1):

kube-system kube-router-jh8h4 1/1 Running 1 3d23h 10.1.99.10 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 1 4d 10.1.99.10 k8s-master
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-l5kjl 1/1 Running 0 29h 172.16.1.2 k8s-node1
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-vktcs 1/1 Running 0 29h 172.16.2.2 k8s-node2

Should I try to access the dashboard from a web browser on node1 itself?

I had a similar issue that was caused by the firewall blocking traffic.
Check your firewall rules: I had to allow BGP (TCP port 179) in the iptables config /etc/sysconfig/iptables:

-A INPUT -p tcp --dport 179 -m comment --comment "Kubernetes" -j ACCEPT

Also, on CentOS 7 you have to use iptables-legacy, or look up how to make Calico work with iptables 1.8 (it involves setting a variable).
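A quick way to verify the rule took effect (hypothetical commands; adjust for your distro):

$sudo iptables -L INPUT -n | grep 179
$sudo iptables-save | grep 'dport 179'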

I ran this:
nohup kubectl proxy &

Then this:
curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Getting this error:
Error trying to reach service: 'Proxy Error ( Connection refused )'

Even though when I run this:
curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/

I get this output, which means I can reach port 8001 just fine:

{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/",
    "resourceVersion": "2229336"
  },
  "items": [
    {
      "metadata": {
        "name": "dashboard-metrics-scraper",
        "namespace": "kubernetes-dashboard",
        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
        "uid": "716c5d85-d3d5-4f49-bcae-b77b848fc129",
        "resourceVersion": "2216232",
        "creationTimestamp": "2020-03-17T06:49:58Z",
        "labels": {
          "k8s-app": "dashboard-metrics-scraper"
        }
      },
      "spec": {
        "ports": [
          {
            "protocol": "TCP",
            "port": 8000,
            "targetPort": 8000
          }
        ],
        "selector": {
          "k8s-app": "dashboard-metrics-scraper"
        },
        "clusterIP": "10.107.93.77",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {

        }
      }
    },
    {
      "metadata": {
        "name": "kubernetes-dashboard",
        "namespace": "kubernetes-dashboard",
        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
        "uid": "76832e87-fa3a-44de-9d19-06b15efc8073",
        "resourceVersion": "2216216",
        "creationTimestamp": "2020-03-17T06:49:58Z",
        "labels": {
          "k8s-app": "kubernetes-dashboard"
        }
      },
      "spec": {
        "ports": [
          {
            "protocol": "TCP",
            "port": 443,
            "targetPort": 8443
          }
        ],
        "selector": {
          "k8s-app": "kubernetes-dashboard"
        },
        "clusterIP": "10.111.110.202",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {

        }
      }
    }
  ]
}

_Someone please advise what I might have overlooked._

I have the same issue as the dev above. Any news?

