Dashboard: Kubernetes dashboard: dial tcp 10.96.0.1:443: i/o timeout

Created on 1 Apr 2019 · 2 comments · Source: kubernetes/dashboard

Environment

Installation method: kubeadm on Vagrant
Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

Dashboard version: 1.10.1
Operating system: Ubuntu 18.04 Vagrant box.
Node.js version ('node --version' output): /
Go version ('go version' output): /

Installed the cluster using this setup:

kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master   Ready    master   10m     v1.14.0   192.168.50.10   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.6.2
node-1       Ready    <none>   7m33s   v1.14.0   192.168.50.11   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.6.2

Cluster info

kubectl cluster-info
Kubernetes master is running at https://192.168.50.10:6443
KubeDNS is running at https://192.168.50.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Steps to reproduce


On the master I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Observed result


The pod starts, but restarts quickly and goes into CrashLoopBackOff:

NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
calico-node-2cwm7                       1/1     Running   0          38m   192.168.50.10   k8s-master   <none>           <none>
calico-node-988t5                       1/1     Running   0          35m   192.168.50.11   node-1       <none>           <none>
coredns-fb8b8dccf-dt2r4                 1/1     Running   0          38m   192.168.0.3     k8s-master   <none>           <none>
coredns-fb8b8dccf-zzfzr                 1/1     Running   0          38m   192.168.0.2     k8s-master   <none>           <none>
etcd-k8s-master                         1/1     Running   0          37m   192.168.50.10   k8s-master   <none>           <none>
kube-apiserver-k8s-master               1/1     Running   0          37m   192.168.50.10   k8s-master   <none>           <none>
kube-controller-manager-k8s-master      1/1     Running   0          37m   192.168.50.10   k8s-master   <none>           <none>
kube-proxy-f84p5                        1/1     Running   0          38m   192.168.50.10   k8s-master   <none>           <none>
kube-proxy-whzjj                        1/1     Running   0          35m   192.168.50.11   node-1       <none>           <none>
kube-scheduler-k8s-master               1/1     Running   0          37m   192.168.50.10   k8s-master   <none>           <none>
kubernetes-dashboard-5f7b999d65-tlgmj   0/1     CrashLoopBackOff   6          33m   192.168.1.2     node-1       <none>           <none>

Logs of the pod:

kubectl logs kubernetes-dashboard-5f7b999d65-tlgmj -n kube-system
2019/04/01 16:12:25 Starting overwatch
2019/04/01 16:12:25 Using in-cluster config to connect to apiserver
2019/04/01 16:12:25 Using service account token for csrf signing
2019/04/01 16:12:55 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
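
For context, 10.96.0.1 is the ClusterIP of the kubernetes service in the default namespace (that is what the in-cluster config points at); it can be confirmed with:

kubectl get svc kubernetes -n default
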
Expected result


A dashboard pod running on one of the worker nodes.

Comments

I checked whether I could curl kubernetes.default from within a pod (I didn't use busybox because of its DNS issues; telnet also worked from every node):

$ kubectl run -i --tty ubuntu --image=ubuntu
# apt-get update -y
# apt-get install dnsutils -y
nslookup kubernetes.default
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
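
DNS resolution is only half the story, though; the same pod can also exercise the TCP path the dashboard needs (curl has to be installed first, just like dnsutils):

# apt-get install curl -y
# curl -m 10 -k https://kubernetes.default.svc:443/version

A timeout here matches the dashboard's error; the version JSON means the pod network can actually reach the apiserver.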

Kubelet logs

Apr 01 15:32:50 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.671951   18041 server.go:417] Version: v1.14.0
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.672373   18041 plugins.go:103] No cloud provider specified.
Apr 01 15:32:51 k8s-master kubelet[18041]: W0401 15:32:51.672411   18041 server.go:556] standalone mode, no API client
Apr 01 15:32:51 k8s-master kubelet[18041]: W0401 15:32:51.903905   18041 server.go:474] No api server defined - no events will be sent to API server.
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.904123   18041 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.905462   18041 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.905685   18041 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.910746   18041 container_manager_linux.go:286] Creating device plugin manager: true
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.911688   18041 state_mem.go:36] [cpumanager] initializing new in-memory state store
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.944104   18041 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.944431   18041 client.go:104] Start docker client with request timeout=2m0s
Apr 01 15:32:51 k8s-master kubelet[18041]: W0401 15:32:51.965261   18041 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.965970   18041 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Apr 01 15:32:51 k8s-master kubelet[18041]: W0401 15:32:51.966259   18041 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 01 15:32:51 k8s-master kubelet[18041]: I0401 15:32:51.978634   18041 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.009475   18041 docker_service.go:258] Docker Info: &{ID:BCTY:UWZU:ACOL:PFRO:WNWW:Y2OT:YBGL:5VO2:YVTQ:IPW5:5J7D:TIIX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersSt
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.010638   18041 docker_service.go:271] Setting cgroupDriver to cgroupfs
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.055559   18041 remote_runtime.go:62] parsed scheme: ""
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.055637   18041 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.055684   18041 remote_image.go:50] parsed scheme: ""
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.055695   18041 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056249   18041 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0  <nil>}]
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056294   18041 clientconn.go:796] ClientConn switching balancer to "pick_first"
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056373   18041 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0001e5930, CONNECTING
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056863   18041 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0001e5930, READY
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056947   18041 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0  <nil>}]
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.056968   18041 clientconn.go:796] ClientConn switching balancer to "pick_first"
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.057051   18041 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0001e5eb0, CONNECTING
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.057504   18041 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0001e5eb0, READY
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.074179   18041 kuberuntime_manager.go:210] Container runtime docker initialized, version: 18.06.2-ce, apiVersion: 1.38.0
Apr 01 15:32:52 k8s-master kubelet[18041]: W0401 15:32:52.075024   18041 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 01 15:32:52 k8s-master kubelet[18041]: W0401 15:32:52.076976   18041 csi_plugin.go:218] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.080289   18041 server.go:1037] Started kubelet
Apr 01 15:32:52 k8s-master kubelet[18041]: W0401 15:32:52.080493   18041 kubelet.go:1387] No api server defined - no node status update will be sent.
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.081654   18041 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.081700   18041 status_manager.go:148] Kubernetes client is nil, not starting status manager.
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.081728   18041 kubelet.go:1806] Starting kubelet main sync loop.
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.081758   18041 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Apr 01 15:32:52 k8s-master kubelet[18041]: E0401 15:32:52.082070   18041 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cac
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.082268   18041 volume_manager.go:248] Starting Kubelet Volume Manager
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.082709   18041 server.go:141] Starting to listen on 0.0.0.0:10250
Apr 01 15:32:52 k8s-master kubelet[18041]: I0401 15:32:52.083978   18041 server.go:343] Adding debug handlers to kubelet server.
Apr 01 15:32:52 k8s-master kubelet[18041]: E0401 15:32:52.088891   18041 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Apr 01 15:32:52 k8s-master kubelet[18041]: /workspace/anago-v1.14.0-rc.1.5+641856db183520/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76

Description of the pod:

$ kubectl describe pod kubernetes-dashboard-5f7b999d65-jjc6m -n kube-system
Name:               kubernetes-dashboard-5f7b999d65-jjc6m
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               node-1/192.168.50.11
Start Time:         Mon, 01 Apr 2019 18:15:04 +0000
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=5f7b999d65
Annotations:        cni.projectcalico.org/podIP: 192.168.1.2/32
Status:             Running
IP:                 192.168.1.2
Controlled By:      ReplicaSet/kubernetes-dashboard-5f7b999d65
Containers:
  kubernetes-dashboard:
    Container ID:  docker://3f257a5febae8b1a0a05e274628a5aba73be4f83f20c1c841ae5bb49933f4200
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 01 Apr 2019 18:16:35 +0000
      Finished:     Mon, 01 Apr 2019 18:17:05 +0000
    Ready:          False
    Restart Count:  2
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-m74l2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-m74l2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-m74l2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  11m                  default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-5f7b999d65-grk4v to node-1
  Normal   Pulling    11m                  kubelet, node-1    Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulled     11m                  kubelet, node-1    Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulled     7m34s (x4 over 10m)  kubelet, node-1    Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal   Created    7m33s (x5 over 11m)  kubelet, node-1    Created container kubernetes-dashboard
  Normal   Started    7m33s (x5 over 11m)  kubelet, node-1    Started container kubernetes-dashboard
  Warning  BackOff    56s (x35 over 10m)   kubelet, node-1    Back-off restarting failed container
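
If needed, the logs of an earlier attempt of the crashing container can be pulled with the --previous flag:

kubectl logs kubernetes-dashboard-5f7b999d65-jjc6m -n kube-system --previous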

This works from both the node and the master:

vagrant@node-1:~$ curl -kv https://10.96.0.1:443/version
*   Trying 10.96.0.1...
* TCP_NODELAY set
* Connected to 10.96.0.1 (10.96.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=kube-apiserver
*  start date: Apr  1 18:43:01 2019 GMT
*  expire date: Mar 31 18:43:01 2020 GMT
*  issuer: CN=kubernetes
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x557eb9bdf900)
> GET /version HTTP/2
> Host: 10.96.0.1
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< content-type: application/json
< content-length: 263
< date: Mon, 01 Apr 2019 19:09:30 GMT
<
{
  "major": "1",
  "minor": "14",
  "gitVersion": "v1.14.0",
  "gitCommit": "641856db18352033a0d96dbc99153fa3b27298e5",
  "gitTreeState": "clean",
  "buildDate": "2019-03-25T15:45:25Z",
  "goVersion": "go1.12.1",
  "compiler": "gc",
  "platform": "linux/amd64"
* Connection #0 to host 10.96.0.1 left intact
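
The service VIP 10.96.0.1 is only a kube-proxy DNAT to the apiserver's real endpoint, so both the endpoint and the rule that maps to it can be inspected on either box (a sketch, assuming the default iptables proxy mode):

kubectl get endpoints kubernetes -n default
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1

The endpoint should be the master's 192.168.50.10:6443, i.e. an address that sits inside the 192.168.0.0/16 range the pod network uses.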

Metrics:

vagrant@k8s-master:~$ kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'
{
  "name": "k8s-master",
  "cap": {
    "cpu": "2",
    "ephemeral-storage": "64800356Ki",
    "hugepages-2Mi": "0",
    "memory": "1008940Ki",
    "pods": "110"
  }
}
{
  "name": "node-1",
  "cap": {
    "cpu": "2",
    "ephemeral-storage": "64800356Ki",
    "hugepages-2Mi": "0",
    "memory": "1008940Ki",
    "pods": "110"
  }
}
vagrant@k8s-master:~$ kubectl get nodes -o yaml | egrep cpu
      cpu: "2"
      cpu: "2"
      cpu: "2"
      cpu: "2"
vagrant@k8s-master:~$ kubectl get nodes -o yaml | egrep memory
      memory: 906540Ki
      memory: 1008940Ki
      message: kubelet has sufficient memory available
      memory: 906540Ki
      memory: 1008940Ki
      message: kubelet has sufficient memory available

The node seems healthy:

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 01 Apr 2019 18:45:49 +0000   Mon, 01 Apr 2019 18:45:49 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 01 Apr 2019 19:26:53 +0000   Mon, 01 Apr 2019 18:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 01 Apr 2019 19:26:53 +0000   Mon, 01 Apr 2019 18:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 01 Apr 2019 19:26:53 +0000   Mon, 01 Apr 2019 18:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 01 Apr 2019 19:26:53 +0000   Mon, 01 Apr 2019 18:45:50 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled

All 2 comments

This is a configuration issue. You have to check connectivity between the nodes and the master, not only DNS resolution. Dashboard runs on node-1 and has to connect to the apiserver running on the master node.
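
A quick way to test exactly that path is to pin a throwaway pod to node-1 and hit the service VIP directly; a rough sketch (curlimages/curl is just an example image that ships curl, and --overrides needs the apiVersion field):

kubectl run netcheck --image=curlimages/curl --restart=Never --rm -i \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-1"}}' \
  --command -- curl -m 10 -k https://10.96.0.1:443/version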

@floreks it seems to be an issue with the fact that my pod network was 192.168.0.0/16 and my node IPs (192.168.50.10, 192.168.50.11, ...) fell inside that range. When I changed my pod network CIDR to 172.16.0.0/16, the dashboard deployment went fine.
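
For anyone reproducing the fix on a kubeadm + Calico setup, the rough shape of that change is something like the following (a sketch; the stock calico.yaml of that era defaults CALICO_IPV4POOL_CIDR to 192.168.0.0/16):

sudo kubeadm init --pod-network-cidr=172.16.0.0/16
# point Calico's pool at the same range (CALICO_IPV4POOL_CIDR in calico.yaml)
sed -i 's#192.168.0.0/16#172.16.0.0/16#' calico.yaml
kubectl apply -f calico.yaml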
