Website: kubelet is not starting apiserver

Created on 22 Jun 2017 · 5 Comments · Source: kubernetes/website

This is a...

  • [ ] Feature Request
  • [x] Bug Report

Problem:

# cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1409.2.0
VERSION_ID=1409.2.0
BUILD_ID=2017-06-19-2321
PRETTY_NAME="Container Linux by CoreOS 1409.2.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"

kubelet is failing to run the services declared in /etc/kubernetes/manifests/, most notably the apiserver:

Jun 22 13:09:42 k8s-1 systemd[1]: Started kubelet.service.
Jun 22 13:09:42 k8s-1 kubelet-wrapper[1445]: + exec /usr/bin/rkt run --uuid-file-save=/var/run/kubelet-pod.uuid --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log --volume dns,kind=host,source=/etc/resolv.conf --mount volume=dns,target=/etc/resolv.conf --trust-keys-from-https --volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=false --volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true --volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume var-lib-docker,kind=host,source=/var/lib/docker,readOnly=false --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false,recursive=true --volume var-log,kind=host,source=/var/log,readOnly=false --volume os-release,kind=host,source=/usr/lib/os-release,readOnly=true --volume run,kind=host,source=/run,readOnly=false --mount volume=etc-kubernetes,target=/etc/kubernetes --mount volume=etc-ssl-certs,target=/etc/ssl/certs --mount volume=usr-share-certs,target=/usr/share/ca-certificates --mount volume=var-lib-docker,target=/var/lib/docker --mount volume=var-lib-kubelet,target=/var/lib/kubelet --mount volume=var-log,target=/var/log --mount volume=os-release,target=/etc/os-release --mount volume=run,target=/run --stage1-from-dir=stage1-fly.aci quay.io/coreos/hyperkube:v1.6.6_coreos.0 --exec=/kubelet -- --require-kubeconfig=true --kubeconfig=/var/lib/kubelet/kubeconfig.yml --register-node=true --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin= --container-runtime=docker --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=10.44.1.191 --cluster_dns=10.30.0.2 --cluster_domain=cluster.local
Jun 22 13:11:08 k8s-1 kubelet-wrapper[1445]: I0622 13:11:08.658746    1445 feature_gate.go:144] feature gates: map[]
Jun 22 13:11:08 k8s-1 kubelet-wrapper[1445]: I0622 13:11:08.774562    1445 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Jun 22 13:11:08 k8s-1 kubelet-wrapper[1445]: I0622 13:11:08.775194    1445 docker.go:384] Start docker client with request timeout=2m0s
Jun 22 13:11:08 k8s-1 kubelet-wrapper[1445]: W0622 13:11:08.870732    1445 cni.go:157] Unable to update cni config: No networks found in /etc/kubernetes/cni/net.d
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.152220    1445 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: W0622 13:11:09.205309    1445 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.211784    1445 fs.go:117] Filesystem partitions: map[/dev/sda9:{mountpoint:/var/lib/docker major:8 minor:9 fsType:ext4 blockSize:0} /dev/mapper/usr:{mountpoint:/usr/share/ca-certificates major:254 minor:0 fsType:ext4 blockSize:0}]
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.232133    1445 manager.go:198] Machine: {NumCores:1 CpuFrequency:2333333 MemoryCapacity:1045037056 MachineID:d51d78572977e1688e52c5fcf9b253e7 SystemUUID:564DC665-946D-D8F5-15BC-A7F5705952A0 BootID:5659d4a0-e903-4b3b-a594-20f4af707d6b Filesystems:[{Device:/dev/sda9 Capacity:5843333120 Type:vfs Inodes:1498496 HasInodes:true} {Device:/dev/mapper/usr Capacity:1031946240 Type:vfs Inodes:260096 HasInodes:true} {Device:overlay Capacity:5843333120 Type:vfs Inodes:1498496 HasInodes:true}] DiskMap:map[254:0:{Name:dm-0 Major:254 Minor:0 Size:1065345024 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:8589934592 Scheduler:cfq}] NetworkDevices:[{Name:ens32 MacAddress:00:0c:29:59:52:a0 Speed:1000 Mtu:1500} {Name:flannel.1 MacAddress:da:52:b8:eb:ff:ac Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:1045037056 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.241285    1445 manager.go:204] Version: {KernelVersion:4.11.6-coreos ContainerOsVersion:Container Linux by CoreOS 1409.2.0 (Ladybug) DockerVersion:1.12.6 CadvisorVersion: CadvisorRevision:}
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.243760    1445 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.282927    1445 container_manager_linux.go:245] container manager verified user specified cgroup-root exists: /
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.283589    1445 container_manager_linux.go:250] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.296995    1445 kubelet.go:255] Adding manifest file: /etc/kubernetes/manifests
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.297195    1445 kubelet.go:265] Watching apiserver
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: E0622 13:11:09.312090    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: E0622 13:11:09.384592    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: E0622 13:11:09.385256    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: W0622 13:11:09.407834    1445 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.408371    1445 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: W0622 13:11:09.558792    1445 cni.go:157] Unable to update cni config: No networks found in /etc/kubernetes/cni/net.d
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.816168    1445 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.822178    1445 docker_service.go:204] Setting cgroupDriver to cgroupfs
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: E0622 13:11:09.879230    1445 container_manager_linux.go:638] error opening pid file /run/docker/libcontainerd/docker-containerd.pid: open /run/docker/libcontainerd/docker-containerd.pid: no such file or directory
Jun 22 13:11:09 k8s-1 kubelet-wrapper[1445]: I0622 13:11:09.905095    1445 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.108686    1445 kuberuntime_manager.go:171] Container runtime docker initialized, version: 1.12.6, apiVersion: 1.24.0
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.140559    1445 server.go:869] Started kubelet v1.6.6+coreos.0-dirty
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.169935    1445 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.179008    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.180376    1445 server.go:127] Starting to listen on 0.0.0.0:10250
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.193754    1445 server.go:294] Adding debug handlers to kubelet server.
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.252678    1445 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.253964    1445 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.254371    1445 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.264986    1445 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.44.1.191", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/hostname":"10.44.1.191", "beta.kubernetes.io/os":"linux", "beta.kubernetes.io/arch":"amd64"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"10.44.1.191", ProviderID:"", Unschedulable:false, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1045037056, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Allocatable:v1.ResourceList{"pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:940179456, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63633733870, nsec:253937089, loc:(*time.Location)(0x6f2a4e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: :63633733870, nsec:253937089, loc:(*time.Location)(0x6f2a4e0)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254699536, loc:(*time.Location)(0x6f2a4e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254699536, loc:(*time.Location)(0x6f2a4e0)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254715049, loc:(*time.Location)(0x6f2a4e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254715049, loc:(*time.Location)(0x6f2a4e0)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254746440, loc:(*time.Location)(0x6f2a4e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63633733870, nsec:254746440, loc:(*time.Location)(0x6f2a4e0)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"10.44.1.191"}, v1.NodeAddress{Type:"InternalIP", Address:"10.44.1.191"}, v1.NodeAddress{Type:"Hostname", Address:"10.44.1.191"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeSystemInfo{MachineID:"d51d78572977e1688e52c5fcf9b253e7", SystemUUID:"564DC665-946D-D8F5-15BC-A7F5705952A0", BootID:"5659d4a0-e903-4b3b-a594-20f4af707d6b", KernelVersion:"4.11.6-coreos", OSImage:"Container Linux by CoreOS 1409.2.0 (Ladybug)", ContainerRuntimeVersion:"docker://1.12.6", KubeletVersion:"v1.6.6+coreos.0-dirty", KubeProxyVersion:"v1.6.6+coreos.0-dirty", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]v1
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: .ContainerImage{v1.ContainerImage{Names:[]string{"gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516", "gcr.io/google_containers/pause-amd64:3.0"}, SizeBytes:746888}}, VolumesInUse:[]v1.UniqueVolumeName(nil), VolumesAttached:[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'NodeAllocatableEnforced' 'Updated Node Allocatable limit across pods'
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.273244    1445 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.273359    1445 status_manager.go:140] Starting to sync pod status with apiserver
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.273395    1445 kubelet.go:1741] Starting kubelet main sync loop.
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.273423    1445 kubelet.go:1752] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.274984    1445 volume_manager.go:249] Starting Kubelet Volume Manager
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.335582    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.427126    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.443391    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.474434    1445 factory.go:309] Registering Docker factory
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: W0622 13:11:10.475059    1445 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.475564    1445 factory.go:54] Registering systemd factory
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.476678    1445 factory.go:86] Registering Raw factory
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.477942    1445 manager.go:1106] Started watching for new ooms in manager
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.577504    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.578258    1445 oomparser.go:185] oomparser using systemd
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.596883    1445 manager.go:288] Starting recovery of all containers
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.608600    1445 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.609253    1445 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: I0622 13:11:10.609838    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:10 k8s-1 kubelet-wrapper[1445]: E0622 13:11:10.989970    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: I0622 13:11:11.190994    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: I0622 13:11:11.193889    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: E0622 13:11:11.344551    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: E0622 13:11:11.446474    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: E0622 13:11:11.512257    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: E0622 13:11:11.579091    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: I0622 13:11:11.772637    1445 manager.go:293] Recovery completed
Jun 22 13:11:11 k8s-1 kubelet-wrapper[1445]: I0622 13:11:11.903355    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: I0622 13:11:12.095479    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.165545    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.361297    1445 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '10.44.1.191' not found
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.448155    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.513797    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.580470    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: I0622 13:11:12.966291    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: I0622 13:11:12.969881    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:12 k8s-1 kubelet-wrapper[1445]: E0622 13:11:12.971486    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:13 k8s-1 kubelet-wrapper[1445]: E0622 13:11:13.449953    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:13 k8s-1 kubelet-wrapper[1445]: E0622 13:11:13.515369    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:13 k8s-1 kubelet-wrapper[1445]: E0622 13:11:13.581860    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:13 k8s-1 kubelet-wrapper[1445]: E0622 13:11:13.637008    1445 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: E0622 13:11:14.451922    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: E0622 13:11:14.517067    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: I0622 13:11:14.573624    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: I0622 13:11:14.577375    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: E0622 13:11:14.578733    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:14 k8s-1 kubelet-wrapper[1445]: E0622 13:11:14.583125    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.274043    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.277708    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.279212    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.283995    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.285177    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.329762    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.331302    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: W0622 13:11:15.332223    1445 status_manager.go:465] Failed to update status for pod "_()": Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods/kube-apiserver-10.44.1.191: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.332685    1445 kubelet.go:1535] Failed creating a mirror pod for "kube-apiserver-10.44.1.191_kube-system(2fb1fd773540a1f5e6563638a41f974f)": Post http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.344760    1445 kubelet.go:1535] Failed creating a mirror pod for "kube-controller-manager-10.44.1.191_kube-system(2b6bdcbe503076f461f4a48a6b92308c)": Post http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: W0622 13:11:15.346878    1445 status_manager.go:465] Failed to update status for pod "_()": Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods/kube-controller-manager-10.44.1.191: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.348753    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.348904    1445 kubelet.go:1535] Failed creating a mirror pod for "kube-proxy-10.44.1.191_kube-system(3c34b284dc6e937ee99dffc54d193070)": Post http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: W0622 13:11:15.352675    1445 status_manager.go:465] Failed to update status for pod "_()": Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods/kube-proxy-10.44.1.191: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: W0622 13:11:15.355075    1445 status_manager.go:465] Failed to update status for pod "_()": Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods/kube-scheduler-10.44.1.191: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.355651    1445 kubelet.go:1535] Failed creating a mirror pod for "kube-scheduler-10.44.1.191_kube-system(dee2e28bf1e64115c48f6eb837ddf312)": Post http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.371227    1445 kuberuntime_manager.go:458] Container {Name:kube-scheduler Image:quay.io/coreos/hyperkube:v1.6.6_coreos.0 Command:[/hyperkube scheduler --master=http://127.0.0.1:8080 --leader-elect=true] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10251,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.377481    1445 reconciler.go:242] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2fb1fd773540a1f5e6563638a41f974f-ssl-certs-kubernetes" (spec.Name: "ssl-certs-kubernetes") pod "2fb1fd773540a1f5e6563638a41f974f" (UID: "2fb1fd773540a1f5e6563638a41f974f")
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.377550    1445 reconciler.go:242] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2fb1fd773540a1f5e6563638a41f974f-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2fb1fd773540a1f5e6563638a41f974f" (UID: "2fb1fd773540a1f5e6563638a41f974f")
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.377582    1445 reconciler.go:242] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2b6bdcbe503076f461f4a48a6b92308c-ssl-certs-kubernetes" (spec.Name: "ssl-certs-kubernetes") pod "2b6bdcbe503076f461f4a48a6b92308c" (UID: "2b6bdcbe503076f461f4a48a6b92308c")
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.377613    1445 reconciler.go:242] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2b6bdcbe503076f461f4a48a6b92308c-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2b6bdcbe503076f461f4a48a6b92308c" (UID: "2b6bdcbe503076f461f4a48a6b92308c")
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.377644    1445 reconciler.go:242] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/3c34b284dc6e937ee99dffc54d193070-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "3c34b284dc6e937ee99dffc54d193070" (UID: "3c34b284dc6e937ee99dffc54d193070")
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.453566    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.485035    1445 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2fb1fd773540a1f5e6563638a41f974f-ssl-certs-kubernetes" (spec.Name: "ssl-certs-kubernetes") pod "2fb1fd773540a1f5e6563638a41f974f" (UID: "2fb1fd773540a1f5e6563638a41f974f").
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.485161    1445 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2fb1fd773540a1f5e6563638a41f974f-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2fb1fd773540a1f5e6563638a41f974f" (UID: "2fb1fd773540a1f5e6563638a41f974f").
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.485234    1445 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2b6bdcbe503076f461f4a48a6b92308c-ssl-certs-kubernetes" (spec.Name: "ssl-certs-kubernetes") pod "2b6bdcbe503076f461f4a48a6b92308c" (UID: "2b6bdcbe503076f461f4a48a6b92308c").
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.485315    1445 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2b6bdcbe503076f461f4a48a6b92308c-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2b6bdcbe503076f461f4a48a6b92308c" (UID: "2b6bdcbe503076f461f4a48a6b92308c").
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.485404    1445 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/3c34b284dc6e937ee99dffc54d193070-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "3c34b284dc6e937ee99dffc54d193070" (UID: "3c34b284dc6e937ee99dffc54d193070").
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.518394    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: E0622 13:11:15.584488    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.659373    1445 kuberuntime_manager.go:458] Container {Name:kube-apiserver Image:quay.io/coreos/hyperkube:v1.6.6_coreos.0 Command:[/hyperkube apiserver --bind-address=0.0.0.0 --etcd-servers=http://10.44.1.191:2379,http://10.44.1.192:2379,http://10.44.1.193:2379 --allow-privileged=true --service-cluster-ip-range=10.30.0.0/16 --secure-port=443 --advertise-address=10.44.1.191 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem --runtime-config=extensions/v1beta1/networkpolicies=true --anonymous-auth=false] Args:[] WorkingDir: Ports:[{Name:https HostPort:443 ContainerPort:443 Protocol:TCP HostIP:} {Name:local HostPort:8080 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:ssl-certs-kubernetes ReadOnly:true MountPath:/etc/kubernetes/ssl SubPath:} {Name:ssl-certs-host ReadOnly:true MountPath:/etc/ssl/certs SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8080,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.679980    1445 kuberuntime_manager.go:458] Container {Name:kube-controller-manager Image:quay.io/coreos/hyperkube:v1.6.6_coreos.0 Command:[/hyperkube controller-manager --master=http://127.0.0.1:8080 --leader-elect=true --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI}]} VolumeMounts:[{Name:ssl-certs-kubernetes ReadOnly:true MountPath:/etc/kubernetes/ssl SubPath:} {Name:ssl-certs-host ReadOnly:true MountPath:/etc/ssl/certs SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 22 13:11:15 k8s-1 kubelet-wrapper[1445]: I0622 13:11:15.680393    1445 kuberuntime_manager.go:458] Container {Name:kube-proxy Image:quay.io/coreos/hyperkube:v1.6.6_coreos.0 Command:[/hyperkube proxy --master=http://127.0.0.1:8080] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:ssl-certs-host ReadOnly:true MountPath:/etc/ssl/certs SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 22 13:11:16 k8s-1 kubelet-wrapper[1445]: E0622 13:11:16.454648    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:16 k8s-1 kubelet-wrapper[1445]: E0622 13:11:16.519202    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:16 k8s-1 kubelet-wrapper[1445]: E0622 13:11:16.585458    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: E0622 13:11:17.456527    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: E0622 13:11:17.520805    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: E0622 13:11:17.586804    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: I0622 13:11:17.779272    1445 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: I0622 13:11:17.782569    1445 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 22 13:11:17 k8s-1 kubelet-wrapper[1445]: E0622 13:11:17.783670    1445 kubelet_node_status.go:101] Unable to register node "10.44.1.191" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:18 k8s-1 kubelet-wrapper[1445]: E0622 13:11:18.458221    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:18 k8s-1 kubelet-wrapper[1445]: E0622 13:11:18.522295    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:18 k8s-1 kubelet-wrapper[1445]: E0622 13:11:18.588173    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:19 k8s-1 kubelet-wrapper[1445]: E0622 13:11:19.460023    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:19 k8s-1 kubelet-wrapper[1445]: E0622 13:11:19.524012    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:19 k8s-1 kubelet-wrapper[1445]: E0622 13:11:19.589826    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:20 k8s-1 kubelet-wrapper[1445]: E0622 13:11:20.460841    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D10.44.1.191&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jun 22 13:11:20 k8s-1 kubelet-wrapper[1445]: E0622 13:11:20.524952    1445 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
# rkt list
UUID        APP     IMAGE NAME                  STATE   CREATED     STARTED     NETWORKS
074b77c8    etcd        quay.io/coreos/etcd:v3.1.6          running 1 day ago   1 day ago   
451e54ed    flannel     quay.io/coreos/flannel:v0.7.1           running 1 day ago   1 day ago   
5e5f7f3f    hyperkube   quay.io/coreos/hyperkube:v1.6.6_coreos.0    running 41 minutes ago  41 minutes ago
# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND             CREATED             STATUS              PORTS               NAMES
04ad0658a21f        gcr.io/google_containers/pause-amd64:3.0   "/pause"            About an hour ago   Up About an hour                        k8s_POD_kube-controller-manager-10.44.1.191_kube-system_2b6bdcbe503076f461f4a48a6b92308c_0
7dac5cb7828e        gcr.io/google_containers/pause-amd64:3.0   "/pause"            About an hour ago   Up About an hour                        k8s_POD_kube-apiserver-10.44.1.191_kube-system_2fb1fd773540a1f5e6563638a41f974f_0
a9d466d1189d        gcr.io/google_containers/pause-amd64:3.0   "/pause"            About an hour ago   Up About an hour                        k8s_POD_kube-proxy-10.44.1.191_kube-system_3c34b284dc6e937ee99dffc54d193070_0
deb4b9345a9d        gcr.io/google_containers/pause-amd64:3.0   "/pause"            About an hour ago   Up About an hour                        k8s_POD_kube-scheduler-10.44.1.191_kube-system_dee2e28bf1e64115c48f6eb837ddf312_0
# systemctl cat kubelet | cat
# /etc/systemd/system/kubelet.service
[Service]
Environment="KUBELET_IMAGE_TAG=v1.6.6_coreos.0"
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid   --volume var-log,kind=host,source=/var/log   --mount volume=var-log,target=/var/log   --volume dns,kind=host,source=/etc/resolv.conf   --mount volume=dns,target=/etc/resolv.conf"
#  --volume cni-bin,kind=host,source=/opt/cni/bin #  --mount volume=cni-bin,target=/opt/cni/bin"
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper   --require-kubeconfig=true   --kubeconfig=/var/lib/kubelet/kubeconfig.yml   --register-node=true   --cni-conf-dir=/etc/kubernetes/cni/net.d   --network-plugin=   --container-runtime=docker   --allow-privileged=true   --pod-manifest-path=/etc/kubernetes/manifests   --hostname-override=10.44.1.191   --cluster_dns=10.30.0.2   --cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
# cat /var/lib/kubelet/kubeconfig.yml 

current-context: kubelet-to-kubernetes
kind: Config
apiVersion: v1
preferences:
  colors: true


# A cluster contains endpoint data for a kubernetes cluster.
# This includes the fully qualified url for the kubernetes apiserver,
# as well as the cluster’s certificate authority.
clusters:
  - name: kubernetes
    cluster:
      api-version: v1
      insecure-skip-tls-verify: true
      certificate-authority: /etc/kubernetes/ssl/ca.pem
      server: http://127.0.0.1:8080
  - name: dev-cluster
    cluster:
      api-version: v1
      insecure-skip-tls-verify: true
      certificate-authority: /etc/kubernetes/ssl/ca.pem
      server: http://127.0.0.1:8080


# A user defines client credentials for authenticating to a kubernetes cluster.
users:
  - name: kubelet
    user:
      client-certificate: /etc/kubernetes/ssl/kubelet.pem
      client-key: /etc/kubernetes/ssl/kubelet-key.pem
  - name: admin
    user:
      token: blue-token
      username: admin
      password: password
      client-certificate: /etc/kubernetes/ssl/admin.pem
      client-key: /etc/kubernetes/ssl/admin-key.pem


# A context defines a named cluster,user,namespace tuple which is used to send
# requests to the specified cluster using the provided authentication info and namespace.
contexts:
  - name: kubelet-to-kubernetes
    context:
      cluster: kubernetes
      user: kubelet
  - name: dev-context
    context:
      cluster: dev-cluster
      user: admin
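
Note: the current-context above is kubelet-to-kubernetes, whose cluster entry points at http://127.0.0.1:8080, which is exactly the address every connection-refused dial error in the kubelet log references. To double-check which server the active context resolves to (assuming kubectl is available on the node, which it may not be on stock Container Linux):

# kubectl config view --kubeconfig=/var/lib/kubelet/kubeconfig.yml --minify
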
# cat /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: quay.io/coreos/hyperkube:v1.6.6_coreos.0
      command:
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --etcd-servers=http://10.44.1.191:2379,http://10.44.1.192:2379,http://10.44.1.193:2379
        - --allow-privileged=true
        - --service-cluster-ip-range=10.30.0.0/16
        - --secure-port=443
        - --advertise-address=10.44.1.191
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --runtime-config=extensions/v1beta1/networkpolicies=true
        - --anonymous-auth=false
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          port: 8080
          path: /healthz
        initialDelaySeconds: 15
        timeoutSeconds: 15
      ports:
        - containerPort: 443
          hostPort: 443
          name: https
        - containerPort: 8080
          hostPort: 8080
          name: local
      volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
  volumes:
    - hostPath:
        path: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
    - hostPath:
        path: /usr/share/ca-certificates
      name: ssl-certs-host
# etcdctl member list
5d95d39e66f6b4cd: name=k8s-1 peerURLs=http://10.44.1.191:2380 clientURLs=http://10.44.1.191:2379 isLeader=false
7f06520a56293939: name=k8s-3 peerURLs=http://10.44.1.193:2380 clientURLs=http://10.44.1.193:2379 isLeader=false
a3082589c281438e: name=k8s-2 peerURLs=http://10.44.1.192:2380 clientURLs=http://10.44.1.192:2379 isLeader=true
# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      30367/kubelet       
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      1043/etcd           
tcp        0      0 10.44.1.191:2379        0.0.0.0:*               LISTEN      1043/etcd           
tcp        0      0 10.44.1.191:2380        0.0.0.0:*               LISTEN      1043/etcd           
tcp6       0      0 :::22                   :::*                    LISTEN      1/systemd           
tcp6       0      0 :::4194                 :::*                    LISTEN      30367/kubelet       
tcp6       0      0 :::10250                :::*                    LISTEN      30367/kubelet       
tcp6       0      0 :::10255                :::*                    LISTEN      30367/kubelet 
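
Nothing is bound to port 8080 in the netstat output above, which matches all the connection-refused errors. The insecure port can be probed directly with plain curl (the same address the kubelet and the manifest's liveness probe use); until the apiserver container is actually up, this simply gets refused:

# curl -sS http://127.0.0.1:8080/healthz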

All 5 comments

It starts eventually, but it takes ~11 minutes to initialize! Any idea why it is so slow?

What are the recommended ways to debug at such an early stage? The logs are full of errors, which creates the impression that something is misconfigured, when in fact it simply takes a long time to start up, which is confusing.
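
For now I have been watching the startup with the standard tooling from the setup above, nothing exotic:

# journalctl -u kubelet.service -f
# rkt list
# docker ps -a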

k8s-1 ~ # systemctl status kubelet
● kubelet.service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-06-23 15:02:55 UTC; 14min ago
 Main PID: 6609 (kubelet)
    Tasks: 15 (limit: 32768)
   Memory: 485.5M
      CPU: 1min 28.402s
   CGroup: /system.slice/kubelet.service
           ├─6609 /kubelet --require-kubeconfig=true --kubeconfig=/var/lib/kubelet/kubeconfig.yml --register-node=true --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin= --container-runtime=docker --allo
           └─6771 journalctl -k -f

Jun 23 15:14:05 k8s-1 kubelet-wrapper[6609]: E0623 15:14:05.687253    6609 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fi
Jun 23 15:14:05 k8s-1 kubelet-wrapper[6609]: E0623 15:14:05.821811    6609 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?re
Jun 23 15:14:05 k8s-1 kubelet-wrapper[6609]: E0623 15:14:05.835673    6609 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSel
Jun 23 15:14:06 k8s-1 kubelet-wrapper[6609]: I0623 15:14:06.592376    6609 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 23 15:14:07 k8s-1 kubelet-wrapper[6609]: I0623 15:14:07.613172    6609 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 23 15:14:09 k8s-1 kubelet-wrapper[6609]: I0623 15:14:09.746682    6609 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 23 15:14:09 k8s-1 kubelet-wrapper[6609]: W0623 15:14:09.753377    6609 kubelet.go:1524] Deleting mirror pod "kube-proxy-10.44.1.191_kube-system(99a00f6d-5826-11e7-b358-000c295952a0)" because it is outdated
Jun 23 15:14:11 k8s-1 kubelet-wrapper[6609]: I0623 15:14:11.449647    6609 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 23 15:14:11 k8s-1 kubelet-wrapper[6609]: I0623 15:14:11.453102    6609 kubelet_node_status.go:77] Attempting to register node 10.44.1.191
Jun 23 15:14:11 k8s-1 kubelet-wrapper[6609]: I0623 15:14:11.644769    6609 kubelet_node_status.go:80] Successfully registered node 10.44.1.191
k8s-1 ~ # docker ps -a
CONTAINER ID        IMAGE                                                                                              COMMAND                  CREATED             STATUS              PORTS               NAMES
de782d3b4fea        quay.io/coreos/hyperkube@sha256:415b32275d8b850c77041ec7c83f0bbc55c1a8178efdfa0ecd8c5ded34fda6e1   "/hyperkube proxy --m"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-10.44.1.191_kube-system_5fdb8119fd16f93de8c527f273288d27_0
166c04548901        quay.io/coreos/hyperkube@sha256:415b32275d8b850c77041ec7c83f0bbc55c1a8178efdfa0ecd8c5ded34fda6e1   "/hyperkube apiserver"   6 minutes ago       Up 6 minutes                            k8s_kube-apiserver_kube-apiserver-10.44.1.191_kube-system_2fb1fd773540a1f5e6563638a41f974f_0
6fb9a7097fb4        quay.io/coreos/hyperkube@sha256:415b32275d8b850c77041ec7c83f0bbc55c1a8178efdfa0ecd8c5ded34fda6e1   "/hyperkube scheduler"   6 minutes ago       Up 6 minutes                            k8s_kube-scheduler_kube-scheduler-10.44.1.191_kube-system_dee2e28bf1e64115c48f6eb837ddf312_0
2213760d1ab5        quay.io/coreos/hyperkube@sha256:415b32275d8b850c77041ec7c83f0bbc55c1a8178efdfa0ecd8c5ded34fda6e1   "/hyperkube controlle"   6 minutes ago       Up 6 minutes                            k8s_kube-controller-manager_kube-controller-manager-10.44.1.191_kube-system_2b6bdcbe503076f461f4a48a6b92308c_0
932559efa7f7        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 11 minutes ago      Up 11 minutes                           k8s_POD_kube-controller-manager-10.44.1.191_kube-system_2b6bdcbe503076f461f4a48a6b92308c_0
01a6d4333d22        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 11 minutes ago      Up 11 minutes                           k8s_POD_kube-proxy-10.44.1.191_kube-system_5fdb8119fd16f93de8c527f273288d27_0
d4cc12f1d848        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 11 minutes ago      Up 11 minutes                           k8s_POD_kube-scheduler-10.44.1.191_kube-system_dee2e28bf1e64115c48f6eb837ddf312_0
231ed7873970        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 11 minutes ago      Up 11 minutes                           k8s_POD_kube-apiserver-10.44.1.191_kube-system_2fb1fd773540a1f5e6563638a41f974f_0

@narunask 👋 This issue sounds more like a request for support and less like an issue specifically for docs. I encourage you to bring your question to the #kubernetes-users channel in Kubernetes slack.

Was this ever solved? @narunask

Could be due to image pulling ...
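
If so, something along these lines should show whether the hyperkube image is still being downloaded (assuming docker as the runtime, as configured in the unit above; the docker daemon journal may also show pull activity):

# docker images | grep hyperkube
# journalctl -u docker.service --since "10 min ago"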

@qrpike sorry for the late reply.

Could be due to image pulling ...

Yes, in my case a lack of bandwidth was the bottleneck (as far as I recall), but kubernetes could be more helpful if it showed a single line in the logs saying what it is doing behind the scenes rather than flooding the console with errors.
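
One partial workaround, untested here and purely a suggestion: raising the kubelet's glog verbosity with the standard --v flag makes the pod-sync and image-pull steps visible in the journal. With the unit file above, that could be done roughly like this (kubelet-wrapper passes extra flags through to the kubelet):

# sed -i 's|kubelet-wrapper |kubelet-wrapper --v=4 |' /etc/systemd/system/kubelet.service
# systemctl daemon-reload && systemctl restart kubelet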
