Kubespray: scale.yml

Created on 8 Nov 2019 · 8 Comments · Source: kubernetes-sigs/kubespray

Environment:

  • Cloud provider or hardware configuration:
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

  • Version of Ansible (ansible --version):

Kubespray version (commit) (git rev-parse --short HEAD):

Network plugin used:

Copy of your inventory file:

Command used to invoke ansible:

Output of ansible run:

Anything else we need to know:

I think I have found two problems:
1/ scale.yml fails on a fresh CentOS host because kubelet is deployed with a different cgroup driver than Docker. Strangely, cluster.yml produces the same mismatch, yet it works, except when you later try to restart kubelet...
2/ The file /etc/cni/net.d/10-flannel.conflist is not deployed, which prevents kubelet from starting.
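For reference (not part of the original report), the flannel CNI config that is expected at that path typically looks like the snippet below, taken from the upstream kube-flannel deployment; exact plugin versions and options may differ in your Kubespray release:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

When this file is absent, kubelet logs "No networks found in /etc/cni/net.d" and the node stays NotReady.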

kind/bug lifecycle/rotten

All 8 comments

Same issue here with Kubespray v2.11.0 + CentOS 7
scale.yml does not work.
/etc/cni/net.d/10-calico.conflist and /etc/cni/net.d/calico-kubeconfig are missing.

kubelet: I1119 17:44:32.195010   17070 server.go:1025] Using root directory: /var/lib/kubelet
kubelet: I1119 17:44:32.195035   17070 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
kubelet: I1119 17:44:32.195095   17070 file.go:68] Watching path "/etc/kubernetes/manifests"
kubelet: I1119 17:44:32.195117   17070 kubelet.go:306] Watching apiserver
kubelet: E1119 17:44:32.197231   17070 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s83&limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
kubelet: E1119 17:44:32.197231   17070 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
kubelet: E1119 17:44:32.197349   17070 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s83&limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
kubelet: I1119 17:44:32.199089   17070 client.go:75] Connecting to docker on unix:///var/run/docker.sock
kubelet: I1119 17:44:32.199120   17070 client.go:104] Start docker client with request timeout=2m0s
kubelet: W1119 17:44:32.201295   17070 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
kubelet: I1119 17:44:32.201330   17070 docker_service.go:238] Hairpin mode set to "hairpin-veth"
kubelet: W1119 17:44:32.201615   17070 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet: W1119 17:44:32.204678   17070 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
kubelet: W1119 17:44:32.204747   17070 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet: I1119 17:44:32.204778   17070 plugins.go:161] Loaded network plugin "cni"
kubelet: I1119 17:44:32.204805   17070 docker_service.go:253] Docker cri networking managed by cni
kubelet: W1119 17:44:32.204905   17070 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
kubelet: I1119 17:44:32.220107   17070 docker_service.go:258] Docker Info: &{ID:VXQM:4VS7:4G2O:BPSU:S63E:RCJT:WLLO:GEJH:EOJW:MOD4:W5HA:VCQN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:45 SystemTime:2019-11-19T17:44:32.205954456+02:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1062.4.3.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00072c070 NCPU:16 MemTotal:16802422784 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s83 Labels:[] ExperimentalBuild:false ServerVersion:18.09.7 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b34a5c8af56e510852c35414db4c1f4fa6172339 Expected:b34a5c8af56e510852c35414db4c1f4fa6172339} RuncCommit:{ID:3e425f80a8c931f88e6d94a8c831b9d5aa481657 Expected:3e425f80a8c931f88e6d94a8c831b9d5aa481657} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
kubelet: F1119 17:44:32.220300   17070 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
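The fatal line above shows kubelet configured for "systemd" while Docker reports "cgroupfs". A generic way to reconcile the two outside of Kubespray (this is a standard Docker remedy, not the fix Kubespray itself applies) is to switch Docker to the systemd driver in /etc/docker/daemon.json:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After editing, restart Docker (systemctl restart docker) and then kubelet; note that changing the driver on a node with running containers can disrupt them.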

@lystor which version of kubespray are you using? If you are on release 2.11.0, you are experiencing this bug: https://github.com/kubernetes-sigs/kubespray/pull/5193
It is fixed in the master branch (https://github.com/kubernetes-sigs/kubespray/commit/8cb54cd74d93d79a9af4487215be2ebfed7d4baa#diff-3b62b6b5deadb2d4dcb5a404322f697a), so you may pull the changes into your repo.

Hi @Pefou-flo,
I am already using it as a workaround.
Thank you

Same issue here using Ubuntu 16.04.

I found this related issue on Ubuntu: https://github.com/kubernetes-sigs/kubespray/issues/5262

As a workaround, they propose explicitly setting "kubelet_cgroup_driver=cgroupfs"
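Applying that workaround in a Kubespray inventory is a single group variable; the file path below is the conventional Kubespray sample layout, so adjust it to your own inventory:

```yaml
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Force kubelet to use the same cgroup driver that Docker reports (cgroupfs here),
# until the fix from the master branch is in your checkout.
kubelet_cgroup_driver: cgroupfs
```

Re-run scale.yml after setting it so the kubelet config on the new nodes is regenerated.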

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
