kubeadm complains about settings:
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
02:43:03 | ! I0716 02:43:03.032081 3466 utils.go:247] > [init] Using Kubernetes version: v1.15.0
02:43:03 | ! I0716 02:43:03.043702 3466 utils.go:247] > [preflight] Running pre-flight checks
02:43:03 | ! I0716 02:43:03.338279 3466 utils.go:247] ! [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
02:43:03 | ! I0716 02:43:03.566271 3466 utils.go:247] ! [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
02:43:03 | ! I0716 02:43:03.990579 3466 utils.go:247] > [preflight] Pulling images required for setting up a Kubernetes cluster
02:43:03 | ! I0716 02:43:03.990626 3466 utils.go:247] > [preflight] This might take a minute or two, depending on the speed of your internet connection
02:43:03 | ! I0716 02:43:03.990649 3466 utils.go:247] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
02:43:03 | ! I0716 02:43:03.990706 3466 utils.go:247] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.
Changing the settings such that your container runtime and kubelet use systemd as the cgroup driver stabilized the system.
Recommended /etc/docker/daemon.json (storage driver "overlay2" is now the default anyway)
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
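A quick way to sanity-check that file before dropping it in place is sketched below. It writes the JSON to the current directory and validates it with python3 (an assumption here); copying it to /etc/docker/daemon.json and restarting the docker service is left to the operator, since that needs root.

```shell
# Write the recommended daemon.json locally and validate it.
# NOTE: installing it as /etc/docker/daemon.json and running
# "systemctl restart docker" still has to be done as root.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json: valid JSON"
```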
Similar setting for CRI-O:
# Cgroup management implementation used for the runtime.
cgroup_manager = "cgroupfs"
And for containerd:
systemd_cgroup = false
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node
When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet [...]
The automatic detection of cgroup driver for other container runtimes like CRI-O and containerd is work in progress.
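Until that detection lands, the kubelet side can be pinned explicitly via its extra-args file. A sketch (the /etc/sysconfig/kubelet path is the RPM-package layout and is an assumption here; Debian-style installs read /etc/default/kubelet instead). The file is generated locally so it can be inspected first:

```shell
# Generate the kubelet extra-args drop-in locally; installing it to
# /etc/sysconfig/kubelet (or /etc/default/kubelet on deb systems) and
# restarting the kubelet are manual, root-only steps.
printf 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd\n' > kubelet.sysconfig
cat kubelet.sysconfig
```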
Related to #4172
Possibly related: #4144 #2381
Here is the current workaround, when using something like the default Docker on CentOS:
sudo minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd
The "docker" package (1.13.1) already has "systemd" as the default.
Note: also need to set SELinux to permissive (setenforce 0)
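The right value for that --extra-config flag depends on what the host is actually running. A small sketch to pick it (assumes a Linux host with /proc mounted):

```shell
# Match the kubelet cgroup driver to the host init system: if PID 1 is
# systemd, "systemd" is the sane choice; otherwise fall back to "cgroupfs".
if [ "$(cat /proc/1/comm)" = "systemd" ]; then
  driver=systemd
else
  driver=cgroupfs
fi
echo "--extra-config=kubelet.cgroup-driver=${driver}"
```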
Marking as dupe of #4172
This is not a dupe, this one is about actually changing the setting for the VM (to "systemd").
The other issue is about detecting the current configuration, and setting kubernetes config.
@afbjorklund could I assign it to you?
_For Docker:_
The other drivers are already the default (overlay2 and json-file).
"log-opts": {
"max-size": "100m"
},
The default max-size is unlimited; I guess we can cap it at 100m.
https://docs.docker.com/config/containers/logging/json-file/
It's not really related to cgroup, but recommended by kubeadm...