Kubeadm: this might take a minute or longer if the control plane images have to be pulled

Created on 26 Jul 2018 · 9 comments · Source: kubernetes/kubeadm

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0726 15:19:12.715065   19856 kernel_validator.go:81] Validating kernel version
I0726 15:19:12.715212   19856 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [leoyer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.104]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [leoyer localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [leoyer localhost] and IPs [192.168.1.104 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled

        Unfortunately, an error has occurred:
            timed out waiting for the condition

        This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
            - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                - k8s.gcr.io/etcd-amd64:3.2.18
                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                  are downloaded locally and cached.

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
        Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
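
The checks suggested in that output can be run in one pass; a minimal sketch using only standard systemd and Docker commands (adjust for your node):

    # Is the kubelet running, and what has it logged recently?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # Did any control plane container start and then crash?
    sudo docker ps -a | grep kube | grep -v pause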




The kubelet log:


Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.501338   20002 server.go:408] Version: v1.11.1
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.525177   20002 plugins.go:97] No cloud provider specified.
Jul 26 15:19:20 leoyer kubelet[20002]: W0726 15:19:20.525706   20002 server.go:549] standalone mode, no API client
Jul 26 15:19:20 leoyer kubelet[20002]: W0726 15:19:20.733586   20002 server.go:465] No api server defined - no events will be sent to API server.
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.733634   20002 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.746850   20002 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.747319   20002 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletC
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.747783   20002 container_manager_linux.go:267] Creating device plugin manager: true
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.748565   20002 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.748975   20002 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/kubelet/cpu_manager_state"
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.859102   20002 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.859583   20002 client.go:104] Start docker client with request timeout=2m0s
Jul 26 15:19:20 leoyer kubelet[20002]: W0726 15:19:20.904002   20002 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 26 15:19:20 leoyer kubelet[20002]: I0726 15:19:20.904459   20002 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 26 15:19:20 leoyer kubelet[20002]: W0726 15:19:20.904904   20002 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 26 15:19:20 leoyer kubelet[20002]: W0726 15:19:20.937671   20002 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.063779   20002 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.095838   20002 docker_service.go:258] Docker Info: &{ID:FCWV:FLC6:JNY2:4GHW:4UPY:CMBF:YJQP:W2CZ:SZTM:GYHO:SILX:APZV Containers:0 ContainersRunning:0 Co
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.097348   20002 docker_service.go:271] Setting cgroupDriver to cgroupfs
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.215066   20002 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.03.1-ce, apiVersion: 1.37.0
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.287303   20002 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 26 15:19:21 leoyer kubelet[20002]: E0726 15:19:21.299128   20002 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unab
Jul 26 15:19:21 leoyer kubelet[20002]: W0726 15:19:21.299608   20002 kubelet.go:1359] No api server defined - no node status update will be sent.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.300392   20002 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.313064   20002 status_manager.go:148] Kubernetes client is nil, not starting status manager.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.313987   20002 kubelet.go:1758] Starting kubelet main sync loop.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.314821   20002 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.315589   20002 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.317735   20002 server.go:302] Adding debug handlers to kubelet server.
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.321822   20002 server.go:986] Started kubelet
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.319239   20002 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.319256   20002 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.417391   20002 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.576700   20002 reconciler.go:154] Reconciler: start to sync state
Jul 26 15:19:21 leoyer kubelet[20002]: I0726 15:19:21.618315   20002 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.019335   20002 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.097825   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.106382   20002 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.106861   20002 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.107261   20002 policy_none.go:42] [cpumanager] none policy: Start
Jul 26 15:19:22 leoyer kubelet[20002]: Starting Device Plugin manager
Jul 26 15:19:22 leoyer kubelet[20002]: W0726 15:19:22.118685   20002 manager.go:496] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Jul 26 15:19:22 leoyer kubelet[20002]: I0726 15:19:22.119173   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:19:32 leoyer kubelet[20002]: I0726 15:19:32.140225   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:19:42 leoyer kubelet[20002]: I0726 15:19:42.160273   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:19:52 leoyer kubelet[20002]: I0726 15:19:52.176949   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:02 leoyer kubelet[20002]: I0726 15:20:02.263526   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:12 leoyer kubelet[20002]: I0726 15:20:12.285758   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:22 leoyer kubelet[20002]: I0726 15:20:22.331523   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:32 leoyer kubelet[20002]: I0726 15:20:32.523319   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:42 leoyer kubelet[20002]: I0726 15:20:42.651191   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 26 15:20:52 leoyer kubelet[20002]: I0726 15:20:52.676981   20002 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
help wanted

Most helpful comment

I think it was because I had installed some Kubernetes components before. After I modified the kubelet.service file, it started to work.

  • vim /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

And then I ran into another problem:

mounting "/sys/fs/cgroup" to rootfs "/var/lib/docker/overlay/b5331adb3bf783718e85bedb706d430d79d52aba138be8e06594826158d29164/merged" caused "no subsystem for mount".
Edit /etc/default/grub
Replace GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" with GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.legacy_systemd_cgroup_controller=yes"
Update GRUB and reboot: sudo update-grub && sudo reboot

I modified a system configuration. Finally, I succeeded. Thank you for your patience.

All 9 comments

@leoyer
your bug report is incomplete.

  • what OS / distro / CPU architecture is this?
  • what is your docker version?
  • the kubelet seems to be running - what is the output of sudo systemctl status kubelet?
  • do you have an internet connection?
  • did you pre-pull the images using kubeadm config images pull --kubernetes-version=1.11.1?
  • are you using a kubeadm config file (--config)? If so, what are the contents of the file? (remove sensitive data before sharing it)
  • OS : Ubuntu 17.10
  • CPU: Intel(R) Core(TM) i5-6400 CPU @ 2.70GHz
  • architecture x64
  • Docker version 17.03.0-ce
  • sudo systemctl status kubelet output
   kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadmin.conf
   Active: active (running) since Fri 2018-07-27 09:22:33 CST; 10min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 26082 (kubelet)
    Tasks: 20 (limit: 4915)
   Memory: 39.8M
      CPU: 10.489s
   CGroup: /system.slice/kubelet.service
           └─26082 /usr/bin/kubelet
Jul 27 09:31:05 leoyer kubelet[26082]: I0727 09:31:05.385977   26082 kubelet_node_status.go:269] Setti
Jul 27 09:31:15 leoyer kubelet[26082]: I0727 09:31:15.403479   26082 kubelet_node_status.go:269] Setti
Jul 27 09:31:25 leoyer kubelet[26082]: I0727 09:31:25.419190   26082 kubelet_node_status.go:269] Setti
Jul 27 09:31:35 leoyer kubelet[26082]: I0727 09:31:35.435863   26082 kubelet_node_status.go:269] Setti
Jul 27 09:31:45 leoyer kubelet[26082]: I0727 09:31:45.453341   26082 kubelet_node_status.go:269] Setti
Jul 27 09:31:55 leoyer kubelet[26082]: I0727 09:31:55.475820   26082 kubelet_node_status.go:269] Setti
Jul 27 09:32:05 leoyer kubelet[26082]: I0727 09:32:05.493034   26082 kubelet_node_status.go:269] Setti
Jul 27 09:32:15 leoyer kubelet[26082]: I0727 09:32:15.510566   26082 kubelet_node_status.go:269] Setti
Jul 27 09:32:25 leoyer kubelet[26082]: I0727 09:32:25.535113   26082 kubelet_node_status.go:269] Setti
Jul 27 09:32:35 leoyer kubelet[26082]: I0727 09:32:35.551342   26082 kubelet_node_status.go:269] Setti
  • and I have internet access; here is the docker images output:
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
etcd-integration-test                      latest              2317207e4579        8 days ago          931 MB
k8s.gcr.io/etcd-amd64                      3.2.18-0            062f5eb5dde8        8 days ago          219 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        9 days ago          97.8 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        9 days ago          155 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        9 days ago          187 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        9 days ago          56.8 MB
busybox                                    latest              22c2dd5ee85d        10 days ago         1.16 MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        3 months ago        219 MB
k8s.gcr.io/debian-hyperkube-base-amd64     0.10                7812d248bfc9        4 months ago        398 MB
golang                                     1.8.7               0d283eb41a92        5 months ago        713 MB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/pause                           latest              350b164e7ae1        4 years ago         240 kB
  • kubeadm config images pull --kubernetes-version=1.11.1
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3
  • and I just used kubeadm init

  • kubeadm init

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
    [WARNING HTTPProxy]: Connection to "https://192.168.1.104" uses proxy "http://127.0.0.1:8123". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8123". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0727 09:36:00.771978   27635 kernel_validator.go:81] Validating kernel version
I0727 09:36:00.772200   27635 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [leoyer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.104]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [leoyer localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [leoyer localhost] and IPs [192.168.1.104 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled

        Unfortunately, an error has occurred:
            timed out waiting for the condition

        This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
            - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                - k8s.gcr.io/etcd-amd64:3.2.18
                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                  are downloaded locally and cached.

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
        Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
  • now the kubelet service is active, but no containers are running
  • docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
  • cat /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
  • cat /etc/systemd/system/kubelet.service.d/10-kubeadmin.conf
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
  • cat /var/lib/kubelet/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
  • ll /etc/kubernetes/manifests/

    drwx------ 2 root root 4096 Jul 27 09:36 ./
    drwxr-xr-x 4 root root 4096 Jul 27 09:36 ../
    -rw------- 1 root root 1939 Jul 27 09:36 etcd.yaml
    -rw------- 1 root root 3820 Jul 27 09:36 kube-apiserver.yaml
    -rw------- 1 root root 3267 Jul 27 09:36 kube-controller-manager.yaml
    -rw------- 1 root root 1477 Jul 27 09:36 kube-scheduler.yaml

I don't know why it looks like this. Everything looks normal.
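
One extra check worth making, given the cgroupDriver: cgroupfs setting in the config.yaml above (a general sanity check, not something raised in this thread): Docker should report the same cgroup driver, because a mismatch also keeps the kubelet from starting pods.

    # Both sides should agree (either "cgroupfs" on both, or "systemd" on both)
    docker info 2>/dev/null | grep -i 'cgroup driver'
    grep cgroupDriver /var/lib/kubelet/config.yaml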

thanks for the details.

    [WARNING HTTPProxy]: Connection to "https://192.168.1.104" uses proxy "http://127.0.0.1:8123". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8123". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration

this seems like a proxy issue?

what flags are you passing to kubeadm init?
if you are using kubeadm init --config, could you share the contents of the config file as well?
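
If the proxy at http://127.0.0.1:8123 is actually needed, a common mitigation, sketched here with the addresses taken from the warnings above (they may need adjusting), is to exempt the node IP and the service CIDR from the proxy before running kubeadm, as the preflight warning itself suggests:

    # Keep cluster-internal traffic away from the local proxy
    export NO_PROXY=127.0.0.1,localhost,192.168.1.104,10.96.0.0/12
    export no_proxy=$NO_PROXY
    sudo -E kubeadm init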

  • kubeadm init --config
Error: flag needs an argument: --config
Usage:
  kubeadm init [flags]

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the address of the default network interface.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --config string                        Path to kubeadm config file. WARNING: Usage of a configuration file is experimental.
      --cri-socket string                    Specify the CRI socket to connect to. (default "/var/run/dockershim.sock")
      --dry-run                              Don't apply any changes; just output what would be done.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. Options are:
                                             Auditing=true|false (ALPHA - default=false)
                                             CoreDNS=true|false (default=true)
                                             DynamicKubeletConfig=true|false (ALPHA - default=false)
                                             SelfHosting=true|false (ALPHA - default=false)
                                             StoreCertsInSecrets=true|false (ALPHA - default=false)
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1.11")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and masters. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)

Global Flags:
  -v, --v Level   log level for V logs

error: flag needs an argument: --config

I just used kubeadm init without a --config file.
I thought it was a proxy problem before, so I also tried removing the proxy before running it.

I'm not using the proxy now, but the result is the same:

  • kubeadm init
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 10:18:21.013997   32184 kernel_validator.go:81] Validating kernel version
I0727 10:18:21.014182   32184 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [leoyer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.104]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [leoyer localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [leoyer localhost] and IPs [192.168.1.104 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled

hard to say what the problem is.

Have you noticed that there are no arguments after the kubelet command in my kubelet.service? Does this matter? Other people's kubelet processes seem to run with many arguments; or has the configuration changed in version 1.11.1?

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadmin.conf
   Active: active (running) since Fri 2018-07-27 09:22:33 CST; 10min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 26082 (kubelet)
    Tasks: 20 (limit: 4915)
   Memory: 39.8M
      CPU: 10.489s
   CGroup: /system.slice/kubelet.service
           └─26082 /usr/bin/kubelet

$ ps aux | grep kubelet

root      4783  1.8  1.0 1472896 83672 ?       Ssl  11:11   0:08 /usr/bin/kubelet
root      6216  0.0  0.0  16104   972 pts/1    S+   11:18   0:00 grep --color=auto kubelet
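
A quick way to see exactly which unit file and drop-ins systemd combines for the kubelet, and therefore where the missing flags should be coming from (standard systemd/procps commands, offered as a suggestion):

    # Print the effective unit, including every drop-in fragment applied to it
    sudo systemctl cat kubelet
    # Show the full argument list of the running kubelet process
    ps -o args= -C kubelet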

I think it was because I had installed some Kubernetes components before. After I modified the kubelet.service file, it started to work.

  • vim /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

And then I ran into another problem:

mounting "/sys/fs/cgroup" to rootfs "/var/lib/docker/overlay/b5331adb3bf783718e85bedb706d430d79d52aba138be8e06594826158d29164/merged" caused "no subsystem for mount".
Edit /etc/default/grub
Replace GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" with GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.legacy_systemd_cgroup_controller=yes"
Update GRUB and reboot: sudo update-grub && sudo reboot

I modified a system configuration. Finally, I succeeded. Thank you for your patience.
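
For comparison, on a node installed from the kubeadm packages the kubelet normally gets these flags from a systemd drop-in rather than from kubelet.service itself, so editing the unit file directly should not usually be necessary. A rough sketch of what such a drop-in looks like for kubeadm 1.11 (treat the exact paths and variable names as assumptions for this particular machine):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm init/join writes runtime flags (cgroup driver, CNI socket, etc.) into this file
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Whichever file you change, reload systemd and restart the kubelet afterwards so the new ExecStart takes effect:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet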

@leoyer thanks; by following your approach and making the change below, v1.11.1 works like a charm.

ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

The funny thing is, each time I upgrade to a new version, I have to make the same change to the kubelet.service file.
