------------- BUG REPORT TEMPLATE --------------------
What commands did you run? What is the simplest way to reproduce this issue?
kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=117....(my server's ip)
Whenever the operation failed, I deleted the existing directories that must be empty before kubeadm init starts, so the issue is easy to reproduce.
What happened after the commands executed?
This is the whole information below after the commands executed:
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 17.501353 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s.master as master by adding a label and a taint
error marking master: timed out waiting for the condition
What did you expect to happen?
I expected my Kubernetes master to be initialized successfully.
Anything else do we need to know?
I'm quite new to k8s 1.9. Because of the network restrictions in China, we can't connect to Google directly, so I downloaded the packages (kubernetes-cni-0.6.0-0.x86_64, kubeadm-1.9.0-0.x86_64, kubelet-1.9.0-0.x86_64 and kubectl-1.9.0-0.x86_64) and installed them manually. Likewise, there is no way to pull the base images from Google, so I pulled them from Docker Hub instead: kube-proxy-amd64:v1.9.0, kube-apiserver-amd64:v1.9.0, kube-controller-manager-amd64:v1.9.0, kube-scheduler-amd64:v1.9.0, k8s-dns-sidecar-amd64:v1.14.7, k8s-dns-kube-dns-amd64:v1.14.7, k8s-dns-dnsmasq-nanny-amd64:v1.14.7, etcd-amd64:v3.1.10 and pause-amd64:v3.0.
When the init command runs, it hangs at the 'markmaster' step; a few minutes later it gives up and exits with 'error marking master: timed out waiting for the condition'.
Here is the state of the kubelet, from systemctl status kubelet:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2018-02-06 14:22:14 CST; 10min ago
Docs: http://kubernetes.io/docs/
Main PID: 18593 (kubelet)
Memory: 37.7M
CGroup: /system.slice/kubelet.service
└─18593 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kub...
Feb 06 14:32:09 k8s.master kubelet[18593]: E0206 14:32:09.160566 18593 kubelet_node_status.go:106] Unable to register node "k8s.master" with API server: no...8s.master"
Feb 06 14:32:12 k8s.master kubelet[18593]: W0206 14:32:12.107539 18593 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 06 14:32:12 k8s.master kubelet[18593]: E0206 14:32:12.107636 18593 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:Netw...nitialized
Feb 06 14:32:15 k8s.master kubelet[18593]: I0206 14:32:15.916562 18593 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Feb 06 14:32:16 k8s.master kubelet[18593]: I0206 14:32:16.160681 18593 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Feb 06 14:32:16 k8s.master kubelet[18593]: I0206 14:32:16.161903 18593 kubelet_node_status.go:82] Attempting to register node k8s.master
Feb 06 14:32:16 k8s.master kubelet[18593]: E0206 14:32:16.162735 18593 kubelet_node_status.go:106] Unable to register node "k8s.master" with API server: no...8s.master"
Feb 06 14:32:17 k8s.master kubelet[18593]: E0206 14:32:17.035755 18593 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: ... not found
Feb 06 14:32:17 k8s.master kubelet[18593]: W0206 14:32:17.108232 18593 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 06 14:32:17 k8s.master kubelet[18593]: E0206 14:32:17.108330 18593 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:Netw...nitialized
Hint: Some lines were ellipsized, use -l to show in full.
I've been stuck here for several days; I'd appreciate any suggestions, thanks a lot!
I had this problem and I fixed it by setting the --node-name switch of kubeadm init to the FQDN of the master.
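For illustration, a minimal sketch of that invocation (the pod CIDR here is just the flannel-style value from the original report, not something required by the fix):
# Pass the host's FQDN explicitly so the node name the kubelet registers
# matches the node that kubeadm tries to mark as master.
kubeadm init --node-name="$(hostname -f)" --pod-network-cidr=10.244.0.0/16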
@duckie thanks bro, I've solved this problem by retagging the images to gcr.io/google_containers/...
That's it. Quite weird, but it works now.
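For anyone hitting the same image problem, a rough sketch of the retagging workaround (the mirror repository name below is only a placeholder; substitute whichever Docker Hub mirror you actually pulled from):
# Pull the image from a reachable mirror, then retag it to the
# gcr.io/google_containers name that kubeadm 1.9 expects.
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.9.0
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.9.0 gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
# Repeat for the other control-plane, DNS, etcd and pause images listed above.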
Hello,
I have the same issue in kubeadm-1.9.3
None of the methods mentioned worked for me. Are there more detailed logs for kubeadm anywhere?
Thanks a lot!
@jnickc kubeadm has its own repo ;) you may want to file issues there.
The same issue in kubeadm v1.9.3
For the mark-master timeout condition:
1. Check the hostname; that can help you.
or
2. Check the kubelet config file in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, look at the "--hostname-override" flag, and make sure it is spelled right (see the sketch below).
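A quick sketch of that check, using the paths from the comment above (adjust if your drop-in lives elsewhere):
# The kubelet's node name must match the name kubeadm tries to mark.
hostname -f
grep -n "hostname-override" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# After editing the drop-in, reload units and restart the kubelet:
systemctl daemon-reload && systemctl restart kubelet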
Hi All,
I have the same issue in kubeadm-1.10.0
None of the methods mentioned worked for me. After removing --hostname-override from the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, I was at least able to initialize the cluster. I did not provide --node-name.
If I give --node-name=
eviction manager: failed to get get summary stats: failed to get node info: node "ubuntu" not found
Thanks!
@manojgoyal04 --node-name must be set to the actual FQDN of the host. A name of your own devising won't work.
@kirsazlid did you solve this issue? I am having the same error.
Same issue for me. I've tried the documentation above but it didn't resolve the problem.
I had a similar issue - it was fixed by adding the --pod-network-cidr= parameter, with my local pod IP range, to the kubeadm init call.
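For example, a minimal sketch (the CIDR must match the pod-network add-on you intend to install, e.g. 10.244.0.0/16 for flannel or 192.168.0.0/16 for Calico's defaults):
# Re-run init with an explicit pod CIDR that matches your CNI add-on.
kubeadm init --pod-network-cidr=10.244.0.0/16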
I too had the same issue in 1.11.0-00, but adding --node-name= fixed it. Thanks @duckie
Is there another solution? I'm having the same problem, but none of the solutions above works for me.
I had the same issue, and I found that etcd was not listening on 127.0.0.1:2379. Once I fixed that by editing the etcd manifest (adding --listen-client-urls=https://127.0.0.1:2379,https://
everything worked.
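A sketch of how to verify that flag in the static pod manifest (the <host-ip> placeholder stands in for whatever address the comment above trailed off with, e.g. the node's advertise address):
# Check which client URLs the local etcd static pod listens on.
grep -n "listen-client-urls" /etc/kubernetes/manifests/etcd.yaml
# Expected to include something like:
#   - --listen-client-urls=https://127.0.0.1:2379,https://<host-ip>:2379
# The kubelet watches the manifests directory and restarts the etcd pod after edits.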
I met the same problem when following the official document to set up an HA master.
[markmaster] Marking the node ip-10-32-166-162.us-west-2.compute.internal as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node ip-10-32-166-162.us-west-2.compute.internal as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
I have tried specifying --node-name as above. It works when setting up a single-master cluster by passing the argument to kubeadm, but it does not work when using the --config file style (specifying the nodeRegistration.name field) for the HA cluster in the official docs.
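For reference, a rough sketch of the --config equivalent (assuming the v1alpha2 MasterConfiguration format that kubeadm 1.11 uses; nodeRegistration.name is the field that corresponds to --node-name, and this minimal file is hypothetical — merge the fields into your real kubeadm-config.yaml):
# Write a minimal config that pins the node name to the host's FQDN.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
nodeRegistration:
  name: $(hostname -f)
EOF
kubeadm init --config kubeadm-config.yaml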
Hi MarkBooch
I am also stuck at the same place, running kubeadm alpha phase mark-master --config kubeadm-config.yaml while following https://v1-11.docs.kubernetes.io/docs/setup/independent/high-availability/
[markmaster] Marking the node kb8-master2 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kb8-master2 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
timed out waiting for the condition
Did you find a solution to this?
kubeadm is a low-level building-block tool that is intended to be consumed through higher-level tools such as kops or the cluster-api.
With kops, to create an HA cluster, you do:
kops create cluster ha.k8s.local --master-count=3 --zones us-east-2a
To spread across zones:
kops create cluster ha.k8s.local --master-count=3 --zones us-east-2a,us-east-2b,us-east-2c
(EBS disks are tied to zones, hence we need to be specific about the zones)
I would still really like to understand and solve the issue I raised while following the document.