Kubeadm: Problem with Init pod

Created on 26 Sep 2018 · 11 comments · Source: kubernetes/kubeadm

Hi all! I have a problem executing this command:

uadmin@kubernetes-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.12.0
[sudo] password for uadmin:
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.12.0. Kubeadm version: 1.11.x
I0926 05:27:35.104772 76803 kernel_validator.go:81] Validating kernel version
I0926 05:27:35.105030 76803 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03
[preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

But the command netstat -a | grep 102 shows me:

uadmin@kubernetes-master:~$ netstat -a | grep 102
tcp 0 0 localhost.localdo:10248 0.0.0.0:* LISTEN
tcp 0 0 localhost.localdo:10251 0.0.0.0:* LISTEN
tcp 0 0 localhost.localdo:10252 0.0.0.0:* LISTEN
tcp 0 0 localhost.localdo:10251 localhost.localdo:51418 TIME_WAIT
tcp 0 0 localhost.localdo:10252 localhost.localdo:35284 TIME_WAIT
tcp 0 0 localhost.localdo:35258 localhost.localdo:10252 TIME_WAIT
tcp 0 0 localhost.localdo:10252 localhost.localdo:35204 TIME_WAIT
tcp 0 0 localhost.localdo:10251 localhost.localdo:51360 TIME_WAIT
tcp 0 0 localhost.localdo:10251 localhost.localdo:51484 TIME_WAIT
tcp 0 0 localhost.localdo:10252 localhost.localdo:35324 TIME_WAIT
tcp 0 0 localhost.localdo:51598 localhost.localdo:10251 TIME_WAIT
tcp 0 0 localhost.localdo:35348 localhost.localdo:10252 TIME_WAIT
tcp 0 0 localhost.localdo:10251 localhost.localdo:51508 TIME_WAIT
tcp 0 0 localhost.localdo:10251 localhost.localdo:51444 TIME_WAIT
tcp 0 0 localhost.localdo:10252 localhost.localdo:35438 TIME_WAIT
tcp6 0 0 [::]:10250 [::]:* LISTEN

Is port 10251 in use? I do not see any process using port 10251. Any ideas?
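A note on reading that netstat output: TIME_WAIT entries are leftovers of already-closed connections; only LISTEN lines mean a process is bound to the port, and netstat needs -p (as root) to show which process that is. A small runnable sketch over a trimmed sample of the output above:

```shell
# Trimmed sample of the netstat output above.
cat > /tmp/netstat_sample.txt <<'EOF'
tcp 0 0 localhost.localdo:10251 0.0.0.0:* LISTEN
tcp 0 0 localhost.localdo:10251 localhost.localdo:51418 TIME_WAIT
tcp 0 0 localhost.localdo:10252 localhost.localdo:35284 TIME_WAIT
EOF

# Only the LISTEN line indicates a bound port; TIME_WAIT lines do not.
grep 'LISTEN' /tmp/netstat_sample.txt

# To see which PID owns a listener (netstat -a alone omits this), run:
#   sudo netstat -tulpn | grep 10251
# or:
#   sudo ss -tlnp | grep 10251
```

Here the listeners on 10251 and 10252 are most likely the kube-scheduler and kube-controller-manager started by an earlier kubeadm init, which is exactly what the preflight check is complaining about.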

priority/awaiting-more-evidence

Most helpful comment

@antihacker81

try calling kubeadm reset first.
it seems like kubeadm init was already called on this node.

/priority awaiting-more-evidence

All 11 comments

@antihacker81

try calling kubeadm reset first.
it seems like kubeadm init was already called on this node.

/priority awaiting-more-evidence

Thanks! That helped! But now it fails with errors:

uadmin@kubernetes-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.12.0
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.12.0. Kubeadm version: 1.11.x
I0926 05:46:10.804698 83029 kernel_validator.go:81] Validating kernel version
I0926 05:46:10.805059 83029 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.0]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.12.0]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.12.0]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.12.0]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
uadmin@kubernetes-master:~$ sudo kubeadm config images pull
unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving

you are using kubeadm 1.11 so you should use this:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

without --kubernetes-version=1.12.0.
1.12.0 is not released yet.

unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving

you don't seem to have internet access.
you can download the images when you have internet and try again.

/close

@neolit123: Closing this issue.

In response to this:

you are using kubeadm 1.11 so you should use this:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

without --kubernetes-version=1.12.0.
1.12.0 is not released yet.

unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving

you don't seem to have internet access.
you can download the images when you have internet and try again.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Hmm, very interesting. I have a stable internet connection. OK, I will try later.
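A hedged aside on that DNS error: on Ubuntu 18.04, 127.0.0.53 is the local stub listener of systemd-resolved, so "lookup dl.k8s.io on 127.0.0.53:53: server misbehaving" can appear even when the connection itself is fine but the stub resolver is not. A sketch of the check, using a sample resolv.conf in the form systemd-resolved writes it (the sample file path is illustrative):

```shell
# Sample /etc/resolv.conf in the form systemd-resolved writes it
# on Ubuntu 18.04 (written to /tmp here for illustration).
cat > /tmp/resolv_sample.conf <<'EOF'
nameserver 127.0.0.53
options edns0
EOF

# If the only nameserver is the local stub (127.0.0.53), DNS failures
# implicate systemd-resolved rather than the upstream connection.
grep '^nameserver' /tmp/resolv_sample.conf

# One-off check that bypasses the stub by querying a public resolver:
#   nslookup dl.k8s.io 8.8.8.8
```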

neolit123

After executing the following commands you mentioned, kubeadm has been initialized. However, one of the worker nodes is still NotReady; it is marked ContainerCreating.

$ sudo kubeadm reset
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

While I tried to configure RBAC, it reported a 404 error.

$ kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

error: unable to read URL "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml", server reported 404 Not Found, status code=404

hi, flannel is pretty much unsupported by the kubeadm team at this point due to a number of bugs.
you can try another CNI plugin.

error: unable to read URL

you can try logging an issue in the flannel repository about this.

Hi neolit123

I applied the following RBAC manifest according to Flannel's recent update.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

It shows the following messages.

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created

Please see the weblink: https://coreos.com/flannel/docs/latest/kubernetes.html

How do I install the CNI plugins for Flannel? I see the following commands recommended by someone, but I could not execute them because I had not installed Go, the programming language from Google.

go get -d github.com/containernetworking/plugins
cd ~/go/src/github.com/containernetworking/plugins
./build.sh
sudo cp bin/* /opt/cni/bin/

I am in the Nvidia Jetson AI dev environment, which enables Docker. The default configuration for ARM64 is host-gw (not vxlan, as on AMD64) in Kubernetes. Jetson comes with Ubuntu 18.04 pre-installed; at present I operate it from the Ubuntu 18.04 command-line interface on the Jetson.

Do I need to run the above-mentioned commands through the Docker command line by setting up a docker user?

Please indicate how to install the CNI for Flannel.

Best regards

How to install CNI for Flannel?

i haven't tried flannel in a while.
you can try asking in the support channels, such as #kubeadm on k8s slack.

Hi neolit123

Flannel is recommended by Joe Beda and Brendan Burns, the Google Kubernetes co-creators. They published Kubernetes: Up & Running, a classic book on Kubernetes, but it has only a quite short description of Flannel. Please see the Go installation guide below, covering both the prerequisite and the CNI plugins.

1. Prerequisite

Install go on Ubuntu 18.04

1). Create the go directory and change into it

$ mkdir ~/go && cd ~/go

2). Download Go into the go directory from the official weblink

https://golang.org/doc/install?download=go1.14.4.linux-arm64.tar.gz

(home page: https://golang.org/dl/)

3). Extract it into /usr/local, creating a Go tree in /usr/local/go.

$ sudo tar -C /usr/local -zxvf go1.14.4.linux-arm64.tar.gz

4). Add /usr/local/go/bin to the PATH environment variable.

Open $HOME/.profile

$ gedit $HOME/.profile

Add the following line into $HOME/.profile

export PATH=$PATH:/usr/local/go/bin

Save and exit from $HOME/.profile

5). Reboot (or log out and back in) to activate the environment variables

$ sudo reboot

6). Test Go

Please have a look at the above-mentioned weblink to build and test hello.go, to confirm the successful installation of Go on your Ubuntu 18.04.
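As a small addition to step 5) above: a full reboot is not strictly required, since exporting the PATH entry takes effect in the current shell immediately; a quick check confirms the entry is active:

```shell
# Appending /usr/local/go/bin takes effect in the current shell right away;
# a reboot (or logout/login) only matters for future login shells.
export PATH=$PATH:/usr/local/go/bin

# Verify the entry is now on the PATH.
echo "$PATH" | tr ':' '\n' | grep -x '/usr/local/go/bin'
```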

2. Build the CNI plugins for Flannel

There are four steps to configure the CNI plugins for Flannel.

go get -d github.com/containernetworking/plugins
cd ~/go/src/github.com/containernetworking/plugins
./build_linux.sh
sudo cp bin/* /opt/cni/bin/

Please see the detailed configuration processes as follows.

$ go get -d github.com/containernetworking/plugins

package github.com/containernetworking/plugins: no Go files in /home/cm/go/src/github.com/containernetworking/plugins

Even though it prints the above message, the plugins directory does exist.

$ cd ~/go/src/github.com/containernetworking/plugins
~/go/src/github.com/containernetworking/plugins$ ./build_linux.sh

Building plugins
bandwidth
firewall
flannel
portmap
sbr
tuning
bridge
host-device
ipvlan
loopback
macvlan
ptp
vlan
dhcp
host-local
static

~/go/src/github.com/containernetworking/plugins$ sudo cp bin/* /opt/cni/bin/

This completes the installation of the CNI plugins.

Notes:

Both CoreOS and Kubernetes have only a quite short (vague) description of building the CNI plugins for Flannel, which is quite depressing. In contrast, Calico has a quite detailed description of installing its CNI plugin. I do not know why Joe Beda and Brendan Burns prefer Flannel to Calico.

Reference:
https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
https://docs.projectcalico.org/getting-started/kubernetes/hardway/install-cni-plugin

Following the Go installation shared above, I have solved the issue with the following Flannel cluster network setup.

1. Configure the Flannel

_For ARM64 Architecture_

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
  | sed "s/amd64/arm64/g" | sed "s/vxlan/host-gw/g" \
  > kube-flannel.yaml

Or

_For AMD64 Architecture_

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yaml
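The sed substitutions in the ARM64 variant can be sanity-checked on a couple of sample lines in the style of the flannel manifest (the sample content below is illustrative, not the full upstream file):

```shell
# Two illustrative lines in the style of the upstream kube-flannel.yml.
cat > /tmp/kube-flannel-sample.yml <<'EOF'
        image: quay.io/coreos/flannel:v0.10.0-amd64
        "Type": "vxlan"
EOF

# Rewrite the arch suffix and the backend; arm64 matches the
# kube-flannel-ds-arm64 DaemonSet checked on the Jetson later.
sed "s/amd64/arm64/g" /tmp/kube-flannel-sample.yml | sed "s/vxlan/host-gw/g"
```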

2. Apply Flannel and Check the Configuration

1). Apply Flannel

$ kubectl apply -f kube-flannel.yaml

2). Check the Configuration

_Check kube-flannel-cfg_

$ kubectl describe --namespace=kube-system configmaps/kube-flannel-cfg

_Check the DaemonSet_

_For ARM64 Architecture_

$ kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds-arm64

or

_For AMD64 Architecture_

$ kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds

3. Apply RBAC

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged

4. Get nodes

After executing the commands on the master node, all the nodes now show a Status of Ready.

$ kubectl get nodes

Notes:

Before setting up Flannel, you must complete the Kubernetes master node initialization and have the worker nodes join.

Cheers.

Reference

  1. Kubernetes: Up & Running, Second Edition, Brendan Burns, Joe Beda, Kelsey Hightower

  2. Kubernetes Network Guideline, Du Jun

  3. https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/

  4. https://coreos.com/flannel/docs/latest/kubernetes.html
