Kubespray: FAIL: kubeadm fails to Init other uninitialized masters using Openstack

Created on 28 Dec 2018 · 7 comments · Source: kubernetes-sigs/kubespray

BUG REPORT:

Problem

It is possible to set up a cluster using kubeadm with ONE master and one or more nodes.

If one uses THREE masters, the first master gets initialized without problems, but as soon as the other masters get initialized a timeout occurs.

This is true for tag 2.8.1 and also with the latest commit.

If I do not use kubeadm, no error happens and a cluster with three nodes gets created without problems.

_Kubespray Configuration:_

k8s-cluster.yml
all.yml

Ansible error using kubespray tag 2.8.1

As soon as kubeadm_enabled: true is set, a fatal error occurs at this task:

TASK [kubernetes/master : kubeadm | Init other uninitialized masters]

After days of trying out different setups, it was never possible to set up the cluster using kubeadm.

Environment:

  • Cloud provider or hardware configuration:
    Openstack
    openstack 3.17.0
    Release PIKE

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 4.4.0-139-generic x86_64
    NAME="Ubuntu"
    VERSION="16.04.5 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.5 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial

  • Version of Ansible (ansible --version):

ansible 2.6.11
config file = None
configured module search path = [u'/Users/user123456/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15 (default, May 1 2018, 16:44:37) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]

Kubespray version (commit) (git rev-parse --short HEAD):
2ac1c7562f46
tag 2.8.1

Network plugin used:
weave

inventory file:

using ansible-inventory

--@dc=compute:
| |--mykc-dev-k8s-master-1
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
| |--mykc-dev-k8s-node-nf-1
|--@etcd:
| |--mykc-dev-k8s-master-1
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
|--@k8s-cluster:
| |--mykc-dev-k8s-master-1
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
| |--mykc-dev-k8s-node-nf-1
|--@kube-master:
| |--mykc-dev-k8s-master-1
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
|--@kube-node:
| |--mykc-dev-k8s-node-nf-1
|--@no-floating:
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
| |--mykc-dev-k8s-node-nf-1
|--@os_flavor=m1.small:
| |--mykc-dev-k8s-master-1
| |--mykc-dev-k8s-master-nf-1
| |--mykc-dev-k8s-master-nf-2
| |--mykc-dev-k8s-node-nf-1

Command used to invoke ansible:

ansible-playbook --become -i contrib/terraform/openstack/hosts --flush-cache cluster.yml

Anything else we need to know:

journalctl -xeu kubelet

Background information
The infrastructure gets built on OpenStack.
Only the node has a floating IP. No separate bastion is used.

Using the latest tag and Kubernetes kube_version: v1.13.1, the same problem occurs.
Ansible output with fail using latest commit 5834e60
kubeadm-config.yaml

Can someone take a look at this kubeadm-config.yaml ?

This part looks weird to me (as generated, with no manual changes). Is it okay that the IPs are not delimited?

certSANs:

  • kubernetes
  • kubernetes.default
  • kubernetes.default.svc
  • kubernetes.default.svc.cluster.local
  • 10.233.0.1
  • localhost
  • 127.0.0.1
  • mykc-dev-k8s-master-nf-1
  • mykc-dev-k8s-master-nf-2
  • mykc-dev-k8s-master-nf-3
  • 11.0.0.7
  • 11.0.0.711.0.0.10
  • 11.0.0.1011.0.0.8
  • 11.0.0.810.248.114.4210.248.113.24
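The run-together values above look like a templating artifact: when a Jinja2 loop renders list items without any separator between them, consecutive IPs fuse into exactly this kind of string. A minimal sketch with plain jinja2 (hypothetical templates, not Kubespray's actual ones):

```python
from jinja2 import Template

ips = ["11.0.0.7", "11.0.0.10", "11.0.0.8"]

# Broken pattern: items emitted back-to-back with no separator,
# reproducing the "11.0.0.711.0.0.10..." symptom seen above.
broken = Template("{% for ip in ips %}{{ ip }}{% endfor %}").render(ips=ips)
print(broken)  # 11.0.0.711.0.0.1011.0.0.8

# Fixed pattern: one "- item" YAML list entry per line.
fixed = Template("{% for ip in ips %}- {{ ip }}\n{% endfor %}").render(ips=ips)
print(fixed)
```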


All 7 comments

Solved

The issue could be solved by patching [kubeadm-setup.yml](https://gist.github.com/harrycain72/f8640de950e54789c063762028ba5083).

As pointed out above, the reason for not being able to init the remaining two masters using kubeadm was the wrong configuration of certSANs.

In the original code the IPs were selected twice and not separated. The patch corrects this.
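The two defects described here, duplicated IPs and missing separators, can be sketched in plain Python. This is a hypothetical helper to illustrate the intended behavior, not Kubespray's actual code:

```python
def dedupe_sans(sans):
    """Drop duplicate certSAN entries while preserving order
    (the original template selected each IP twice)."""
    seen = set()
    out = []
    for san in sans:
        if san not in seen:
            seen.add(san)
            out.append(san)
    return out


def render_sans(sans):
    """Render certSANs as a YAML list fragment, one '- item' per line,
    so entries are properly delimited instead of run together."""
    return "\n".join("- {}".format(s) for s in dedupe_sans(sans))


print(render_sans(["11.0.0.7", "11.0.0.7", "11.0.0.10"]))
```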

With the above patch, everything works like a charm.

This issue should not be closed. I ran into the same issue. Perhaps the changes should be merged into master?

As more people are having this issue, I am reopening it.

same here...

I confirm this is still a problem in v2.8.4, submitted PR #4435 to fix this.

With PR #4435 the issue is fixed. Thanks a lot!

It does not solve the problem; I still have the same issue. One difference: I am running a fresh install on 4 physical Ubuntu 19 nodes.
