Kubespray: Missing node role after installation

Created on 1 May 2019 · 6 comments · Source: kubernetes-sigs/kubespray

After a clean installation on Ubuntu 18.04, I got the following result:

NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    1h        v1.14.1
node2     Ready     master    1h        v1.14.1
node3     Ready     <none>    1h        v1.14.1

Environment:

  • Cloud provider or hardware configuration:
    Full bare metal, without cloud provider.
  • OS:
    Linux 4.15.0-48-generic x86_64
    NAME="Ubuntu"
    VERSION="18.04.1 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.1 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic

  • Version of Ansible (ansible --version):
    ansible 2.7.8
    config file = /root/.../kubespray/ansible.cfg
    configured module search path = [u'/root/.../kubespray/library']
    ansible python module location = /usr/lib/python2.7/dist-packages/ansible
    executable location = /usr/bin/ansible
    python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]

Kubespray version (commit) (git rev-parse --short HEAD):
d6fd0d2

Network plugin used:
calico

Copy of your inventory file:

all:
  hosts:
    node1:
      ansible_host: 192.168.100.81
      access_ip: 192.168.100.81
      ip: 192.168.100.81
    node2:
      ansible_host: 192.168.100.82
      access_ip: 192.168.100.82
      ip: 192.168.100.82
    node3:
      ansible_host: 192.168.100.83
      access_ip: 192.168.100.83
      ip: 192.168.100.83
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-node:
        kube-master:
    calico-rr:
      hosts: {}

Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Output of ansible run:
https://gist.github.com/aiv/c89fa002a9a45ea171eacfab3158e6f0 (log from second run)

Label: kind/bug

All 6 comments

At some point, the contents of ./roles/kubernetes/node/templates/kubelet.kubeadm.env.j2 included node-role information inside the {# Kubelet node labels #} block.

However, I can no longer find any reference to node-role.kubernetes.io/node anywhere in the project, so it seems this was removed.

A workaround is to use the node_labels variable. But I liked the default node label; it makes sense for typical orchestration, AFAICT.
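For reference, a sketch of what the node_labels workaround might look like in the inventory's group_vars (the file path and group name here are illustrative; adjust to your layout). Note that newer Kubernetes versions restrict which node-role.kubernetes.io labels the kubelet may self-apply, so this works best on the older releases discussed in this thread:

```yaml
# inventory/mycluster/group_vars/kube-node.yml  (path is illustrative)
# Kubespray passes node_labels through to the kubelet's --node-labels
# flag, so each host in this group registers with the role label
# and `kubectl get nodes` shows "node" in the ROLES column.
node_labels:
  node-role.kubernetes.io/node: ""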

This seems to be intended, per commit 05dc2b3a097fda2ffff7a77f4ca843d0e41dec76. I assume @mattymo was referring to this labeling behaviour of _node_ nodes in the changelog entry:

  • Remove kubelet autolabel of kube-node (...)

Same thing here, the node role isn't showing up!

The label node-role.kubernetes.io/node is not used anywhere, which is why it was removed in that commit. Only the master label is used in Kubespray.
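If you just want the ROLES column to show something for the workers, you can set the label by hand after the install. This is plain kubectl, not something Kubespray manages; the node name node3 is taken from the inventory above:

```shell
# Add the role label manually; an empty value is enough for
# `kubectl get nodes` to display "node" in the ROLES column.
kubectl label node node3 node-role.kubernetes.io/node=

# Verify the result.
kubectl get nodes
```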

I see. I assumed (just like @aiv, I suppose) that it was a pseudo-standard label, but looking at kubeadm/kubernetes I understand that it isn't; only the .../master one is.

I'm closing the issue, since this is not an error. You just don't have the label, and because it isn't used for anything, its absence will not cause any errors.
