Kubespray: Kubespray setup with IPv6 addresses throws "ip in ansible_all_ipv4_addresses" assertion

Created on 10 Jul 2018 · 5 Comments · Source: kubernetes-sigs/kubespray

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

I have 3 VMs, each with both an IPv4 and an IPv6 address. I am trying to bring up a K8s cluster using the IPv6 addresses. In the hosts.ini file I have provided only IPv6 addresses. The Kubespray script runs, but fails with an assertion about IPv4 addresses.
I have run the same setup with IPv4 addresses and it successfully sets up a K8s cluster. Am I missing a setting or prerequisite step for IPv6?

TASK [kubernetes/preinstall : Stop if ip var does not match local ips] *********************************
task path: /packaging/kubespray-master/roles/kubernetes/preinstall/tasks/verify-settings.yml:78
Monday 09 July 2018 22:14:25 -0700 (0:00:00.149) 0:05:18.477 ***
fatal: [helm-worker02]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}
fatal: [helm-worker01]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}
fatal: [helm-master01]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}
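The failure is mechanical: `ansible_all_ipv4_addresses` only ever contains IPv4 strings, so an IPv6 `ip=` value can never satisfy the membership test. A quick Python sketch (using the sample addresses from this report; the fact values are illustrative, not real Ansible output) shows the problem and a version-aware lookup:

```python
import ipaddress

# Facts roughly as Ansible would report them on a dual-stack host
# (sample values taken from the ifconfig output in this report).
ansible_all_ipv4_addresses = ["10.30.8.97"]
ansible_all_ipv6_addresses = ["fe80::250:56ff:fe89:511a", "2001:420:28f:2032::100"]

ip = "2001:420:28f:2032::100"  # the ip= value from hosts.ini

# The failing assertion: an IPv6 string is never in the IPv4 fact list.
print(ip in ansible_all_ipv4_addresses)  # False

# A version-aware check picks the matching fact list instead.
fact = (ansible_all_ipv4_addresses
        if ipaddress.ip_address(ip).version == 4
        else ansible_all_ipv6_addresses)
print(ip in fact)  # True
```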

  • Running ifconfig on the master VM:
    ens162: flags=4163 mtu 1500
    inet 10.30.8.97 netmask 255.255.255.0 broadcast 10.30.8.255
    inet6 fe80::250:56ff:fe89:511a prefixlen 64 scopeid 0x20
    inet6 2001:420:28f:2032::100 prefixlen 64 scopeid 0x0
    ...

Environment:

  • Cloud provider or hardware configuration:
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Running Kubespray from a Mac - Darwin 17.6.0 x86_64
    Targets are CentOS 7 based VMs (1 master, 2 workers)

  • Version of Ansible (ansible --version):
    ansible 2.5.5

Kubespray version (commit) (git rev-parse --short HEAD):
da0004f

Network plugin used:
calico

Copy of your inventory file:
helm-worker02 ansible_ssh_host="2001:420:28f:2032::102" ip="2001:420:28f:2032::102"
helm-worker01 ansible_ssh_host="2001:420:28f:2032::101" ip="2001:420:28f:2032::101"
helm-master01 ansible_ssh_host="2001:420:28f:2032::100" ip="2001:420:28f:2032::100"

[kube-master]
helm-master01

[kube-node]
helm-worker02
helm-worker01

[calico-rr]

[etcd:children]
kube-master
kube-node

[k8s-cluster:children]
kube-master
kube-node

[vault:children]
kube-master
kube-node

Command used to invoke ansible:
ansible-playbook -i hosts.ini cluster.yaml

Output of ansible run:

TASK [kubernetes/preinstall : Guarantee that enough network address space is available for all pods] ***********************
task path: /packaging/kubespray-master/roles/kubernetes/preinstall/tasks/verify-settings.yml:69
Monday 09 July 2018 22:14:25 -0700 (0:00:00.308) 0:05:18.328 ***
fatal: [helm-worker02]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'kubelet_max_pods' is undefinednnThe error appears to have been in '/packaging/kubespray-master/roles/kubernetes/preinstall/tasks/verify-settings.yml': line 69, column 3, but maynbe elsewhere in the file depending on the exact syntax problem.nnThe offending line appears to be:nn# NOTICE: the check blatantly ignores the inet6-casen- name: Guarantee that enough network address space is available for all podsn ^ heren"
}
...ignoring
fatal: [helm-worker01]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'kubelet_max_pods' is undefinednnThe error appears to have been in '/packaging/kubespray-master/roles/kubernetes/preinstall/tasks/verify-settings.yml': line 69, column 3, but maynbe elsewhere in the file depending on the exact syntax problem.nnThe offending line appears to be:nn# NOTICE: the check blatantly ignores the inet6-casen- name: Guarantee that enough network address space is available for all podsn ^ heren"
}
...ignoring
skipping: [helm-master01] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
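The "'kubelet_max_pods' is undefined" failure above suggests the variable never reached the play vars on the worker hosts. One way to pin it explicitly is a group_vars entry; the variable name comes straight from the error message, but the path and the value 110 (Kubespray's usual default) are assumptions to verify against your checkout:

```yaml
# inventory/<your-inventory>/group_vars/k8s-cluster.yml (path is illustrative)
# Explicitly define the maximum pods per node so the preinstall
# address-space check has a value to work with.
kubelet_max_pods: 110
```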

TASK [kubernetes/preinstall : Stop if ip var does not match local ips] *********************************
task path: /packaging/kubespray-master/roles/kubernetes/preinstall/tasks/verify-settings.yml:78
Monday 09 July 2018 22:14:25 -0700 (0:00:00.149) 0:05:18.477 ***
fatal: [helm-worker02]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}
fatal: [helm-worker01]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}
fatal: [helm-master01]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false
}

Anything else we need to know:

All 5 comments

Removed the ip= part from the host definitions. Before:
helm-worker02 ansible_ssh_host="2001:420:28f:2032::102" ip="2001:420:28f:2032::102"
helm-worker01 ansible_ssh_host="2001:420:28f:2032::101" ip="2001:420:28f:2032::101"
helm-master01 ansible_ssh_host="2001:420:28f:2032::100" ip="2001:420:28f:2032::100"

My hosts.ini now looks like
helm-worker02 ansible_ssh_host="2001:420:28f:2032::102"
helm-worker01 ansible_ssh_host="2001:420:28f:2032::101"
helm-master01 ansible_ssh_host="2001:420:28f:2032::100"

With this change the assertion went away. The script proceeds further and I no longer see the "ip in ansible_all_ipv4_addresses" assertion failure.

This is still happening...

Same here.

But it does not occur in all environments:
e.g. CentOS 7.7 on-premise KVM works fine, while
CentOS 7 from Hetzner Cloud needs the workaround posted by @dhurtakolha.

kubespray: release-2.12

@dhurtakolha @pathcl @lz006 I ran into the same problem. Can you try this patch: https://github.com/kubernetes-sigs/kubespray/pull/5773 and provide feedback?

Also, for this particular bug report: putting an IPv6 address in ip= and then comparing it against an IPv4-only fact list is never going to pass.
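One way the check could be made address-family aware is to assert against the union of both fact lists. This is only a sketch of the idea (not necessarily what PR 5773 does); the fact names are standard Ansible, but the exact task wording is assumed:

```yaml
# Sketch of an IPv6-aware version of the failing task in
# roles/kubernetes/preinstall/tasks/verify-settings.yml.
- name: Stop if ip var does not match local ips
  assert:
    that: ip in (ansible_all_ipv4_addresses + ansible_all_ipv6_addresses)
  when: ip is defined
```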
