Kubespray: Deployment on OpenStack fails if "ansible_host" is different from the "ip" variable.

Created on 24 Oct 2017  ·  10 comments  ·  Source: kubernetes-sigs/kubespray

Kubespray recently introduced some preinstall checks. One of them checks whether the
"ip" var differs from the "ansible_host" one. It stops my installation on OpenStack,
because I use floating IPs to connect to the machines and private-network IPs
for service binding.

The task was added in ./roles/kubernetes/preinstall/tasks/verify-settings.yml.
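Based on the assertion string in the failure output further down, the check is roughly the following Ansible task (a sketch reconstructed from the error message; the exact wording in verify-settings.yml may differ):

```yaml
# Hypothetical reconstruction of the preinstall check: it fails when the
# inventory "ip" var is not one of the addresses actually configured on
# the host (as gathered into ansible_all_ipv4_addresses).
- name: Stop if ip var does not match local ips
  assert:
    that: ip in ansible_all_ipv4_addresses
  when: ip is defined
```

This is why a floating IP cannot be used as `ip`: OpenStack NATs floating IPs outside the instance, so they never appear among the host's local addresses.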

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Environment:

  • Cloud provider or hardware configuration:
    OpenStack
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Ubuntu 16.04

  • Version of Ansible (ansible --version):
    2.3.1.0

Kubespray version (commit) (git rev-parse --short HEAD):
master

Network plugin used:
flannel

Copy of your inventory file:

[all]
k8s-node-001 ansible_host=10.91.60.98 ip=10.10.128.4
k8s-node-002 ansible_host=10.91.60.90 ip=10.10.128.12
k8s-node-003 ansible_host=10.91.60.104 ip=10.10.128.8

[kube-master]
k8s-node-001
k8s-node-002
k8s-node-003

[kube-node]
k8s-node-001
k8s-node-002
k8s-node-003

[etcd]
k8s-node-001
k8s-node-002
k8s-node-003

[k8s-cluster:children]
kube-node
kube-master

Command used to invoke ansible:
ansible-playbook main.yaml -i inventory [email protected] -u ubuntu -b

Output of ansible run:

...

TASK [kubernetes/preinstall : Stop if ip var does not match local ips] ************************************************************************************************************************************************************************
Tuesday 24 October 2017  11:34:07 +0200 (0:00:00.033)       0:00:05.163 *******
fatal: [k8s-node-001]: FAILED! => {
    "assertion": "ip in ansible_all_ipv4_addresses",
    "changed": false,
    "evaluated_to": false,
    "failed": true
}
fatal: [k8s-node-002]: FAILED! => {
    "assertion": "ip in ansible_all_ipv4_addresses",
    "changed": false,
    "evaluated_to": false,
    "failed": true
}
fatal: [k8s-node-003]: FAILED! => {
    "assertion": "ip in ansible_all_ipv4_addresses",
    "changed": false,
    "evaluated_to": false,
    "failed": true
}
        to retry, use: --limit @/home/st4nson/git/ess_pb-fms/ansible/cluster_up.retry

PLAY RECAP ************************************************************************************************************************************************************************************************************************************
k8s-node-001               : ok=16   changed=0    unreachable=0    failed=1
k8s-node-002               : ok=14   changed=0    unreachable=0    failed=1
k8s-node-003               : ok=14   changed=0    unreachable=0    failed=1
...

Anything else we need to know:

kind/support

Most helpful comment

Try clearing your cache and trying again: add the --flush-cache option to the ansible-playbook command.

All 10 comments

The check verifies that ip is actually a local IP address, so it's not a bug. You can't tell etcd to bind to the floating IP address. You should set access_ip instead to specify the floating IP, like this:
k8s-node-001 ansible_host=10.91.60.98 access_ip=10.10.128.4
k8s-node-002 ansible_host=10.91.60.90 access_ip=10.10.128.12
k8s-node-003 ansible_host=10.91.60.104 access_ip=10.10.128.8

Thanks for the fast reply, I'll test that.
Just to clarify the inventory file above: the ansible_host IPs are my floating addresses, and the ip values are the private ones that services bind to.
So in my case, a valid inventory file should look like this?

k8s-node-001 access_ip=10.91.60.98 ip=10.10.128.4
k8s-node-002 access_ip=10.91.60.90 ip=10.10.128.12
k8s-node-003 access_ip=10.91.60.104 ip=10.10.128.8

ip should be set to the private IP address, as you said. ansible_host and access_ip should point to the floating addresses.

ansible_ip is not a variable that is used in Kubespray, so you can remove that.
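Putting the variable roles together, a working [all] section for this setup would look something like the sketch below (floating addresses for SSH access and for advertising, private addresses for binding; the addresses are the ones from this thread). Note that a stale fact cache can still trigger the assertion even with a correct inventory, as the rest of the thread shows:

```
[all]
k8s-node-001 ansible_host=10.91.60.98  access_ip=10.91.60.98  ip=10.10.128.4
k8s-node-002 ansible_host=10.91.60.90  access_ip=10.91.60.90  ip=10.10.128.12
k8s-node-003 ansible_host=10.91.60.104 access_ip=10.91.60.104 ip=10.10.128.8
```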

ansible_ip was a typo ;) I've corrected it.

I still get the same error with the following inventory :/

k8s-node-001 ansible_ssh_host=10.91.60.106 access_ip=10.91.60.106 ip=10.10.128.11
k8s-node-002 ansible_ssh_host=10.91.60.91  access_ip=10.91.60.91  ip=10.10.128.8
k8s-node-003 ansible_ssh_host=10.91.60.81  access_ip=10.91.60.81  ip=10.10.128.5

Any clue?

Try clearing your cache and trying again: add the --flush-cache option to the ansible-playbook command.
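If the inventory is correct but the assertion still fires, stale cached facts (for example, an ansible_all_ipv4_addresses value gathered before the inventory was fixed) are a likely cause. Besides --flush-cache, a JSON-file fact cache can be cleared by hand; a minimal sketch, assuming fact_caching_connection is set to /tmp/ansible_facts in ansible.cfg (the path and node filename are hypothetical):

```shell
# Simulate a stale JSON fact cache and clear it by hand.
# /tmp/ansible_facts stands in for whatever fact_caching_connection points at.
mkdir -p /tmp/ansible_facts
echo '{"ansible_all_ipv4_addresses": ["10.10.128.99"]}' > /tmp/ansible_facts/k8s-node-001

# Drop the stale facts; the next playbook run regathers them fresh.
rm -rf /tmp/ansible_facts
test ! -d /tmp/ansible_facts && echo "cache cleared"
```

The equivalent one-shot fix is simply adding --flush-cache to the ansible-playbook invocation, which tells Ansible to discard cached facts before the run.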

@mattymo thanks for help. It worked.

I had the same issue, but none of your solutions worked.
When my /etc/etcd.env gets rendered, something really weird happens:

ETCD_DATA_DIR=/var/lib/etcd
ETCD_ADVERTISE_CLIENT_URLS=https://172.16.0.23:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.0.23:2380
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_METRICS=basic
ETCD_LISTEN_CLIENT_URLS=https://kube-test-2.hydra.staging:2379,https://127.0.0.1:2379
ETCD_ELECTION_TIMEOUT=5000
ETCD_HEARTBEAT_INTERVAL=250
ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
ETCD_LISTEN_PEER_URLS=https://kube-test-2.hydra.staging:2380
ETCD_NAME=etcd2
ETCD_PROXY=off
ETCD_INITIAL_CLUSTER=etcd1=https://172.16.0.22:2380,etcd2=https://172.16.0.23:2380,etcd3=https://172.16.0.24:2380
ETCD_AUTO_COMPACTION_RETENTION=8

Some of the IPs get translated into FQDNs, which etcd doesn't like when starting.

Try clearing your cache and trying again: add the --flush-cache option to the ansible-playbook command.

Thank you so much! I had been struggling with errors for two hours and this solved it.

Maybe add this to the documentation?

