/kind bug
TASK [deploy warning for non kubeadm] *********************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not kubeadm_enabled and not skip_non_kubeadm_warning' failed. The error was: error while evaluating conditional (not kubeadm_enabled and not skip_non_kubeadm_warning): 'kubeadm_enabled' is undefined\n\nThe error appears to have been in '/home/thoth/.kubash/submodules/kubespray/cluster.yml': line 18, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: deploy warning for non kubeadm\n ^ here\n"}
to retry, use: --limit @/home/thoth/.kubash/submodules/kubespray/cluster.retry
Environment:
baremetal
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
root@extetcdingress1:~# printf "$(uname -srm)\n$(cat /etc/os-release)\n"
Linux 4.4.0-131-generic x86_64
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Version of Ansible (ansible --version):
ansible 2.7.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Kubespray version (commit) (git rev-parse --short HEAD):
edfec269
Network plugin used:
calico
Copy of your inventory file:
[all]
kubespraymaster1 ip=10.0.23.112 etcd_member_name=kubespraymaster1 ansible_ssh_host=10.0.23.112 ansible_ssh_port=22 ansible_user=root
kubespraymaster2 ip=10.0.23.113 etcd_member_name=kubespraymaster2 ansible_ssh_host=10.0.23.113 ansible_ssh_port=22 ansible_user=root
kubespraymaster3 ip=10.0.23.114 etcd_member_name=kubespraymaster3 ansible_ssh_host=10.0.23.114 ansible_ssh_port=22 ansible_user=root
kubesprayetcd1 ip=10.0.23.115 etcd_member_name=kubesprayetcd1 ansible_ssh_host=10.0.23.115 ansible_ssh_port=22 ansible_user=root
kubesprayetcd2 ip=10.0.23.116 etcd_member_name=kubesprayetcd2 ansible_ssh_host=10.0.23.116 ansible_ssh_port=22 ansible_user=root
kubesprayetcd3 ip=10.0.23.117 etcd_member_name=kubesprayetcd3 ansible_ssh_host=10.0.23.117 ansible_ssh_port=22 ansible_user=root
kubespraynode1 ip=10.0.23.118 etcd_member_name=kubespraynode1 ansible_ssh_host=10.0.23.118 ansible_ssh_port=22 ansible_user=root
kubespraynode2 ip=10.0.23.119 etcd_member_name=kubespraynode2 ansible_ssh_host=10.0.23.119 ansible_ssh_port=22 ansible_user=root
kubespraynode3 ip=10.0.23.120 etcd_member_name=kubespraynode3 ansible_ssh_host=10.0.23.120 ansible_ssh_port=22 ansible_user=root
kubesprayingress1 ip=10.0.23.127 etcd_member_name=kubesprayingress1 ansible_ssh_host=10.0.23.127 ansible_ssh_port=22 ansible_user=root
[kube-node]
kubespraynode1
kubespraynode2
kubespraynode3
[kube-node:vars]
ansible_ssh_extra_args="-o StrictHostKeyChecking=no"
[calico-rr]
kubespraymaster1
kubespraymaster2
kubespraymaster3
[kube-master]
kubespraymaster1
kubespraymaster2
kubespraymaster3
[kube-master:vars]
ansible_ssh_extra_args="-o StrictHostKeyChecking=no"
[etcd]
kubesprayetcd1
kubesprayetcd2
kubesprayetcd3
[vault]
kubesprayetcd1
kubesprayetcd2
kubesprayetcd3
[ingress]
kubesprayingress1
[k8s-cluster:children]
kube-node
kube-master
Command used to invoke ansible:
ansible-playbook \
-i hosts \
-e kube_version=v1.12.2 \
kubespray/cluster.yml
Output of ansible run:
PLAY [localhost] ******************************************************************************************************************************
TASK [Check ansible version !=2.7.0] **********************************************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY [localhost] ******************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [localhost]
TASK [deploy warning for non kubeadm] *********************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not kubeadm_enabled and not skip_non_kubeadm_warning' failed. The error was: error while evaluating conditional (not kubeadm_enabled and not skip_non_kubeadm_warning): 'kubeadm_enabled' is undefined\n\nThe error appears to have been in '/home/thoth/.kubash/submodules/kubespray/cluster.yml': line 18, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: deploy warning for non kubeadm\n ^ here\n"}
to retry, use: --limit @/home/thoth/.kubash/submodules/kubespray/cluster.retry
PLAY RECAP ************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1
Anything else we need to know:
collect-info fails:
ansible-playbook -i clusters/kubespray/hosts submodules/kubespray/scripts/collect-info.yaml -e dir=`pwd` -u root -e ansible_ssh_user=root -b --become-user=root &> /tmp/ansiblelog
full log of playbook run for collect-info here:
Same issue
It is defined here:
https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/group_vars/all/all.yml#L50
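For reference, the variable the failing conditional expects is declared in that sample file. The entry looks roughly like this (an approximate excerpt, not copied from your inventory):

```yaml
# inventory/sample/group_vars/all/all.yml (excerpt, approximate)
kubeadm_enabled: true
```

If this file is not part of the inventory you pass with -i, the variable is undefined when cluster.yml evaluates its conditional.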
Is your code cloned from master?
@riverzhang at that point it was from master, at commit edfec269.
I am pulling to commit deff6a82faceb1e5b6efb5a13300a1da44b3be45 now.
@riverzhang and it fails at the same point:
TASK [deploy warning for non kubeadm] *********************************************************************************************************fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not kubeadm_enabled and not skip_non_kubeadm_warning' failed. The error was: error while evaluating conditional (not kubeadm_enabled and not skip_non_kubeadm_warning): 'kubeadm_enabled' is undefined\n\nThe error appears to
have been in '/home/thoth/.kubash/submodules/kubespray/cluster.yml': line 18, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: deploy warning for non kubeadm\n ^ here\n"}
to retry, use: --limit @/home/thoth/.kubash/submodules/kubespray/cluster.retry
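The underlying failure is that the play evaluates kubeadm_enabled before any group_vars file defining it has been loaded. A defensive sketch of such a task (hypothetical, not the actual kubespray source) guards each variable with a default() filter so an undefined variable falls back to false instead of raising an error:

```yaml
# Hypothetical hardened version of the warning task.
- name: deploy warning for non kubeadm
  debug:
    msg: "Non-kubeadm deployment is deprecated; kubeadm is recommended."
  when:
    - not (kubeadm_enabled | default(false))
    - not (skip_non_kubeadm_warning | default(false))
```

With this pattern the task degrades gracefully when the inventory lacks group_vars, though the proper fix is still to define the variable.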
I think your command should be:
ansible-playbook -i inventory/kubespray/hosts cluster.yml
@riverzhang I am not certain what you are suggesting should be different. That command does not set the kubeadm_enabled var, and the inventory file you point to does not exist in the kubespray repo relative to cluster.yml:
ls inventory/kubespray
ls: cannot access 'inventory/kubespray': No such file or directory
The command I have been using looks like this:
ansible-playbook \
-i /home/thoth/.kubash/clusters/kubespray/hosts \
-e kubeadm_enabled=true \
/home/thoth/.kubash/submodules/kubespray/cluster.yml
@joshuacox Have you copied the group_vars directory from the sample inventory to /home/thoth/.kubash/clusters/kubespray/ ? If not, you should especially if you need to tune your deployment afterwards
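The copy being suggested can be sketched as follows. The /tmp layout below only simulates a kubespray checkout and a cluster inventory directory (stand-in paths, not the real ones); the one line that matters is the cp of group_vars next to the hosts file:

```shell
set -e
# Simulate a kubespray checkout containing the sample inventory (stand-in paths).
mkdir -p /tmp/ks-demo/kubespray/inventory/sample/group_vars/all
echo 'kubeadm_enabled: true' \
  > /tmp/ks-demo/kubespray/inventory/sample/group_vars/all/all.yml
# Simulate your own cluster inventory directory, holding only a hosts file.
mkdir -p /tmp/ks-demo/clusters/kubespray
touch /tmp/ks-demo/clusters/kubespray/hosts
# The actual fix: copy group_vars from the sample inventory alongside hosts.
cp -rfp /tmp/ks-demo/kubespray/inventory/sample/group_vars \
  /tmp/ks-demo/clusters/kubespray/
# Verify the variable is now visible next to the inventory file.
grep kubeadm_enabled /tmp/ks-demo/clusters/kubespray/group_vars/all/all.yml
```

With the real paths, Ansible picks up group_vars adjacent to the inventory file passed via -i, so kubeadm_enabled becomes defined.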
@mirwan I have not been copying that directory historically, but I will test that now
I am trying kubespray from this commit: https://github.com/kubernetes-sigs/kubespray/commit/0a19d1bf010330dfd9099ff68bfe89e066321e8e
and am getting this:
2018-12-06T19:46:03 ▶ DEBU fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not kubeadm_enabled and not skip_non_kubeadm_warning' failed. The error was: error while evaluating conditional (not kubeadm_enabled and not skip_non_kubeadm_warning): 'kubeadm_enabled' is undefined\n\nThe error appears to have been in '/root/kubespray/cluster.yml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: deploy warning for non kubeadm\n ^ here\n"}
I've been trying to follow the discussion in this issue, but I don't understand.
@joshuacox did you find a solution or workaround to the problem?
@rodrigc no, I have not seen a successful build with kubespray for a while now.
Hey guys, I ran into this issue, and it turned out that I didn't have the group_vars directory set up correctly for my ansible inventory.
Could you post your file tree so I can take a look?
I was seeing this problem as well, but in the upgrade-cluster.yml playbook; I am upgrading two things.
I ran into the same error and found that the group_vars/all/all.yml file in my cluster inventory folder did not have the kubeadm variable set to true; it was commented out altogether.
Another problem new to 2.8 that I had not seen before was that a sudo password was required; neither -b nor --become worked with Ansible. A simple change to the sudo config, so that the wheel group does not need a password, resolved it.
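The sudo change mentioned there is the standard passwordless rule for the wheel group. A one-line sketch (edit with visudo; the drop-in filename is an arbitrary choice):

```text
# /etc/sudoers.d/wheel-nopasswd  (edit with: visudo -f /etc/sudoers.d/wheel-nopasswd)
%wheel ALL=(ALL) NOPASSWD: ALL
```

With this in place, -b / --become can escalate without prompting for a password.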
Please read the README: https://github.com/kubernetes-sigs/kubespray#ansible
/close
@riverzhang: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am checked out at:
commit 9051aa5296ef76fcff69a2e3827cef28752aa475
Author: Rong Zhang <[email protected]>
Date: Tue Dec 4 15:01:32 2018 +0800
Fix ubuntu-contiv test failed (#3808)
netchecker agent status is pending
I had the same issue. I had passed the ansible-playbook parameters in the wrong order. Maybe this is also the problem with your invocations...
# NOT WORKING
ansible-playbook cluster.yml --become --become-user=root -i $ANSIBLE_INVENTORY \
-e @../custom_install_vars.yml \
-e cluster_name=$CLUSTER_BASE_ENV
# WORKING
ansible-playbook -i $ANSIBLE_INVENTORY \
-e @../custom_install_vars.yml \
-e cluster_name=$CLUSTER_BASE_ENV \
--become --become-user=root cluster.yml
Same here.
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml -b
PLAY [localhost] ***************************************************************
TASK [Check ansible version !=2.7.0] *******************************************
Friday 05 April 2019 15:46:13 +0630 (0:00:00.026) 0:00:00.026 **********
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY [localhost] ***************************************************************
TASK [deploy warning for non kubeadm] ******************************************
Friday 05 April 2019 15:46:13 +0630 (0:00:00.034) 0:00:00.060 **********
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not kubeadm_enabled and not skip_non_kubeadm_warning' failed. The error was: error while evaluating conditional (not kubeadm_enabled and not skip_non_kubeadm_warning): 'kubeadm_enabled' is undefined\n\nThe error appears to have been in '/home/bim/workspace/kubespray/cluster.yml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: deploy warning for non kubeadm\n ^ here\n"}
to retry, use: --limit @/home/bim/workspace/kubespray/cluster.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
Friday 05 April 2019 15:46:13 +0630 (0:00:00.015) 0:00:00.076 **********
===============================================================================
Check ansible version !=2.7.0 ------------------------------------------- 0.03s
deploy warning for non kubeadm ------------------------------------------ 0.02s