Kubespray: Ansible playbook fails to complete a task

Created on 13 Apr 2018 · 6 comments · Source: kubernetes-sigs/kubespray

BUG

Environment:
Linux 3.10.0-514.21.1.el7.x86_64 x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

ansible 2.4.2.0
config file = /root/kubespray/ansible.cfg
configured module search path = [u'/root/kubespray/library']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

Kubespray version (commit) (git rev-parse --short HEAD):
e95ba80

Copy of your inventory file:
[all]
node1 ansible_host=10.174.2.154 ip=10.174.2.154
node2 ansible_host=10.174.2.155 ip=10.174.2.155
node3 ansible_host=10.174.2.156 ip=10.174.2.156

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]

[vault]
node1
node2
node3

Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml

Output of ansible run:
TASK [kubernetes/preinstall : Stop if swap enabled] ************************************************************************
Friday 13 April 2018 08:12:17 -0700 (0:00:00.054) 0:00:13.960 **
fatal: [node1]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
fatal: [node2]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
fatal: [node3]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}

It looks like the Ansible cluster creation playbook is failing at this step. Can you please look into this issue?
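For context, the "Stop if swap enabled" check fails whenever the gathered fact ansible_swaptotal_mb is non-zero, i.e. the hosts still have swap active (kubelet refuses to start with swap enabled by default). A quick way to confirm which nodes are affected is to query the fact directly; assuming the same inventory path used above:

ansible all -i inventory/mycluster/hosts.ini -m setup -a "filter=ansible_swaptotal_mb"

Any host reporting a value greater than 0 will trip the assertion.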

lifecycle/rotten

Most helpful comment

Easy workaround in lieu of repairing the role:
ansible all -i inventory/mycluster/hosts.ini -m mount -a "name=swap fstype=swap state=absent" && ansible all -i inventory/mycluster/hosts.ini -a "/sbin/swapoff -a"

All 6 comments

Easy workaround in lieu of repairing the role:
ansible all -i inventory/mycluster/hosts.ini -m mount -a "name=swap fstype=swap state=absent" && ansible all -i inventory/mycluster/hosts.ini -a "/sbin/swapoff -a"
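The same workaround can also be expressed as a small one-off playbook, which is easier to rerun or keep next to the inventory. This is only a sketch with an assumed file name (disable-swap.yml); it does the same two things as the ad-hoc commands above: the mount module drops the swap entry from /etc/fstab so swap stays off across reboots, then swapoff turns it off immediately.

# disable-swap.yml -- sketch, assumed file name
- hosts: all
  become: true
  tasks:
    - name: Remove swap entry from /etc/fstab so swap stays off after reboot
      mount:
        name: swap
        fstype: swap
        state: absent

    - name: Turn swap off immediately
      command: /sbin/swapoff -a
      when: ansible_swaptotal_mb > 0

Run it with:
ansible-playbook -i inventory/mycluster/hosts.ini disable-swap.yml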

I get a crash at this point with Ubuntu hosts as well. It looks like I just have to comment out this check.
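For anyone tempted to edit the role instead: the failing task is an assert on the gathered swap fact, roughly along these lines (paraphrased; the exact file and wording may differ at commit e95ba80):

- name: Stop if swap enabled
  assert:
    that: ansible_swaptotal_mb == 0

Commenting it out only hides the symptom, since kubelet itself refuses to run with swap enabled by default (its --fail-swap-on flag). Disabling swap on the hosts, as in the workaround above, is the safer fix.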

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
