Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Environment:
printf "$(uname -srm)\n$(cat /etc/os-release)\n"):ansible --version):Kubespray version (commit) (git rev-parse --short HEAD):
270d21f5
Network plugin used:
flannel
Copy of your inventory file:
[all]
node1 ansible_host=1.1.1.1 ip=1.1.1.1
node2 ansible_host=2.2.2.2 ip=2.2.2.2
node3 ansible_host=3.3.3.3 ip=3.3.3.3
[kube-master]
node1
node2
[kube-node]
node1
node2
node3
[etcd]
node1
node2
node3
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
[vault]
node1
node2
node3
Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -u root --key-file ~/key
Output of ansible run:
TASK [vault : check_etcd | Check if etcd is up and reachable] **********************************************************
Thursday 29 March 2018 14:20:31 +0000 (0:00:01.702) 0:17:00.087 **
fatal: [node1]: FAILED! => {"msg": "The conditional check 'vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401' failed. The error was: error while evaluating conditional (vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401): 'dict object' has no attribute 'status'"}
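For reference, the failing check presumably looks something like the sketch below (a reconstruction for illustration, not the exact kubespray task; the variable names are assumptions). A uri health probe registers vault_etcd_health_check; if that probe is skipped or never produces a response on a given host, the registered dict has no status key, so the conditional itself blows up with exactly this error. Guarding the lookup with default() avoids that:

```yaml
# Illustrative reconstruction of the failing pattern, not the exact kubespray task.
- name: check_etcd | Check if etcd is up and reachable
  uri:
    url: "https://{{ etcd_access_address | default('localhost:2379') }}/health"
    validate_certs: false
  register: vault_etcd_health_check
  ignore_errors: true

# The conditional in the error dereferences .status unconditionally:
#   vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401
# If the probe was skipped on this host (e.g. gated by a 'when:' that was
# false), the registered dict carries only 'skipped: true' and no 'status'
# key, which raises "'dict object' has no attribute 'status'".
# A defensive version of the check:
- name: check_etcd | Fail when etcd is unreachable
  fail:
    msg: "etcd is not up and reachable"
  when: vault_etcd_health_check.status | default(-1) not in [200, 401]
```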
Anything else we need to know:
The install fails with cert_management: vault.
Even with much older commits, the error is the same.
I'm having the same issue.
What I can't understand is why the Vault status is checked at all, when the previous tasks are vault : stop vault-temp container and vault : check_vault | Attempt to pull local https Vault health:
fatal: [node1]: FAILED! => {"changed": false, "content": "", "msg": "Status code was -1 and not [200, 429, 500, 501, 503]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "https://localhost:8200/v1/sys/health"}
...ignoring
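The ...ignoring line is the important part: the health probe apparently runs with ignore_errors (or an equivalent), so the refused connection does not stop the play, and execution proceeds to the etcd check anyway. A minimal sketch of that pattern, with the task name and URL taken from the log above and everything else assumed:

```yaml
# Sketch of the ignored health probe; task name and URL mirror the log above,
# the rest is assumed for illustration.
- name: check_vault | Attempt to pull local https Vault health
  uri:
    url: "https://localhost:8200/v1/sys/health"
    validate_certs: false
  register: vault_local_health
  ignore_errors: true   # prints "...ignoring" and lets the play continue

# Because this failure is swallowed, later tasks such as check_etcd still run
# and have to tolerate incomplete registered results themselves, e.g.:
#   when: vault_local_health.status | default(-1) == 200
```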
Seems related to https://github.com/kubernetes-incubator/kubespray/issues/2712
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.