Kubespray: Install fails with cert_management: vault

Created on 29 Mar 2018 · 6 comments · Source: kubernetes-sigs/kubespray

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Environment:

  • Cloud provider or hardware configuration:
    Bare metal
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 4.4.0-116-generic x86_64
    NAME="Ubuntu"
    VERSION="16.04.4 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.4 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial
  • Version of Ansible (ansible --version):
    ansible 2.5.0
    config file = /kubespray/ansible.cfg
    configured module search path = [u'/kubespray/library']
    ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
    executable location = /usr/local/bin/ansible
    python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]

Kubespray version (commit) (git rev-parse --short HEAD):
270d21f5

Network plugin used:
flannel

Copy of your inventory file:
[all]
node1 ansible_host=1.1.1.1 ip=1.1.1.1
node2 ansible_host=2.2.2.2 ip=2.2.2.2
node3 ansible_host=3.3.3.3 ip=3.3.3.3

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]

[vault]
node1
node2
node3

Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -u root --key-file ~/key

Output of ansible run:

TASK [vault : check_etcd | Check if etcd is up and reachable] **********************************************************
Thursday 29 March 2018 14:20:31 +0000 (0:00:01.702) 0:17:00.087 **
fatal: [node1]: FAILED! => {"msg": "The conditional check 'vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401' failed. The error was: error while evaluating conditional (vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401): 'dict object' has no attribute 'status'"}

Anything else do we need to know:

The install fails with cert_management: vault.
The error is the same even with much older commits.
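
For reference, the failure message means the registered result of the check_etcd task has no status key at all, so the conditional cannot even be evaluated; that is typically what happens when the HTTP probe never produced a result on that host (for example, because it was skipped). Below is a minimal sketch of a more defensive version of such a check, using a hypothetical play, URL, and variable names rather than the actual Kubespray role:

# Hypothetical sketch, not the Kubespray role: probe etcd over HTTPS and only
# evaluate the status code when the request actually produced one.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: check_etcd | Check if etcd is up and reachable (sketch)
      uri:
        url: "https://127.0.0.1:2379/health"   # assumed etcd client URL
        validate_certs: false
      register: etcd_health_check
      ignore_errors: true

    - name: Proceed only when etcd answered with an expected status
      debug:
        msg: "etcd is reachable"
      # Guarding on 'is defined' avoids the "'dict object' has no attribute
      # 'status'" error when the registered result carries no status key.
      when:
        - etcd_health_check.status is defined
        - etcd_health_check.status == 200 or etcd_health_check.status == 401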

lifecycle/rotten


All 6 comments

I'm having the same issue.

What I can't understand is why the Vault status is checked when the preceding tasks are "vault : stop vault-temp container" and "vault : check_vault | Attempt to pull local https Vault health":

fatal: [node1]: FAILED! => {"changed": false, "content": "", "msg": "Status code was -1 and not [200, 429, 500, 501, 503]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "https://localhost:8200/v1/sys/health"}
...ignoring
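
The "...ignoring" line indicates that this probe is allowed to fail by design: the play registers the result of the local Vault health request and carries on even when nothing is listening on port 8200 yet. A rough sketch of that pattern, with hypothetical names rather than the actual role code:

# Hypothetical sketch of the "probe and ignore" pattern behind the
# "...ignoring" line: the health check is registered but allowed to fail.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: check_vault | Attempt to pull local https Vault health (sketch)
      uri:
        url: "https://localhost:8200/v1/sys/health"
        validate_certs: false
        # Vault reports sealed/standby states via 429/500/501/503, so those
        # are accepted alongside 200.
        status_code: [200, 429, 500, 501, 503]
      register: vault_local_health
      ignore_errors: true   # the play continues even if the connection is refused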

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
