kubespray installation on hetzner / ubuntu 18.04 fails

Created on 15 May 2019 · 10 comments · Source: kubernetes-sigs/kubespray

Environment:

  • Cloud provider or hardware configuration:
    hetzner cx11 * 3 servers
    Example:
hcloud server create --datacenter nbg1-dc3 --image ubuntu-18.04 --name k8s-master-1  --type cx11
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Darwin 18.5.0 x86_64
    Ubuntu 18.04

  • Version of Ansible (ansible --version):

ansible 2.7.10
  config file = k8s/kubespray/ansible.cfg
  configured module search path = ['k8s/kubespray/library']
  ansible python module location = k8s/cluster/lib/python3.6/site-packages/ansible
  executable location = k8s/cluster/bin/ansible
  python version = 3.6.5 (default, Mar 30 2018, 06:42:10) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]

Kubespray version (commit) (git rev-parse --short HEAD):
3f62492a

Network plugin used:
default

Copy of your inventory file:

all:
  hosts:
    node1:
      ansible_host: 195.201.18.72
      ip: 195.201.18.72
      access_ip: 195.201.18.72
    node2:
      ansible_host: 116.203.152.98
      ip: 116.203.152.98
      access_ip: 116.203.152.98
    node3:
      ansible_host: 116.203.176.171
      ip: 116.203.176.171
      access_ip: 116.203.176.171
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Command used to invoke ansible:

ansible-playbook  --user root  -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Output of ansible run:

https://gist.github.com/alexanderkjeldaas/6b7d354b59ab3801c8da0d5b69bf7bf9

Anything else do we need to know:

By default, Kubespray tries to SSH into the nodes as the local user running the playbook. That's why I passed `--user root`, to avoid those failures. I don't see why Kubespray should assume that the username of the person invoking the command matches any user configured on the servers. Maybe the README.md should mention this?
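As an alternative to passing `--user root` on every invocation, the connection user can also be pinned in the inventory itself (a sketch against the inventory above; `ansible_user` is a standard Ansible connection variable, not anything Kubespray-specific):

all:
  hosts:
    node1:
      ansible_host: 195.201.18.72
      ip: 195.201.18.72
      access_ip: 195.201.18.72
      # Connect as root instead of defaulting to the local username.
      ansible_user: root

Setting `ansible_user` once under a `vars:` section of the `all` group would apply it to every host.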

kind/bug

All 10 comments

I added some vars that set the package manager to apt, and the run would continue.

However, it would then fail on `import zipfile`, because the zipfile module was not found on the nodes.

Fixing that required running `apt-get install sudo python-minimal python-setuptools` on all the nodes, although there seems to be an install-python task that should already handle this.

I just sprayed some variables everywhere to fix this. I have no idea which of them actually made a difference, but these two changes made it possible to install the cluster.

diff --git a/cluster.yml b/cluster.yml
index 1ee5fc2b..fc175ca9 100644
--- a/cluster.yml
+++ b/cluster.yml
@@ -12,12 +12,18 @@
         - check
   vars:
     ansible_connection: local
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: bastion[0]
   gather_facts: False
   roles:
     - { role: kubespray-defaults}
     - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster:etcd:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -25,6 +31,9 @@
   roles:
     - { role: kubespray-defaults}
     - { role: bootstrap-os, tags: bootstrap-os}
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster:etcd:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -38,6 +47,9 @@
       delegate_facts: true
       with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
       run_once: true
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster:etcd:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -47,18 +59,27 @@
     - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
     - { role: download, tags: download, when: "not skip_downloads" }
   environment: "{{ proxy_env }}"
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: etcd
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -66,6 +87,9 @@
     - { role: kubespray-defaults}
     - { role: kubernetes/node, tags: node }
   environment: "{{ proxy_env }}"
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -74,6 +98,9 @@
     - { role: kubernetes/master, tags: master }
     - { role: kubernetes/client, tags: client }
     - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -81,6 +108,9 @@
     - { role: kubespray-defaults}
     - { role: kubernetes/kubeadm, tags: kubeadm}
     - { role: network_plugin, tags: network }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: kube-master[0]
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -88,6 +118,9 @@
     - { role: kubespray-defaults}
     - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
     - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -97,12 +130,18 @@
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }
     - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
     - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: network_plugin/calico/rr, tags: network }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -110,9 +149,15 @@
     - { role: kubespray-defaults}
     - { role: kubernetes-apps, tags: apps }
   environment: "{{ proxy_env }}"
+  vars:
+    ansible_facts:
+      pkg_mgr: apt

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
+  vars:
+    ansible_facts:
+      pkg_mgr: apt
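If the `pkg_mgr` fact override really is what unblocks the run, the same effect could presumably be achieved without patching every play, by passing the fact once on the command line (an untested sketch; extra vars take the highest precedence in Ansible, and `-e` accepts inline JSON):

ansible-playbook --user root -i inventory/mycluster/hosts.yml \
  --become --become-user=root \
  -e '{"ansible_facts": {"pkg_mgr": "apt"}}' \
  cluster.yml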

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Same problem here.
An additional bootstrap step is needed with the Ubuntu 18.04 image on Hetzner.
bootstrap.yml:

---
- hosts: all
  gather_facts: false
  tasks:
  - name: Install bootstrap packages
    raw: "apt -y update && apt install -y sudo python-minimal python-setuptools"
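A bootstrap play like this would typically be run against the bare hosts before cluster.yml, so that Python is present before Kubespray starts gathering facts (a sketch; the inventory path matches the command used earlier in this issue):

# Install Python and friends first, then run the normal Kubespray install.
ansible-playbook --user root -i inventory/mycluster/hosts.yml bootstrap.yml
ansible-playbook --user root -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

The raw module works over a bare SSH connection, which is why the play can run with gather_facts: false on hosts that have no Python interpreter yet.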

@vuliad With Ubuntu 18.04 on Hetzner Cloud, the only missing package for me was python-setuptools; python-minimal is already covered by Kubespray's playbooks.

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I ran into the same issue.

Thanks @vuliad for sharing your solution, it pointed me in the right direction. By the way, the original snippet had a stray backtick after `tasks:` that broke the formatting.

So a simple solution would be adding the following at the beginning of the cluster.yml file, right after the ---:

- hosts: all
  gather_facts: false
  tasks:
  - name: Install bootstrap packages
    raw: "apt -y update && apt install -y sudo python-minimal python-setuptools"

Could you please consider merging @khawaga's PR #5252?

/remove-lifecycle rotten
/reopen

@yilmi: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I ran into the same issue.

Thanks @vuliad for sharing your solution, it pointed me in the right direction. By the way, the original snippet had a stray backtick after `tasks:` that broke the formatting.

So a simple solution would be adding the following at the beginning of the cluster.yml file, right after the ---:

- hosts: all
  gather_facts: false
  tasks:
  - name: Install bootstrap packages
    raw: "apt -y update && apt install -y sudo python-minimal python-setuptools"

Could you please consider merging @khawaga's PR #5252?

/remove-lifecycle rotten
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
