BUG REPORT
- **Hardware**: 2 cores + 4 GB RAM
- **OS** (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):
```
Linux 3.10.0-514.10.2.el7.x86_64 x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- **Version of Ansible** (`ansible --version`):
```
ansible 2.2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
```
- **Kargo version (commit) (`git rev-parse --short HEAD`):**
> v2.1.0
- **Network plugin used**:
> calico
- **Copy of your inventory file:**
```
[all]
node1 ansible_host=192.168.138.168 ip=192.168.138.168
node2 ansible_host=192.168.138.169 ip=192.168.138.169
node3 ansible_host=192.168.138.170 ip=192.168.138.170

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]
```
- **Command used to invoke ansible**:
```
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa
```
- **Output of ansible run**:
```
TASK [kubernetes/master : Pre-upgrade | check for kube-apiserver unit file] *
Friday 10 March 2017 15:59:51 +0800 (0:00:00.320) 0:08:08.037 ***
ok: [node2] => {"changed": false, "stat": {"exists": false}}
ok: [node1] => {"changed": false, "stat": {"exists": false}}
TASK [kubernetes/master : Pre-upgrade | check for kube-apiserver init script] *
Friday 10 March 2017 15:59:51 +0800 (0:00:00.395) 0:08:08.432 ***
ok: [node2] => {"changed": false, "stat": {"exists": false}}
ok: [node1] => {"changed": false, "stat": {"exists": false}}
TASK [kubernetes/master : Pre-upgrade | stop kube-apiserver if service defined]
Friday 10 March 2017 15:59:51 +0800 (0:00:00.384) 0:08:08.816 **
TASK [kubernetes/master : Pre-upgrade | remove kube-apiserver service definition] *
Friday 10 March 2017 15:59:51 +0800 (0:00:00.089) 0:08:08.906 ****
TASK [kubernetes/master : Pre-upgrade | See if kube-apiserver manifest exists] *
Friday 10 March 2017 15:59:52 +0800 (0:00:00.128) 0:08:09.034 **
ok: [node2] => {"changed": false, "stat": {"atime": 1489131096.4959111, "checksum": "427930fa3a336080ad5b31dd54e669e4ea24e933", "ctime": 1489131096.4959111, "dev": 2051, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 826274, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "c53a94d35b1e1770ad68a5098193e055", "mode": "0644", "mtime": 1489131096.0459092, "nlink": 1, "path": "/etc/kubernetes/manifests/kube-apiserver.manifest", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2209, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
TASK [kubernetes/master : Pre-upgrade | Write invalid image to kube-apiserver manifest if secrets were changed] *
Friday 10 March 2017 15:59:52 +0800 (0:00:00.280) 0:08:09.315 ****
changed: [node2] => {"changed": true, "msg": "1 replacements made"}
TASK [kubernetes/master : Pre-upgrade | Pause while waiting for kubelet to delete kube-apiserver pod] *
Friday 10 March 2017 15:59:52 +0800 (0:00:00.545) 0:08:09.860 ****
TASK [kubernetes/master : Copy kubectl from hyperkube container] ****
Friday 10 March 2017 15:59:53 +0800 (0:00:00.056) 0:08:09.917 ***
ok: [node1] => {"attempts": 1, "changed": false, "cmd": ["/usr/bin/docker", "run", "--rm", "-v", "/usr/local/bin:/systembindir", "quay.io/coreos/hyperkube:v1.5.3_coreos.0", "/bin/cp", "/hyperkube", "/systembindir/kubectl"], "delta": "0:00:05.719675", "end": "2017-03-10 15:59:58.979345", "rc": 0, "start": "2017-03-10 15:59:53.259670", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
ok: [node2] => {"attempts": 1, "changed": false, "cmd": ["/usr/bin/docker", "run", "--rm", "-v", "/usr/local/bin:/systembindir", "quay.io/coreos/hyperkube:v1.5.3_coreos.0", "/bin/cp", "/hyperkube", "/systembindir/kubectl"], "delta": "0:00:58.963368", "end": "2017-03-10 16:00:52.256051", "rc": 0, "start": "2017-03-10 15:59:53.292683", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
TASK [kubernetes/master : Install kubectl bash completion] ******
Friday 10 March 2017 16:00:52 +0800 (0:00:59.268) 0:09:09.185 ***
changed: [node1] => {"changed": true, "cmd": "/usr/local/bin/kubectl completion bash >/etc/bash_completion.d/kubectl.sh", "delta": "0:00:00.245541", "end": "2017-03-10 16:00:52.781420", "rc": 0, "start": "2017-03-10 16:00:52.535879", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
changed: [node2] => {"changed": true, "cmd": "/usr/local/bin/kubectl completion bash >/etc/bash_completion.d/kubectl.sh", "delta": "0:00:00.895981", "end": "2017-03-10 16:00:54.007936", "rc": 0, "start": "2017-03-10 16:00:53.111955", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
TASK [kubernetes/master : Set kubectl bash completion file] *****
Friday 10 March 2017 16:00:54 +0800 (0:00:01.748) 0:09:10.934 ***
ok: [node1] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/bash_completion.d/kubectl.sh", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 286345, "state": "file", "uid": 0}
ok: [node2] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/bash_completion.d/kubectl.sh", "size": 286345, "state": "file", "uid": 0}
TASK [kubernetes/master : Write kube-apiserver manifest] ******
Friday 10 March 2017 16:00:54 +0800 (0:00:00.323) 0:09:11.258 ***
ok: [node1] => {"changed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/kubernetes/manifests/kube-apiserver.manifest", "secontext": "system_u:object_r:etc_t:s0", "size": 2209, "state": "file", "uid": 0}
changed: [node2] => {"changed": true, "checksum": "427930fa3a336080ad5b31dd54e669e4ea24e933", "dest": "/etc/kubernetes/manifests/kube-apiserver.manifest", "gid": 0, "group": "root", "md5sum": "c53a94d35b1e1770ad68a5098193e055", "mode": "0644", "owner": "root", "size": 2209, "src": "/root/.ansible/tmp/ansible-tmp-1489132854.45-203931532834816/source", "state": "file", "uid": 0}
RUNNING HANDLER [kubernetes/master : Master | wait for the apiserver to be running] *
Friday 10 March 2017 16:00:55 +0800 (0:00:00.769) 0:09:12.028 ****
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (10 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (9 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (8 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (7 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (6 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (5 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (4 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (3 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (2 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (1 retries left).
fatal: [node2]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed:
TASK [kubernetes/master : copy kube system namespace manifest] ****
Friday 10 March 2017 16:10:59 +0800 (0:10:04.213) 0:19:16.241 ***
changed: [node1] => {"changed": true, "checksum": "8cf7340ba48168a824d7fd953424da1f10a1c85f", "dest": "/etc/kubernetes/kube-system-ns.yml", "gid": 0, "group": "root", "md5sum": "c4c89043e091d455e040d334ebecc73f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 72, "src": "/root/.ansible/tmp/ansible-tmp-1489133460.08-188766057627979/source", "state": "file", "uid": 0}
TASK [kubernetes/master : Check if kube system namespace exists] ****
Friday 10 March 2017 16:11:00 +0800 (0:00:01.094) 0:19:17.336 ***
ok: [node1] => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "get", "ns", "kube-system"], "delta": "0:00:00.219695", "end": "2017-03-10 16:11:00.882872", "failed": false, "failed_when_result": false, "rc": 1, "start": "2017-03-10 16:11:00.663177", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stdout": "", "stdout_lines": [], "warnings": []}
TASK [kubernetes/master : Create kube system namespace] *******
Friday 10 March 2017 16:11:00 +0800 (0:00:00.499) 0:19:17.835 ***
fatal: [node1]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "create", "-f", "/etc/kubernetes/kube-system-ns.yml"], "delta": "0:00:00.186650", "end": "2017-03-10 16:11:01.333078", "failed": true, "rc": 1, "start": "2017-03-10 16:11:01.146428", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stdout": "", "stdout_lines": [], "warnings": []}
NO MORE HOSTS LEFT *******************
to retry, use: --limit @/workspace/kargo/cluster.retry
PLAY RECAP ***********************
localhost : ok=3 changed=0 unreachable=0 failed=0
node1 : ok=347 changed=33 unreachable=0 failed=1
node2 : ok=348 changed=44 unreachable=0 failed=1
node3 : ok=325 changed=37 unreachable=0 failed=0
kubernetes/master : Master | wait for the apiserver to be running ----- 604.21s
kubernetes/master : Copy kubectl from hyperkube container -------------- 59.27s
download : Register docker images info --------------------------------- 32.66s
kubernetes/preinstall : Update package management cache (YUM) ---------- 13.42s
network_plugin/calico : Calico | Copy cni plugins from calico/cni container -- 11.82s
kubernetes/node : Enable kubelet ---------------------------------------- 8.32s
kubernetes/preinstall : Install packages requirements ------------------- 7.61s
network_plugin/calico : Calico | Copy cni plugins from hyperkube -------- 6.60s
etcd : Install | Copy etcdctl binary from docker container -------------- 3.72s
etcd : Install | Copy etcdctl binary from docker container -------------- 2.88s
etcd : Configure | Check if cluster is healthy -------------------------- 2.17s
download : Register docker images info ---------------------------------- 2.09s
etcd : Configure | Check if cluster is healthy -------------------------- 1.85s
kubernetes/preinstall : Install epel-release on RedHat/CentOS ----------- 1.83s
download : Register docker images info ---------------------------------- 1.79s
kubernetes/master : Install kubectl bash completion --------------------- 1.75s
download : Register docker images info ---------------------------------- 1.69s
download : Register docker images info ---------------------------------- 1.45s
network_plugin/calico : Calico | Set global as_num ---------------------- 1.43s
download : Register docker images info ---------------------------------- 1.34s
```
- **Docker images on all the hosts**:
```
[root@s168 kargo]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
redis-sentinel test 25fbda1beb4e 3 days ago 20.5 MB
registry.cn-shenzhen.aliyuncs.com/quay-io/etcd latest b64fb2914f13 4 days ago 33.6 MB
mqtest latest 03c51dd2cda9 4 days ago 227 MB
registry.cn-hangzhou.aliyuncs.com/kb88/rabbitmq-autocluster latest 24e462e894bf 5 days ago 221 MB
nginx latest 6b914bbcb89e 9 days ago 182 MB
hy/rabbitmq 3.6.6 a95128d68527 9 days ago 232 MB
hy/redis 3.2-alpine f7bb71f5e53e 11 days ago 20.5 MB
quay.io/coreos/flannel v0.7.0-amd64 07afd4ff9ab3 2 weeks ago 73.8 MB
registry.cn-hangzhou.aliyuncs.com/gcr-io/flannel v0.7.0-amd64 07afd4ff9ab3 2 weeks ago 73.8 MB
redis 3.2-alpine 0d39481626b2 3 weeks ago 20.5 MB
quay.io/coreos/hyperkube v1.5.3_coreos.0 4bea8f724e0f 3 weeks ago 641 MB
quay.io/l23network/k8s-netchecker-agent v1.0 4620e0f4070b 5 weeks ago 4.71 MB
weaveworks/weave-npc 1.9.0 460b9ad16e86 5 weeks ago 58.2 MB
weaveworks/weave-kube 1.9.0 568b0ac069ad 5 weeks ago 163 MB
gcr.io/google_containers/nginx-ingress-controller 0.8.3 e5db53ef2b86 5 weeks ago 149 MB
calico/ctl v1.0.2 c2631b8fe32d 5 weeks ago 42.7 MB
calico/cni v1.5.6 1ada35551018 5 weeks ago 67.2 MB
calico/node v1.0.2 ff8c7b8fd9dd 5 weeks ago 257 MB
quay.io/l23network/k8s-netchecker-server v1.0 28f89b4b36ac 5 weeks ago 24.2 MB
daocloud.io/rabbitmq 3.6.6 ec798eba2c56 7 weeks ago 179 MB
rabbitmq 3.6.6 ec798eba2c56 7 weeks ago 179 MB
weaveworks/weave-kube 1.7.2 0fe0d39d7ba5 7 weeks ago 163 MB
daocloud.io/library/debian jessie e5599115b6a6 7 weeks ago 123 MB
debian jessie e5599115b6a6 7 weeks ago 123 MB
hello-world latest 48b5124b2768 7 weeks ago 1.84 kB
busybox latest 7968321274dc 7 weeks ago 1.11 MB
gcr.io/google_containers/fluentd-elasticsearch 1.22 7896bdf952bf 8 weeks ago 266 MB
gcr.io/google_containers/kube-proxy-amd64 v1.5.2 a7a8d4700d00 8 weeks ago 174 MB
gcr.io/google_containers/kube-apiserver-amd64 v1.5.2 e457f05ce81e 8 weeks ago 126 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.5.2 af12e87af780 8 weeks ago 54 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.5.2 e4fcd210c55b 8 weeks ago 103 MB
registry.cn-hangzhou.aliyuncs.com/google-containers/flannel-git v0.7.0-1-g8f00587-amd64 e2ca75fb4017 8 weeks ago 74.5 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 8 weeks ago 104 MB
kubernetes/heapster canary 0a56f7040da5 8 weeks ago 971 MB
zookeeper 3.4.9 2834fb8fd569 2 months ago 155 MB
zookeeper latest 2834fb8fd569 2 months ago 155 MB
registry.cn-hangzhou.aliyuncs.com/spacexnice/nginx latest 01f818af747d 2 months ago 182 MB
jboss/base-jdk 8 8babbebd7a68 2 months ago 420 MB
gcr.io/google_containers/kube-apiserver-amd64 v1.5.1 8c12509df629 2 months ago 124 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.5.1 cd5684031720 2 months ago 102 MB
gcr.io/google_containers/kube-proxy-amd64 v1.5.1 71d2b27b03f6 2 months ago 176 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.5.1 6506e7b74dac 2 months ago 54 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.0 e5133bac8024 3 months ago 88.9 MB
gcr.io/google_containers/elasticsearch v2.4.1 358e3f7fd81e 3 months ago 412 MB
gcr.io/google_containers/etcd-amd64 3.0.14-kubeadm 856e39ac7be3 3 months ago 175 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kibana v4.6.1 b65f0ed31993 4 months ago 237 MB
gcr.io/google_containers/dnsmasq-metrics-amd64 1.0 5271aabced07 4 months ago 14 MB
gcr.io/google_containers/heapster_grafana v3.1.1 41b92a01197f 4 months ago 279 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB
gcr.io/google_containers/kube-discovery-amd64 1.0 c5e0c9a457fc 5 months ago 134 MB
nginx 1.11.4-alpine 00bc1e841a8f 5 months ago 54.2 MB
gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB
gcr.io/google_containers/etcd-amd64 2.2.5 5539402ce2d2 6 months ago 30.4 MB
gcr.io/google_containers/kubedns-amd64 1.7 bec33bc01f03 6 months ago 55.1 MB
quay.io/coreos/etcd v3.0.6 7529cce6b005 6 months ago 43.4 MB
gcr.io/google_containers/exechealthz-amd64 1.1 c3a89c92ef5b 7 months ago 8.33 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.3 9a15e39d0db8 8 months ago 5.13 MB
registry.cn-hangzhou.aliyuncs.com/google-containers/kube-dnsmasq-amd64 1.3 9a15e39d0db8 8 months ago 5.13 MB
gcr.io/google_containers/echoserver 1.4 a90209bb39e3 9 months ago 140 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 10 months ago 747 kB
kubernetes/heapster_influxdb v0.6 a40222acdde6 15 months ago 271 MB
andyshinn/dnsmasq 2.72 37aabe06468e 15 months ago 6.27 MB
gcr.io/google_containers/defaultbackend 1.0 137a07dfd084 16 months ago 7.51 MB
```
Have you tried to rerun it? Sometimes it's only a timeout or a very slow download.
Yeah, I have run it several times, and I also increased the delay to 60s, but it got the same result.
By the way, all the images (above) had been downloaded before running the script and loaded onto all the nodes.
Did I miss something or do something wrong?
```
- name: Master | wait for the apiserver to be running
  uri:
    url: http://localhost:8080/healthz
  register: result
  until: result.status == 200
  retries: 10
  delay: 60
```
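For anyone debugging this handler: a quick manual probe from the failing master tells you whether the apiserver is listening at all. A minimal sketch, assuming the insecure local port 8080 polled by the task above and a Docker-based deployment:

```
# Probe the same endpoint the handler polls
curl -sS http://localhost:8080/healthz; echo

# If the connection is refused, check whether the apiserver container
# ever started, and why it may have exited
docker ps -a | grep -i apiserver
docker logs <container-id> 2>&1 | tail -n 20   # <container-id>: use an ID from the line above
```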
@davidopp @ant31
After I read the project code, I found that some images had not downloaded successfully, so downloading the missing images fixed my problem.
All the images are listed below (a pre-pull sketch follows the list).
```
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/cluster-proportional-autoscaler-amd64 1.1.1 f6ddc7506801 37 hours ago 48.2 MB
quay.io/l23network/routereflector v0.1 e07574e67e92 2 days ago 207 MB
quay.io/coreos/hyperkube v1.5.3_coreos.0 4bea8f724e0f 4 weeks ago 641 MB
quay.io/l23network/k8s-netchecker-agent v1.0 4620e0f4070b 5 weeks ago 4.71 MB
calico/ctl v1.0.2 c2631b8fe32d 6 weeks ago 42.7 MB
calico/cni v1.5.6 1ada35551018 6 weeks ago 67.2 MB
calico/node v1.0.2 ff8c7b8fd9dd 6 weeks ago 257 MB
quay.io/l23network/k8s-netchecker-server v1.0 28f89b4b36ac 6 weeks ago 24.2 MB
busybox latest 7968321274dc 2 months ago 1.11 MB
gcr.io/google_containers/fluentd-elasticsearch 1.22 7896bdf952bf 2 months ago 266 MB
calico/kube-policy-controller v0.5.2 74f239528a80 2 months ago 31.6 MB
gcr.io/google_containers/elasticsearch v2.4.1 358e3f7fd81e 3 months ago 412 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kibana v4.6.1 b65f0ed31993 4 months ago 237 MB
nginx 1.11.4-alpine 00bc1e841a8f 5 months ago 54.2 MB
gcr.io/google_containers/kubedns-amd64 1.7 bec33bc01f03 6 months ago 55.1 MB
quay.io/coreos/etcd v3.0.6 7529cce6b005 6 months ago 43.4 MB
gcr.io/google_containers/exechealthz-amd64 1.1 c3a89c92ef5b 7 months ago 8.33 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.3 9a15e39d0db8 9 months ago 5.13 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 10 months ago 747 kB
andyshinn/dnsmasq 2.72 37aabe06468e 15 months ago 6.27 MB
```
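If you hit the same problem, a loop like the one below can pre-pull a known-good image list onto every node before running the playbook. A minimal sketch; the image subset and node names are illustrative, not the complete set Kargo needs:

```
# Illustrative subset of the images listed above
IMAGES="quay.io/coreos/hyperkube:v1.5.3_coreos.0
calico/node:v1.0.2
calico/cni:v1.5.6
calico/ctl:v1.0.2
gcr.io/google_containers/pause-amd64:3.0"

for node in node1 node2 node3; do
  for img in $IMAGES; do
    # Pull only when the image is not already present on the node
    ssh "$node" "docker inspect $img >/dev/null 2>&1 || docker pull $img"
  done
done
```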
I hope it can be helpful.
@kaybinwong Which images did not download successfully?
Hi, is there any workaround or fix for this? I'm on CentOS and getting a similar error:
```
FAILED - RETRYING: Master | wait for the apiserver to be running (1 retries left).
fatal: [192.168.122.101]: FAILED! => {"attempts": 20, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed:
```
Please help!
Finally, I got my k8s v1.6.7 running on CentOS 7.3!
After struggling for 2 days and digging into previous issues, I finally found that this is due to low memory. Near the end, the script upgrades the k8s components such as etcd, apiserver, and controller-manager, and restarts them. During the restart very little memory is left on the node, so the apiserver fails to start; the "wait for the apiserver" task keeps retrying, but with too little memory it never comes back.
I increased memory to 2 GB on each node of my 1 master + 2 minion setup for Kubernetes 1.6.7 using kubespray, and finally it's up and running.
If you have used libvirt/KVM to spin up your CentOS VMs, the commands below increase the memory (`<vm-name>` stands for your libvirt domain name):
```
virsh shutdown <vm-name>
virsh setmaxmem <vm-name> 4G --config
virsh setmem <vm-name> 2G --config
virsh start <vm-name>
```
PS: I wish the Kubespray prerequisites mentioned keeping node memory at 2 GB or more.
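Before blaming the playbook, it's worth a quick pre-flight check of free memory on every node. A small sketch, with hostnames assumed from the inventory above:

```
# Show memory (in MB) on each node; watch the 'available' column
for node in node1 node2 node3; do
  echo "== $node =="
  ssh "$node" free -m
done
```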
Just hit exactly the same behavior on xenial with 2 GB RAM; raised it to 3 GB and the problem was solved.
I came across this symptom, but it had nothing to do with a misconfigured kubespray inventory; it was operator error. Ansible caches the state of a host from previous runs in /tmp, so if you change the host IPs in your inventory but keep the same hostnames, Ansible will bork the playbook installation and ultimately fail with this unrelated error message.
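If you suspect stale facts, clearing the cache before re-running usually helps. A minimal sketch; the on-disk path depends on `fact_caching_connection` in your ansible.cfg, and `/tmp/facts` is only an example:

```
# Remove the on-disk fact cache left by previous runs
rm -rf /tmp/facts

# Or let ansible-playbook drop its fact cache on the next run
ansible-playbook -i inventory/inventory.cfg cluster.yml -b --flush-cache
```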
Facing the same problem with ubuntu16.04 cloud images, with 8 GB RAM allocated. Still facing the same issue; any idea?
Logs:
```
TASK [kubernetes/master : Write kube-apiserver manifest] ************************************
Sunday 26 November 2017 05:54:45 -0700 (0:00:00.037) 0:10:52.327 **
changed: [mastera]
changed: [meniona1]
RUNNING HANDLER [kubernetes/master : Master | Restart apiserver] *********************************
Sunday 26 November 2017 05:54:45 -0700 (0:00:00.585) 0:10:52.913 *
changed: [mastera]
changed: [meniona1]
RUNNING HANDLER [kubernetes/master : Master | Remove apiserver container] ******************************
Sunday 26 November 2017 05:54:46 -0700 (0:00:00.275) 0:10:53.188 *
changed: [mastera]
changed: [meniona1]
RUNNING HANDLER [kubernetes/master : Master | wait for the apiserver to be running] ***************************
Sunday 26 November 2017 05:54:46 -0700 (0:00:00.203) 0:10:53.392 **
FAILED - RETRYING: Master | wait for the apiserver to be running (20 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (20 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (19 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (19 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (18 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (18 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (17 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (17 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (16 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (15 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (16 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (14 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (15 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (13 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (14 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (12 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (13 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (11 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (12 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (10 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (11 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (9 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (10 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (8 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (9 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (7 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (8 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (6 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (7 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (5 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (6 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (4 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (5 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (3 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (4 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (2 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (3 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (1 retries left).
FAILED - RETRYING: Master | wait for the apiserver to be running (2 retries left).
fatal: [meniona1]: FAILED! => {"attempts": 20, "changed": false, "connection": "close", "content": "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\nhealthz check failed\n", "content_length": "750", "content_type": "text/plain; charset=utf-8", "date": "Sun, 26 Nov 2017 12:56:50 GMT", "failed": true, "msg": "Status code was not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://127.0.0.1:8080/healthz", "x_content_type_options": "nosniff"}
FAILED - RETRYING: Master | wait for the apiserver to be running (1 retries left).
fatal: [mastera]: FAILED! => {"attempts": 20, "changed": false, "connection": "close", "content": "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\nhealthz check failed\n", "content_length": "729", "content_type": "text/plain; charset=utf-8", "date": "Sun, 26 Nov 2017 12:56:59 GMT", "failed": true, "msg": "Status code was not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://127.0.0.1:8080/healthz", "x_content_type_options": "nosniff"}
to retry, use: --limit @/home/ubuntu/kargo/cluster.retry
PLAY RECAP ***************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
mastera : ok=260 changed=52 unreachable=0 failed=1
meniona1 : ok=296 changed=67 unreachable=0 failed=1
meniona2 : ok=251 changed=51 unreachable=0 failed=0
kubernetes/node : install | Compare host kubelet with hyperkube container ------------------------------------------------------------------------------------- 153.83s
kubernetes/master : Master | wait for the apiserver to be running --------------------------------------------------------------------------------------------- 133.94s
download : container_download | Download containers if pull is required or told to always pull ---------------------------------------------------------------- 106.28s
download : container_download | Download containers if pull is required or told to always pull ----------------------------------------------------------------- 53.60s
kubernetes/node : Ensure nodePort range is reserved ------------------------------------------------------------------------------------------------------------ 32.83s
etcd : Install | Copy etcdctl binary from docker container ----------------------------------------------------------------------------------------------------- 26.37s
kubernetes/master : Compare host kubectl with hyperkube container ---------------------------------------------------------------------------------------------- 21.07s
etcd : Configure | Join member(s) to cluster one at a time ----------------------------------------------------------------------------------------------------- 20.15s
etcd : Configure | Join member(s) to cluster one at a time ----------------------------------------------------------------------------------------------------- 20.13s
download : container_download | Download containers if pull is required or told to always pull ----------------------------------------------------------------- 19.11s
download : container_download | Download containers if pull is required or told to always pull ----------------------------------------------------------------- 12.96s
etcd : reload etcd --------------------------------------------------------------------------------------------------------------------------------------------- 10.52s
kubernetes/node : install | Copy kubelet from hyperkube container ----------------------------------------------------------------------------------------------- 9.39s
etcd : wait for etcd up ----------------------------------------------------------------------------------------------------------------------------------------- 6.94s
kubernetes/node : Verify if br_netfilter module exists ---------------------------------------------------------------------------------------------------------- 6.20s
kubernetes/secrets : Check certs | check if a cert already exists on node --------------------------------------------------------------------------------------- 3.41s
kubernetes/secrets : Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS) --------------------------------------------------------------- 2.58s
etcd : Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS) ----------------------------------------------------------------------------- 2.17s
etcd : Gen_certs | run cert generation script ------------------------------------------------------------------------------------------------------------------- 2.01s
kubernetes/master : Copy kubectl from hyperkube container ------------------------------------------------------------------------------------------------------- 1.89s
root@controller281573:/home/ubuntu/kargo#
```
I have exactly the same problem.
What looks suspicious is the removal of the apiserver container right after the restart.
When running `docker images` on the designated master, the apiserver image is not listed.
The mistake I made was to pre-install kubelet, kubeadm, and kubectl on the target nodes. When I left that job to Kubespray, the cluster came up without any problem.
Going through the logs, the problem seemed to be kubelet running into certificate issues: it can't find the certs where it expects them to be.
BTW: I was using the kubeadm deployment mechanism.
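To check for the kind of kubelet certificate trouble described above, the kubelet journal and the usual cert directories are a reasonable first stop. A sketch, assuming systemd and the cert paths Kubespray/kubeadm typically use (adjust for your deployment):

```
# Look for certificate-related errors in the kubelet logs
journalctl -u kubelet --no-pager | grep -iE 'cert|x509' | tail -n 20

# Verify the certs exist where kubelet expects them
# (/etc/kubernetes/ssl for classic Kubespray, /etc/kubernetes/pki for kubeadm)
ls -l /etc/kubernetes/ssl /etc/kubernetes/pki 2>/dev/null
```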
Is there any solution for this? I'm facing the same issue, and I do have enough memory on the VM.
In our case, we were using a self-signed certificate in our OpenStack environment. After I installed the correct certs on the instances (master, nodes, etc.), I was able to run the playbook successfully.
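For reference, on CentOS/RHEL one way to make nodes trust a private or self-signed CA is the system trust store. A minimal sketch; the certificate filename is illustrative:

```
# Add the CA certificate to the system trust store on each node
sudo cp my-internal-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
```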