Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
kube-apiserver.manifest metadata.name Invalid
Environment:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Linux 3.10.0-327.36.3.el7.x86_64 x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Version of Ansible (ansible --version): ansible 2.2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
Kargo version (commit) (git rev-parse --short HEAD):
Network plugin used:
Copy of your inventory file:
node1 ansible_ssh_host=192.168.10.205 ansible_ssh_user=root
node2 ansible_ssh_host=192.168.10.65 ansible_ssh_user=root
node3 ansible_ssh_host=192.168.10.30 ansible_ssh_user=root
[kube-master]
node1
node2
[etcd]
node1
node2
node3
[kube-node]
node2
node3
[k8s-cluster:children]
kube-node
kube-master
Command used to invoke ansible:
ansible-playbook -i inventory/inventory.ini cluster.yml -b -v --private-key=~/.ssh/id_rsa
Output of ansible run:
RUNNING HANDLER [kubernetes/master : Master | wait for the apiserver to be running] ***
Thursday 23 March 2017 10:46:16 +0800 (0:00:00.468) 0:08:32.094 ********
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (10 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (10 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (9 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (9 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (8 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (8 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (7 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (7 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (6 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (6 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (5 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (5 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (4 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (4 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (3 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (3 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (2 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (2 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (1 retries left).
FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (1 retries left).
fatal: [node1]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8080/healthz"}
fatal: [node2]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8080/healthz"}
to retry, use: --limit @/home/dev_dean/kargo/cluster.retry
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
node1 : ok=357 changed=105 unreachable=0 failed=1
node2 : ok=394 changed=122 unreachable=0 failed=1
On the master node:
journalctl -u kubelet | grep -v E0323 | less
Mar 23 10:15:16 vm_10_205_centos kubelet[2267]: I0323 02:15:16.398084 2365 file.go:144] Can't process config file "/etc/kubernetes/manifests/kube-apiserver.manifest": invalid pod: [metadata.name: Invalid value: "kube-apiserver-vm_10_205_centos": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com') spec.nodeName: Invalid value: "vm_10_205_centos": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com')]
I think a hostname validation step is needed here, so that an invalid hostname is reported early.
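The names the kubelet rejects fail because underscores are not allowed in a DNS-1123 subdomain. A minimal sketch of such a check, assuming a plain grep -E over the same regex quoted in the kubelet error message:

```shell
# DNS-1123 subdomain check, using the regex from the kubelet error above.
is_valid_dns1123() {
  echo "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
}

is_valid_dns1123 "vm_10_205_centos"  && echo valid || echo invalid  # invalid: underscores
is_valid_dns1123 "node1.k8sp1.local" && echo valid || echo invalid  # valid
```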
I changed the hostname to node1.k8sp1.local, but the issue persists:
Mar 23 10:44:55 node1.k8sp1.local kubelet[8230]: I0323 02:44:55.493510 8269 file.go:144] Can't process config file "/etc/kubernetes/manifests/kube-proxy.manifest": invalid pod: [metadata.name: Invalid value: "kube-proxy-vm_10_205_centos": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com') spec.nodeName: Invalid value: "vm_10_205_centos": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com')]
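Note that the error above still shows the old name (kube-proxy-vm_10_205_centos) even after the rename, which suggests the static pod manifests were rendered with the old hostname and never regenerated. A self-contained sketch of checking a manifest for the stale name (the content below is a hypothetical stand-in for the real files under /etc/kubernetes/manifests/):

```shell
# Simulate a stale manifest and count lines still carrying the old hostname.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy-vm_10_205_centos
spec:
  nodeName: vm_10_205_centos
EOF

stale=$(grep -c 'vm_10_205_centos' "$manifest")
echo "stale references: $stale"   # non-zero means the manifest must be regenerated
rm -f "$manifest"
```

If the count is non-zero after a rename, deleting the stale manifests and re-running the playbook should make Kargo re-render them with the new hostname.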
Fixed it by adding a "set hostname" step at the top of roles/kubernetes/preinstall/tasks/etchosts.yml. The step automatically sets each node's hostname from inventory_hostname:
- name: Hosts | set hostname
  hostname:
    name: "{{ hostname }}"
and set the variable in roles/kubernetes/preinstall/vars/centos.yml:
hostname: "{{ inventory_hostname }}"
@hellwen I have the same issue.