Environment:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 3.10.0-957.el7.x86_64 x86_64
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
Version of Ansible (ansible --version):
ansible 2.8.2
config file = /opt/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/openwatt/local/lib/python2.7/site-packages/ansible
executable location = /opt/openwatt/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Version of Python (python --version):
Python 2.7.12
Kubespray version (commit) (git rev-parse --short HEAD):
29cfe2b
Network plugin used: calico
Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):
[all]
node1
node2
node3
node4
node5
node6
[kube-master]
node1
node2
node3
[etcd]
node1
node2
node3
[kube-node]
node4
node5
node6
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
Anything else do we need to know:
I ran this on v2.12.7 of Kubespray to see if it still occurs (it does), but I initially hit it on an older version (v2.12.0) with the same result. I ran a vanilla deployment without changing any variables, so I guess it should be easily reproducible.
My main goal was to rotate the certificates without actually updating the Kubernetes version, so I tried using kubeadm manually and ran into the errors in the title.
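Before reaching for kubeadm at all, the expiry dates can be read straight off the certificate files with openssl. A minimal, self-contained sketch (the demo certificate below stands in for the real files under /etc/kubernetes/pki/, so this runs anywhere):

```shell
# Generate a short-lived self-signed cert to stand in for a real one.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout demo.key -out demo.crt 2>/dev/null

# Print its expiry date, as you would for each /etc/kubernetes/pki/*.crt.
openssl x509 -enddate -noout -in demo.crt

# Exit non-zero if the cert expires within the next 30 days (2592000 s).
openssl x509 -checkend 2592000 -noout -in demo.crt || echo "renew soon"
```

Looping the last two commands over /etc/kubernetes/pki/*.crt gives a quick per-cert expiry report without depending on kubeadm's own (broken, in this case) check-expiration subcommand.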
[root@node1 pki]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 61m v1.16.11
node2 Ready master 60m v1.16.11
node3 Ready master 60m v1.16.11
node4 Ready <none> 59m v1.16.11
node5 Ready <none> 59m v1.16.11
node6 Ready <none> 59m v1.16.11
[root@node1 pki]# kubeadm alpha certs check-expiration
failed to load existing certificate apiserver-etcd-client: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
[root@node1 pki]# kubeadm alpha certs renew all
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
failed to load existing certificate apiserver-etcd-client: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
[root@node1 pki]# ls -la /etc/kubernetes/pki/
total 52
drwxr-xr-x. 2 kube root 288 Jul 6 05:53 .
drwxr-xr-x. 4 kube root 4096 Jul 6 05:57 ..
-rw-r--r--. 1 root root 1574 Jul 6 06:20 apiserver.crt
-rw-------. 1 root root 1679 Jul 6 06:20 apiserver.key
-rw-r--r--. 1 root root 1099 Jul 6 05:53 apiserver-kubelet-client.crt
-rw-------. 1 root root 1675 Jul 6 05:53 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1025 Jul 6 05:53 ca.crt
-rw-------. 1 root root 1675 Jul 6 05:53 ca.key
-rw-r--r--. 1 root root 1038 Jul 6 05:53 front-proxy-ca.crt
-rw-------. 1 root root 1679 Jul 6 05:53 front-proxy-ca.key
-rw-r--r--. 1 root root 1058 Jul 6 05:53 front-proxy-client.crt
-rw-------. 1 root root 1675 Jul 6 05:53 front-proxy-client.key
-rw-------. 1 root root 1679 Jul 6 05:53 sa.key
-rw-------. 1 root root 451 Jul 6 05:53 sa.pub
Dear all,
My Kubernetes cluster's (v1.15.0) certificates expired today.
I noticed it through the kubelet service logs:
Jul 08 17:47:33 km1 kubelet[14277]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See
Jul 08 17:47:33 km1 kubelet[14277]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See
Jul 08 17:47:33 km1 kubelet[14277]: I0708 17:47:33.956822 14277 server.go:425] Version: v1.15.0
Jul 08 17:47:33 km1 kubelet[14277]: I0708 17:47:33.958324 14277 plugins.go:103] No cloud provider specified.
Jul 08 17:47:33 km1 kubelet[14277]: I0708 17:47:33.958429 14277 server.go:791] Client rotation is on, will bootstrap in background
Jul 08 17:47:33 km1 kubelet[14277]: E0708 17:47:33.969183 14277 bootstrap.go:263] Part of the existing bootstrap client certificate is expired: 2020-06-23 06:27:20 +0000
Jul 08 17:47:33 km1 kubelet[14277]: F0708 17:47:33.969303 14277 server.go:273] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-
Jul 08 17:47:33 km1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 08 17:47:33 km1 systemd[1]: kubelet.service: Unit entered failed state.
Jul 08 17:47:33 km1 systemd[1]: kubelet.service: Failed with result 'exit-code'
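The kubelet's client certificate that the log complains about is usually embedded base64-encoded in /etc/kubernetes/kubelet.conf, so its expiry can be checked the same way. A sketch, using a fabricated stand-in kubeconfig so it is runnable anywhere (on a real node, point the grep at /etc/kubernetes/kubelet.conf instead):

```shell
# Build a stand-in kubeconfig with an embedded demo certificate
# (replace demo-kubelet.conf with /etc/kubernetes/kubelet.conf on a node).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubelet" -days 1 \
  -keyout k.key -out k.crt 2>/dev/null
printf 'users:\n- name: default\n  user:\n    client-certificate-data: %s\n' \
  "$(base64 -w0 k.crt)" > demo-kubelet.conf

# Extract the embedded client cert and print its expiry date.
grep 'client-certificate-data' demo-kubelet.conf | awk '{print $2}' \
  | base64 -d | openssl x509 -enddate -noout
```

Note that on clusters with kubelet client rotation enabled, kubelet.conf may instead point at a rotated cert under /var/lib/kubelet/pki/, in which case you would inspect that file directly.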
I then renewed the certificates and got an error that /etc/kubernetes/pki/apiserver-etcd-client.crt was not found:
ubuntu@km1:~$ sudo kubeadm alpha certs renew all
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
failed to load existing certificate apiserver-etcd-client: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory
Could you please help solve it?
Try adding the config file as an argument:
[root@node1 pki]# kubeadm alpha certs renew all --config="/etc/kubernetes/kubeadm-config.yaml"
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
apiserver-etcd-client is not a valid certificate for this cluster
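Since the etcd certificates on a Kubespray cluster are provisioned outside of kubeadm, one possible workaround (an untested sketch, not the official fix) is to renew only the certificates kubeadm does own, one at a time, skipping the etcd-related entries that trigger the error:

```shell
# Hypothetical sketch: renew each kubeadm-managed cert individually,
# skipping apiserver-etcd-client and the etcd-* certs that Kubespray
# manages itself. Report failures and keep going.
for c in admin.conf apiserver apiserver-kubelet-client \
         controller-manager.conf front-proxy-client scheduler.conf; do
  kubeadm alpha certs renew "$c" \
    --config=/etc/kubernetes/kubeadm-config.yaml || echo "renew of $c failed"
done
```

The certificate names above are the per-cert subcommands `kubeadm alpha certs renew` accepts in the 1.15/1.16 releases; the etcd-side certs would still need to be rotated through Kubespray itself.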
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Addressed in this PR: https://github.com/kubernetes-sigs/kubespray/pull/6403