Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE
If this is a FEATURE REQUEST, please describe:
Please add a method to update and/or configure the SubjectAltName (SAN) on the certificate of the k8s API server.
I use kubespray to deploy Kubernetes to a custom bare-metal environment. I then create a hostname record pointing to the k8s API, and I would like to use this hostname in my KUBECONFIG.
Environment:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 4.4.0-137-generic x86_64
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Version of Ansible (ansible --version):
ansible 2.6.1
config file = /home/user/kubespray/ansible.cfg
configured module search path = [u'/home/user/kubespray/library']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
Kubespray version (commit) (git rev-parse --short HEAD):
tag:v2.7.0
05dabb7e
Network plugin used:
weave
Copy of your inventory file:
n/a
Command used to invoke ansible:
ansible-playbook -i ./inventory/cluster/hosts.ini ./cluster.yml
Output of ansible run:
n/a
Anything else we need to know:
My issue manifests itself when I use a hostname in my KUBECONFIG.
Here is the error I see:
kubectl get pod
Unable to connect to the server: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master-1, master-2, master-3, lb-apiserver.kubernetes.local, not api.k8s.example.com
I added my hostname value to kube_cert_alt_names in roles/kubernetes/secrets/tasks/gen_certs_vault.yml, then deleted the apiserver certificates as instructed here: https://github.com/kubernetes-incubator/kubespray/issues/2164#issuecomment-367142343
Re-running cluster.yml then fixed my problem, but I don't think this is the right way to do it.
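For anyone verifying the result, the SAN list that the x509 error complains about can be inspected with openssl. A minimal sketch, assuming OpenSSL 1.1.1 or newer (for the `-addext` and `-ext` flags); it generates a throwaway certificate with a placeholder hostname rather than touching the cluster's:

```shell
# Generate a throwaway self-signed cert carrying the SANs we want the
# apiserver cert to end up with (api.k8s.example.com is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:api.k8s.example.com,IP:10.233.0.1"

# Print the SAN extension; the custom hostname must appear as its own
# DNS entry for kubectl to accept the connection.
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```

The same `openssl x509 ... -noout -ext subjectAltName` invocation can be pointed at the real apiserver certificate on a master node to confirm the regenerated SAN list.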
Hello,
modifying kube_cert_alt_names did not work for me.
I had to modify the "kubeadm | aggregate all SANs" task in roles/kubernetes/master/tasks/kubeadm-setup.yml.
I also had to restart the control-plane pods in the kube-system namespace:
for i in $(docker ps | egrep 'admin|controller|scheduler|api|fron|proxy' | rev | awk '{print $1}' | rev); do docker stop $i; done
As you said, it's not really flexible, but at least it solves the problem for now.
Actually, there is supplementary_addresses_in_ssl_keys in k8s-cluster.yml.
Thanks for the suggestion @zzzuzik, but unfortunately it didn't work for me.
Used config:
supplementary_addresses_in_ssl_keys:
- k8s.example.com
For some reason it prepends the IP address to the DNS name, which results in 172.31.3.209k8s.example.com instead of k8s.example.com.
I ended up using a combination of @cbluth's and @lefeverd's solutions.
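To spot such a fused entry on a running master, listing the SANs one per line helps. A sketch; the certificate path is an assumption (kubeadm-based deployments typically use /etc/kubernetes/pki/apiserver.crt, older cert-script deployments use a different layout), so adjust it to your cluster:

```shell
# Path to the apiserver certificate; adjust for your deployment (assumption).
CERT=${CERT:-/etc/kubernetes/pki/apiserver.crt}

# Print one SAN per line so a fused entry such as
# "DNS:172.31.3.209k8s.example.com" stands out immediately.
openssl x509 -in "$CERT" -noout -text \
  | grep -A1 'Subject Alternative Name' \
  | tr ',' '\n' \
  | sed 's/^ *//'
```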
@valerius257
Try re-running the cluster.yml playbook with the extra arguments below, after setting supplementary_addresses_in_ssl_keys and the inventory variable kubeadm_enabled: false in all.yml:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
Unfortunately it does not work.
Using the command
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --extra-vars "gen_certs=true,sync_certs=true, gen_master_certs=true"
I get this error:
TASK [etcd : Gen_certs | run cert generation script] **********************************************************************************************************************************************************************************task path: /mnt/d/@Work/DWCH/DWCH_SRC/DSN-9922/sparay/kubespray-2.8.3/roles/etcd/tasks/gen_certs_script.yml:54
Thursday 04 April 2019 19:24:54 +0300 (0:00:02.272) 0:01:54.166 ********
Using module file /home/valery_navakolski/.local/lib/python2.7/site-packages/ansible/modules/commands/command.py
<node1> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<node1> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/home/valery_navakolski/DWCH_standalonelinuxserver.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/valery_navakolski/.ansible/cp/0c7861752b node1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-zwbitkxewzyspqffjswpvxmvmihcxqcm; MASTERS='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' '"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HOSTS='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' '"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded
<node1> Failed to connect to the host via ssh: OpenSSH_7.9p1 Debian-6, OpenSSL 1.1.1a 20 Nov 2018
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolve_canonicalize: hostname node1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 28030
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
fatal: [node1 -> node1]: FAILED! => {
"changed": true,
"cmd": [
"bash",
"-x",
"/usr/local/bin/etcd-scripts/make-ssl-etcd.sh",
"-f",
"/etc/ssl/etcd/openssl.conf",
"-d",
"/etc/ssl/etcd/ssl"
],
"delta": "0:00:00.008081",
"end": "2019-04-04 16:24:54.759824",
"invocation": {
"module_args": {
"_raw_params": "bash -x /usr/local/bin/etcd-scripts/make-ssl-etcd.sh -f /etc/ssl/etcd/openssl.conf -d /etc/ssl/etcd/ssl",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2019-04-04 16:24:54.751743",
"stderr": "+ set -o errexit\n+ set -o pipefail\n+ (( 4 ))\n+ case \"$1\" in\n+ CONFIG=/etc/ssl/etcd/openssl.conf\n+ shift 2\n+ (( 2 ))\n+ case \"$1\" in\n+ SSLDIR=/etc/ssl/etcd/ssl\n+ shift 2\n+ (( 0 ))\n+ '[' -z /etc/ssl/etcd/openssl.conf ']'\n+ '[' -z /etc/ssl/etcd/ssl ']'\n++ mktemp -d /tmp/etcd_cacert.XXXXXX\n+ tmpdir=/tmp/etcd_cacert.7mcjX7\n+ trap 'rm -rf \"${tmpdir}\"' EXIT\n+ cd /tmp/etcd_cacert.7mcjX7\n+ mkdir -p /etc/ssl/etcd/ssl\n+ '[' -e /etc/ssl/etcd/ssl/ca-key.pem ']'\n+ cp /etc/ssl/etcd/ssl/ca.pem /etc/ssl/etcd/ssl/ca-key.pem .\n+ '[' -n ' ' ']'\n+ '[' -n ' ' ']'\n+ '[' -e /etc/ssl/etcd/ssl/ca-key.pem ']'\n+ rm -f ca.pem ca-key.pem\n+ mv '*.pem' /etc/ssl/etcd/ssl/\nmv: cannot stat '*.pem': No such file or directory\n+ rm -rf /tmp/etcd_cacert.7mcjX7",
"stderr_lines": [
"+ set -o errexit",
"+ set -o pipefail",
"+ (( 4 ))",
"+ case \"$1\" in",
"+ CONFIG=/etc/ssl/etcd/openssl.conf",
"+ shift 2",
"+ (( 2 ))",
"+ case \"$1\" in",
"+ SSLDIR=/etc/ssl/etcd/ssl",
"+ shift 2",
"+ (( 0 ))",
"+ '[' -z /etc/ssl/etcd/openssl.conf ']'",
"+ '[' -z /etc/ssl/etcd/ssl ']'",
"++ mktemp -d /tmp/etcd_cacert.XXXXXX",
"+ tmpdir=/tmp/etcd_cacert.7mcjX7",
"+ trap 'rm -rf \"${tmpdir}\"' EXIT",
"+ cd /tmp/etcd_cacert.7mcjX7",
"+ mkdir -p /etc/ssl/etcd/ssl",
"+ '[' -e /etc/ssl/etcd/ssl/ca-key.pem ']'",
"+ cp /etc/ssl/etcd/ssl/ca.pem /etc/ssl/etcd/ssl/ca-key.pem .",
"+ '[' -n ' ' ']'",
"+ '[' -n ' ' ']'",
"+ '[' -e /etc/ssl/etcd/ssl/ca-key.pem ']'",
"+ rm -f ca.pem ca-key.pem",
"+ mv '*.pem' /etc/ssl/etcd/ssl/",
"mv: cannot stat '*.pem': No such file or directory",
"+ rm -rf /tmp/etcd_cacert.7mcjX7"
],
"stdout": "",
"stdout_lines": []
}
NO MORE HOSTS LEFT ********************************************************************************************************************************************************************************************************************
to retry, use: --limit @/mnt/d/@Work/DWCH/DWCH_SRC/DSN-9922/sparay/kubespray-2.8.3/cluster.retry
And when I use the command
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --extra-vars "sync_certs=true, gen_master_certs=true"
the resulting certificate is still incorrect; it has the IP and DNS name fused together: 172.31.9.161api-k8s.example.com
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 6264719283834587766 (0x56f0c0ff2a800a76)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Apr 4 17:06:51 2019 GMT
Not After : Apr 3 17:06:51 2020 GMT
Subject: CN = kube-apiserver
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a3:fb:e6:a6:0a:dc:f2:f5:ba:80:93:3a:44:3f:
f8:33:cf:92:60:15:34:8a:88:31:bc:de:c9:b7:9f:
00:da:94:31:3d:03:a5:be:e3:71:49:df:99:a4:00:
a1:fd:50:da:ed:43:37:b4:ff:6d:cb:1f:b7:3a:4e:
3a:cc:ed:9c:2a:bd:9d:d9:1e:bd:8a:55:38:d5:43:
34:bb:2e:3c:e6:6d:9b:49:d8:ee:14:fa:51:9a:86:
0a:54:f4:6a:85:56:2c:34:bf:54:24:a6:32:25:8a:
9e:57:86:12:46:d4:3b:82:4e:35:aa:4c:25:05:00:
32:85:9b:9e:06:56:70:dc:6d:9d:80:fd:0b:5a:16:
95:cb:1a:cc:d6:35:2f:42:db:43:8d:90:fd:32:f1:
7c:0e:6e:51:9c:65:cc:d5:82:90:f1:4d:00:5b:e4:
a5:8f:33:19:a9:6e:f2:b3:32:32:7d:3f:ad:84:77:
58:a8:1e:b7:aa:d7:e3:60:14:b8:9b:1e:27:c7:c6:
9d:10:ea:19:dc:33:e3:4c:27:3b:13:14:59:53:5b:
05:f9:c2:34:05:75:18:d0:48:5d:97:c2:99:60:c0:
72:77:c0:0a:c0:d5:fc:8c:03:e1:1f:b4:af:f6:1f:
6a:15:3f:d5:80:15:70:c7:9e:5c:cf:81:9c:9a:09:
a9:29
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Subject Alternative Name:
DNS:node1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:node1, DNS:172.31.9.161api-k8s.example.com, IP Address:10.233.0.1, IP Address:172.31.9.161, IP Address:172.31.9.161, IP Address:10.233.0.1, IP Address:127.0.0.1
Signature Algorithm: sha256WithRSAEncryption
cb:93:0a:48:b9:26:0f:17:37:33:0d:98:53:0f:ca:60:e3:79:
7f:e8:ca:cc:4f:a0:e8:10:91:f6:e4:81:c9:b0:1d:d5:fb:13:
e0:3d:44:21:ff:df:38:8d:a1:32:79:eb:15:f7:54:6e:29:99:
6e:fe:d4:2c:18:fe:ef:82:b8:d7:ae:c2:13:b2:1b:8c:7c:97:
32:b4:23:85:ab:7a:33:0f:59:cb:68:33:28:88:e0:72:23:56:
ea:d5:a4:65:a5:b5:95:46:69:ba:91:f3:e2:10:c8:96:dd:98:
c8:75:dc:13:53:18:e3:2a:36:49:72:89:3c:78:fd:a8:1a:77:
c9:9f:d5:50:05:94:e7:93:26:c1:48:d5:89:9b:7f:2f:72:60:
9f:67:05:43:bc:14:87:d8:e9:bc:26:01:c4:87:9d:82:9a:0d:
05:94:04:0d:ed:28:3c:2b:c3:ee:9f:bb:b8:62:64:26:6f:4c:
00:87:07:5e:3e:3d:4b:38:1a:ea:da:cc:cc:b6:b3:34:49:97:
9c:27:ac:fc:94:f0:d4:05:a7:4f:ad:65:a8:36:49:6c:24:91:
7f:11:30:62:c3:d2:fc:11:db:e2:2b:9e:05:a9:9b:85:a5:55:
3d:ba:7e:e4:1d:0f:1e:65:c6:96:60:d0:db:fb:05:d0:e5:5c:
e6:7d:a4:ad
-----BEGIN CERTIFICATE-----
MIIEADCCAuigAwIBAgIIVvDA/yqACnYwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0xOTA0MDQxNzA2NTFaFw0yMDA0MDMxNzA2NTFaMBkx
FzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAo/vmpgrc8vW6gJM6RD/4M8+SYBU0iogxvN7Jt58A2pQxPQOlvuNx
Sd+ZpACh/VDa7UM3tP9tyx+3Ok46zO2cKr2d2R69ilU41UM0uy485m2bSdjuFPpR
moYKVPRqhVYsNL9UJKYyJYqeV4YSRtQ7gk41qkwlBQAyhZueBlZw3G2dgP0LWhaV
yxrM1jUvQttDjZD9MvF8Dm5RnGXM1YKQ8U0AW+SljzMZqW7yszIyfT+thHdYqB63
qtfjYBS4mx4nx8adEOoZ3DPjTCc7ExRZU1sF+cI0BXUY0Ehdl8KZYMByd8AKwNX8
jAPhH7Sv9h9qFT/VgBVwx55cz4GcmgmpKQIDAQABo4IBTjCCAUowDgYDVR0PAQH/
BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMIIBIQYDVR0RBIIBGDCCARSCBW5v
ZGUxggprdWJlcm5ldGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMu
ZGVmYXVsdC5zdmOCJGt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2Nh
bIIKa3ViZXJuZXRlc4ISa3ViZXJuZXRlcy5kZWZhdWx0ghZrdWJlcm5ldGVzLmRl
ZmF1bHQuc3ZjgiRrdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWyC
CWxvY2FsaG9zdIIFbm9kZTGCHzE3Mi4zMS45LjE2MWFwaS1rOHMuZXhhbXBsZS5j
b22HBArpAAGHBKwfCaGHBKwfCaGHBArpAAGHBH8AAAEwDQYJKoZIhvcNAQELBQAD
ggEBAMuTCki5Jg8XNzMNmFMPymDjeX/oysxPoOgQkfbkgcmwHdX7E+A9RCH/3ziN
oTJ56xX3VG4pmW7+1CwY/u+CuNeuwhOyG4x8lzK0I4WrejMPWctoMyiI4HIjVurV
pGWltZVGabqR8+IQyJbdmMh13BNTGOMqNklyiTx4/agad8mf1VAFlOeTJsFI1Ymb
fy9yYJ9nBUO8FIfY6bwmAcSHnYKaDQWUBA3tKDwrw+6fu7hiZCZvTACHB14+PUs4
GurazMy2szRJl5wnrPyU8NQFp0+tZag2SWwkkX8RMGLD0vwR2+IrngWpm4WlVT26
fuQdDx5lxpZg0Nv7BdDlXOZ9pK0=
-----END CERTIFICATE-----
@ykfq I can confirm @valerius257's findings; the extra-vars don't help.
Only modifying roles/kubernetes/master/tasks/kubeadm-setup.yml resolves the issue.
@zzzuzik Your task failed because no pem file was generated; I've found the reason and posted it here: https://github.com/kubernetes-sigs/kubespray/issues/2343#issuecomment-479359624
@cbluth @valerius257 supplementary_addresses_in_ssl_keys should be the right way to do this. The issue you noticed with concatenated names like "172.31.9.161api-k8s.example.com" should be fixed in master and the 2.8 branch by #4435 and #4478 respectively (and 2.9 I think?).
Setting supplementary_addresses_in_ssl_keys after running reset.yml did the trick in my case with kubespray 2.10.1.
My original issue is solved, thanks @rptaylor.