Kubespray: Cannot connect to k8s cluster from outside

Created on 20 Aug 2020 · 7 comments · Source: kubernetes-sigs/kubespray

I copied /root/.kube/config from the master node to another machine outside the k8s cluster

and changed
server: https://192.168.0.148:6443
to the public IP
server: https://139.9.x.x:6443

When I run kubectl get node, I get this error:
Unable to connect to the server: x509: certificate is valid for 10.233.0.1, 192.168.0.148, 192.168.0.148, 10.233.0.1, 127.0.0.1, 192.168.0.148, 192.168.0.212, not 139.9.x.x

How can I fix it?
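[Editor's note] The error means the public IP is not among the certificate's Subject Alternative Names (SANs). You can see which SANs a certificate carries with openssl. The sketch below uses a throwaway self-signed certificate just to demonstrate the check (on a master you would point openssl at /etc/kubernetes/pki/apiserver.crt instead); 203.0.113.10 is a documentation IP standing in for the public address:

```shell
# Generate a throwaway cert with two IP SANs, only to demonstrate the check;
# 203.0.113.10 is a placeholder for the public IP (requires OpenSSL >= 1.1.1).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:192.168.0.148,IP:203.0.113.10"

# Print the SANs; if the IP in your kubeconfig's `server:` line is not listed
# here, kubectl fails with exactly the x509 error above.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```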

kind/bug lifecycle/stale


All 7 comments

You need to set the Ansible variable "supplementary_addresses_in_ssl_keys" to indicate the IP address you want to use.
Example:
supplementary_addresses_in_ssl_keys: [ "139.9.x.x" ]

If you want to point a domain at that IP address, add the domain as well.
Example:
supplementary_addresses_in_ssl_keys: [ "139.9.x.x", "cluster.example.com" ]
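[Editor's note] In a Kubespray inventory this variable typically lives in the cluster group vars; the exact path below is an assumption based on the common layout and may differ between Kubespray versions:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# (path may differ; older trees use group_vars/k8s-cluster/k8s-cluster.yml)
supplementary_addresses_in_ssl_keys: ["139.9.x.x", "cluster.example.com"]
```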

how to fix this issue after cluster installation?

how to fix this issue after cluster installation?

You can edit /etc/kubernetes/kubeadm-config.yaml and add 139.9.x.x, like this:

certSANs:
  - xxxxxxx
  - 139.9.x.x

then:
(1) back up /etc/kubernetes/pki/apiserver.crt and apiserver.key, e.g. mv /etc/kubernetes/pki/apiserver.* ../
(2) kubeadm init phase certs apiserver --config={{ kube_config_dir }}/kubeadm-config.yaml
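[Editor's note] For context, in a kubeadm ClusterConfiguration the certSANs list sits under the apiServer section, so the edited file should look roughly like this (a sketch with surrounding fields trimmed; the API version matches kubeadm releases current as of this thread):

```yaml
# /etc/kubernetes/kubeadm-config.yaml (excerpt)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "192.168.0.148"   # existing internal address
    - "139.9.x.x"       # public IP to add
```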

@bl0m1

How do I set the Ansible variable

supplementary_addresses_in_ssl_keys: [ "139.9.x.x" ]

when I run

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

to install?


When running the playbook the config will be written, but you need to manually run kubeadm on all masters after Ansible has completed. The command can be seen in Leo's comment above ^^ (step 2).

@bl0m1

But how do I set up 139.9.x.x before running the playbook?
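[Editor's note] One way to do this, an assumption from general Ansible usage rather than something stated in the thread, is to pass the variable as extra vars on the same command line instead of editing group_vars first. This cannot run standalone; it requires a Kubespray checkout and a reachable inventory:

```shell
# Hypothetical invocation: -e with inline JSON sets the variable for this run,
# so the cert SANs are correct the first time the playbook generates them.
ansible-playbook -i inventory/mycluster/hosts.yml \
  --become --become-user=root \
  -e '{"supplementary_addresses_in_ssl_keys": ["139.9.x.x"]}' \
  cluster.yml
```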

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
