kubeadm: join with master DNS name writes IP to kubelet config

Created on 24 Aug 2017 · 24 comments · Source: kubernetes/kubeadm

I'm using kubeadm in a corporate cloud environment with enforced node recycling. One of the challenges of using kubeadm is that join will always use the IP address of the master instead of a passed-in DNS name (even when the master cert includes the DNS name as a subject alternative name). The join ends up writing the master IP to kubelet.conf, which complicates rotation/restoration of the master. There is also the symmetric-secret and CA-hash work being done in
https://docs.google.com/document/d/1SP4P7LJWSA8vUXj27UvKdVEdhpo5Fp0QHNo4TXvLQbw/edit?ts=5971498a#heading=h.5bejulk96xxi

Given that, is there any reason kubeadm couldn't support using the actual value passed (IP or DNS, per the user's choice) instead of forcing IP usage?

kubeadm version - 1.7.4

priority/backlog triaged

All 24 comments

related #338

@timothysc @mattmoyer ^

+1. It seems reasonable to allow a hostname and validate the certificate against that hostname rather than pre-resolving it to an IP address.

I was confused because I thought this already worked, and I just tested to confirm and it seems to:

# on my master node
$ kubeadm init --apiserver-cert-extra-sans k8s-api.myservice.foo.local
[...]
You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 7bb4c3.25fe05a25378e218 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:19402aab9dd795f0bfb74e1a7f2dcf215392222b2020510bdab5e87dda19526a
# on my worker node
$ kubeadm join --token 7bb4c3.25fe05a25378e218 k8s-api.myservice.foo.local:6443 --discovery-token-ca-cert-hash sha256:19402aab9dd795f0bfb74e1a7f2dcf215392222b2020510bdab5e87dda19526a
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "k8s-api.myservice.foo.local:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s-api.myservice.foo.local:6443"
[discovery] Requesting info from "https://k8s-api.myservice.foo.local:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-api.myservice.foo.local:6443"
[discovery] Successfully established connection with API Server "k8s-api.myservice.foo.local:6443"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

In this case I have a (local) DNS entry for k8s-api.myservice.foo.local that resolves to my API server IP. The only missing thing right now is that there's no way to get kubeadm init to print out the right kubeadm join command with a hostname. However, if you substitute in the hostname yourself it should all work.

The API server TLS certificate common name/SANs will be validated against whatever name you pass to kubeadm join, so you need to make sure to add it with --apiserver-cert-extra-sans when you run kubeadm init.
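
To double-check which names the serving certificate actually contains before joining, something like the following should work (a sketch using openssl, assuming it is available on the node; the host and port are taken from the example above):

$ echo | openssl s_client -connect k8s-api.myservice.foo.local:6443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'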

The issue is that when I examine the contents of /etc/kubernetes/kubelet.conf, it contains the IP address of the master, not the DNS name that was configured. If we join with a DNS name, I would expect to see the DNS name here.

join

$ kubeadm join --token 5bb4c3.25fe05a25378e218 bdata-dev.example.com:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "bdata-dev.example.com:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://bdata-dev.example.com:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://bdata-dev.example.com:6443"
[discovery] Successfully established connection with API Server "bdata-dev.example.com:6443"
[bootstrap] Detected server version: v1.7.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

kubelet config file that gets generated

$ cat /etc/kubernetes/kubelet.conf | grep http
server: https://192.168.10.8:6443

Ah, I see what you mean. The initial kubeadm discovery process works, but then the bootstrap kubelet config (/etc/kubernetes/bootstrap-kubelet.conf) ends up incorrectly configured with the IP. I'll take a look at where this happens.
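
In the meantime, a manual workaround (a sketch only, assuming the file layout and addresses from the example above and a systemd-managed kubelet) is to rewrite the server field by hand and restart the kubelet:

$ sed -i 's|https://192.168.10.8:6443|https://bdata-dev.example.com:6443|' /etc/kubernetes/kubelet.conf
$ systemctl restart kubelet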

Fixing it there has a minor ripple effect on generating the API server pod's CLI params, which use netutil.ChooseBindAddress. I.e., the root reason, AFAICS, for the early rewrite of the configured value is its use in both the client config and the server bind address; separating the two uses allows a DNS entry for the client.
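
To make the two uses concrete (a sketch against the files kubeadm generates; the values are from the example above and the exact manifest contents vary by version), the client-side value lives in the generated kubeconfigs while the server-side value is a CLI param on the apiserver static pod:

# client-side use: the server URL in the generated kubeconfig
$ grep 'server:' /etc/kubernetes/kubelet.conf
    server: https://192.168.10.8:6443
# server-side use: the bind/advertise address in the apiserver manifest
$ grep 'advertise-address' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --advertise-address=192.168.10.8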

FWIW, I have fixes against the 1.7 release branch here: https://github.com/kapilt/kubernetes/tree/kubeadm-use-advertised

I assume I should rebase onto the 1.8 trunk; is this something that is appropriate for a PR to the extant release branch?

@kapilt great! We'll definitely want it rebased onto master/trunk for 1.8. Once we get a PR set up for 1.8 we can talk to @wojtek-t and others to decide if it's something we should cherry pick into the 1.7.x series.

@mattmoyer can we close this?

Running into this issue myself. 3 questions here:

  1. Will this be enhanced in a future release?
  2. What is the current workaround?
  3. Can I help with fixing, documenting or testing?

I just bootstrapped a master node on HypriotOS on a Raspberry Pi for my Pi cluster and can confirm that this is still happening on 1.8.

If you need any info from me, feel free to let me know. Below is the version info of the kubeadm used:

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}

Thanks for the great work!

I recreated my master with @kapilt's version of the command, passing in the apiserver cert param as above. Below is the output, if it helps.

HypriotOS/armv7: root@whitewalker in ~
$ kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-cert-extra-sans whitewalker --token-ttl 0
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [whitewalker kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local whitewalker] and IPs [10.96.0.1 192.168.1.40]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 80.021132 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node whitewalker as master by adding a label and a taint
[markmaster] Master whitewalker tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 375934.88791734e663b665
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 375934.88791734e663b665 192.168.1.40:6443 --discovery-token-ca-cert-hash sha256:f57426e7c384ea2d0a40baf1d66d31a08ba8525c843de9eba16727f15bbf49d5
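
(Per @mattmoyer's note above, the printed join command can be edited by hand to use the SAN hostname instead of the IP, e.g.:

  kubeadm join --token 375934.88791734e663b665 whitewalker:6443 --discovery-token-ca-cert-hash sha256:f57426e7c384ea2d0a40baf1d66d31a08ba8525c843de9eba16727f15bbf49d5

assuming whitewalker resolves from the worker. Discovery then validates against the hostname, though per this issue the generated kubelet.conf still ends up with the IP.)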

Also happening on 1.9.1.

FWIW, the mechanics underlying this issue also cause the master, on kubeadm init, to write its IP to config files on disk and to the ConfigMap it initializes.
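
A quick way to verify both places (a sketch; the ConfigMap name comes from the init output above):

$ grep 'server:' /etc/kubernetes/*.conf
$ kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i address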

/cc @craigtracey @chuckha

I'd like to take this on; I'll post updates as I progress.

@stevesloka - just poke me on review and I can help guide it through.

@timothysc Is the best place to get the DNS name from --apiserver-cert-extra-sans? What should happen if multiple SANs are specified? Use the first?

@stevesloka As someone who uses this heavily, I think it should be a configurable parameter. This means you can use things like ELB addresses.

Currently the advertise address is used, I believe. Maybe a flag like --apiserver-dns-name which automatically gets added to the extra SANs?
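
Purely to illustrate the proposal (the --apiserver-dns-name flag below is hypothetical and does not exist yet), the idea would be something like:

$ kubeadm init --apiserver-advertise-address 192.168.1.40 --apiserver-dns-name k8s-api.example.com

which would add k8s-api.example.com to the serving cert SANs (as --apiserver-cert-extra-sans does today) and also use it in the generated kubeconfigs and the printed join command.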

@jaxxstorm that sounds good to me, I like the idea of a new flag.

@jaxxstorm would it make sense to just use --apiserver-advertise-address? This way we don't need a new flag.

@stevesloka doesn't that get used as the listen address for the apiserver manifest? Would a hostname be acceptable there?

Yup, you are correct @jaxxstorm. I got ahead of myself and forgot the API server requires an IP address.
