Kubeadm: apiserver-etcd-client certificates differ

Created on 3 Aug 2018 · 10 comments · Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
    Ubuntu 16.04
  • Kernel (e.g. uname -a):
    Linux 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

With an external etcd cluster, each node gets a unique apiserver-etcd-client.crt/key, as created by:
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml

as described in documentation step 4:
https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/

yet a single cert/key pair is expected to be copied to each master, as described in the HA cluster setup guide:
https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd
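The observation above (one cert per host, even with the same subject) can be confirmed by comparing fingerprints. A self-contained sketch, using two locally generated stand-in certs rather than the real kubeadm output:

```shell
# Two certs issued independently with the same subject still differ,
# which is what happens when each etcd host runs its own certs phase.
dir=$(mktemp -d)
for h in host0 host1; do
  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=kube-apiserver-etcd-client" \
    -keyout "$dir/$h.key" -out "$dir/$h.crt" 2>/dev/null
done

# Same subject, different key material => different fingerprints
openssl x509 -noout -fingerprint -sha256 -in "$dir/host0.crt"
openssl x509 -noout -fingerprint -sha256 -in "$dir/host1.crt"
```

On a real cluster the same comparison against the files under each host's pki directory shows whether the certs were generated independently or copied.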

What you expected to happen?

A single apiserver-etcd-client.crt/key pair should be created and shared across the etcd and master nodes.

How to reproduce it (as minimally and precisely as possible)?

Reproduce as described above

Anything else we need to know?

area/HA kind/bug kind/documentation priority/important-longterm

All 10 comments

So this is part of the HA cert-copying workflow we've talked about, but it will likely still be manual for a while.

/assign @fabriziopandini @timothysc

OK, thanks, I wasn't sure if I was missing something. Perhaps the documentation should be updated to be less confusing, i.e. only create apiserver-etcd-client once, like the initial step that creates the shared ca.crt/key.

it's ok to generate the client cert per master; it just needs to be issued by the etcd-ca

we should decide which way is preferred and update the docs to be consistent.

it's odd that we document a shared apiserver-etcd-client but individual etcd-healthcheck-client certs.
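The per-master approach is easy to sketch with openssl: each master gets its own keypair, and the only hard requirement is that every client cert chains to the one etcd CA. Subjects and paths below are illustrative stand-ins; kubeadm's certs phase does the equivalent:

```shell
dir=$(mktemp -d)

# Stand-in for the etcd CA (kubeadm keeps this at /etc/kubernetes/pki/etcd/ca.{crt,key})
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=etcd-ca" -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# One client keypair per master, each signed by the same etcd CA
for m in master0 master1; do
  openssl req -newkey rsa:2048 -nodes \
    -subj "/O=system:masters/CN=kube-apiserver-etcd-client" \
    -keyout "$dir/$m.key" -out "$dir/$m.csr" 2>/dev/null
  openssl x509 -req -in "$dir/$m.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
    -CAcreateserial -days 365 -out "$dir/$m.crt" 2>/dev/null
done

# Distinct per-master certs both verify against the single etcd CA
openssl verify -CAfile "$dir/ca.crt" "$dir/master0.crt" "$dir/master1.crt"
```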

A single cert is easier to manage IMO.
There's also no host-specific information in these identities IIRC, which probably violates some RFC, since these certs will all have the same CN.
( I think we do this exact thing for the peer and server certs though, so maybe that's not a problem. )

The con of a single shared identity for these purposes is that revocation is less granular, but we may require code updates to support hostnames being baked into the certs.
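The shared-CN point is easy to see by inspecting such a cert: there is only a subject, and no host-specific SANs, so two client certs from different masters are indistinguishable by identity alone. The subject values below mirror what kubeadm uses, but the cert itself is a locally generated stand-in:

```shell
# Stand-in client cert with a kubeadm-style subject; no SANs are requested,
# so nothing in the identity distinguishes one master's cert from another's.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/O=system:masters/CN=kube-apiserver-etcd-client" \
  -keyout "$dir/client.key" -out "$dir/client.crt" 2>/dev/null

# Prints the subject; a full text dump would show no Subject Alternative Name section
openssl x509 -noout -subject -in "$dir/client.crt"
```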

arguably a private key should not be reused in the case of a client connection like this. Each client that uses it as a form of authentication should have its own keypair.

From the mgmt perspective, while it's certainly easier to issue a single client cert with the etcd-ca for all apiservers, it's not best practice.

We will have to consider how this certificate will be rotated at some point as well. Is it in scope for kubeadm to issue a cert per apiserver if the etcd-ca key pair is available?

expanding the SAN list for peer and server listening certs to include all etcd nodes is also a bit strange.

I think you're right.
We should update the docs to stop implying that client certs are copied between masters.


Different topic:

We will have to consider how this certificate will be rotated at some point as well. Is it in scope for kubeadm to issue a cert per apiserver if the etcd-ca key pair is available?

kubeadm does issue new cert pairs on upgrade if the existing certs are old enough.
We've thrown around several ideas in regard to cert rotations and ca renewal. Self-hosting was supposed to resolve a significant portion of this management issue.
Recently @timothysc has been proposing a "sentinel" daemon for managing nodes.

I'm not up to date on whether a clear solution has been decided on yet.
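Whatever rotation mechanism is settled on, checking how close a cert is to expiry is easy to script with openssl's `-checkend`. The 30-day window below is arbitrary for illustration, not kubeadm's actual renewal threshold, and the cert is a locally generated stand-in:

```shell
# -checkend N exits 0 if the cert is still valid N seconds from now,
# non-zero if it will have expired by then.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=kube-apiserver-etcd-client" \
  -keyout "$dir/c.key" -out "$dir/c.crt" 2>/dev/null

# Succeeds (exit 0): a 365-day cert won't expire within 30 days
openssl x509 -checkend $((30*24*3600)) -noout -in "$dir/c.crt"
```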

Can we punt this out of the milestone? I don't have the bandwidth at the moment. If someone else wants to make the suggested changes though that would be 👍

So it seems the actionable item is to update https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd to generate a unique cert/key pair per node.

I don't believe this issue exists anymore with the latest experimental control plane join behavior. If there is an issue please feel free to reopen.
