What keywords did you search in kubeadm issues before filing this one?: 'kubeadm alpha certs', 'certs renew', 'kubeadm alpha'

kubeadm version (use kubeadm version): 1.16.4

Environment:
- Kubernetes version (use kubectl version): 1.16.4
- Kernel (e.g. uname -a): 3.10.0-1062.4.3.el7.x86_64

What happened?
Ran kubeadm alpha certs renew <item> one by one for all certificates and conf files with the new CA in a separate --cert-dir.

What you expected to happen?
The conf files must be updated with the new CA (base64) content instead of the old CA.

How to reproduce it (as minimally and precisely as possible)?
Run kubeadm alpha certs renew <item> one by one for all certificates and conf files with the new CA along with --cert-dir=< temp dir >. Then compare the embedded CA with the one on disk:
awk '/certificate-authority-data:/ {print $2}' admin.conf | base64 -d | openssl x509 -noout -dates
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -dates

Anything else we need to know?
Not related to #1518; #1361 has some references but no solution.
The problem is that renewing the certs works as long as one is not updating the CA itself, not the other way round.
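For anyone reproducing this, a minimal sketch of the sequence from the report above (the temp directory name /tmp/newpki is just an example, not a requirement):

```bash
# Renew one item against a temp cert dir, as described in the report.
sudo kubeadm alpha certs renew apiserver --cert-dir=/tmp/newpki
sudo kubeadm alpha certs renew admin.conf --cert-dir=/tmp/newpki

# Compare the CA embedded in the renewed kubeconfig with the new CA on disk.
sudo awk '/certificate-authority-data:/ {print $2}' /etc/kubernetes/admin.conf \
  | base64 -d | openssl x509 -noout -subject -dates
openssl x509 -in /tmp/newpki/ca.crt -noout -subject -dates
```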
thanks for the report. i will try to reproduce the problem.
@abhiTamrakar
here is a PR for this:
https://github.com/kubernetes/kubernetes/pull/88052
some notes:
- the PR might not pass with consensus from the maintainers because CA rotation is technically _not supported_ by kubeadm.
- by rotating CA you can put the stability of your cluster at risk and kubeadm cannot be to blame.
- service account public/private keys are also signed for 10 years and you might want to rotate those too. unfortunately this can end up more complicated.
- the file kubelet.conf is not managed by kubeadm (but by the kubelet instead). the CA in there is embedded right after TLS bootstrap and it will not change even if the KCM now knows about the new signing CA, and even after a kubelet restart. this means you have to manually update this kubeconfig file.

cc @fabriziopandini
/lifecycle active
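To make the last note concrete, a hedged sketch of the manual kubelet.conf refresh (assuming the rotated CA is already at /etc/kubernetes/pki/ca.crt and that the file embeds certificate-authority-data, which is the kubeadm default):

```bash
# Read the cluster name from kubelet.conf, re-embed the rotated CA, then
# restart the kubelet so it picks up the change.
CLUSTER=$(sudo kubectl config view --kubeconfig=/etc/kubernetes/kubelet.conf \
  -o jsonpath='{.clusters[0].name}')
sudo kubectl config set-cluster "${CLUSTER}" \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true
sudo systemctl restart kubelet
```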
Thanks @neolit123, I agree with some of the points; my opinion is inline.
> @abhiTamrakar
> here is a PR for this:
> kubernetes/kubernetes#88052
> some notes:
> - the PR might not pass with consensus from the maintainers because CA rotation is technically _not supported_ by kubeadm.
_CA rotation has to be done at some point, even if we consider 10 years. From a security point of view we have a mandate to keep intermediate CA validity at <= 1 year; it becomes a hard stop for us. This is the kind of feature even most of the managed providers can't provide. I am not sure how many such organizations there are, but one day everyone has to rotate the CA at least once._
_I guess it won't hurt as long as kubernetes updates the CA with the content of the ca.crt file in the default certificate directory._
_There should also be documentation clear enough on this, though._
> - by rotating CA you can put the stability of your cluster at risk and kubeadm cannot be to blame.
Agreed, kubeadm or any kubernetes component cannot be blamed, but there is also a calculated security risk in not rotating the CA after some good amount of time.
> - service account public/private keys are also signed for 10 years and you might want to rotate those too. unfortunately this can end up more complicated.
I guess as long as they are separate entities altogether from the CA certificate, we should still be good, right?
I was referring to this: https://github.com/kubernetes/kubernetes/blob/d19b7242aaf02d5f0cec52638fb746c1007dd17f/cmd/kubeadm/app/phases/certs/certs.go#L69
> - the file kubelet.conf is not managed by kubeadm (but by the kubelet instead). the CA in there is embedded right after TLS bootstrap and it will not change even if the KCM now knows about the new signing CA, and even after a kubelet restart. this means you have to manually update this kubeconfig file.
Yes, we have to handle that part in our automation.
cc @fabriziopandini
> CA rotation has to be done at some point, even if we consider 10 years. From a security point of view we have a mandate to keep intermediate CA validity at <= 1 year; it becomes a hard stop for us. This is the kind of feature even most of the managed providers can't provide. I am not sure how many such organizations there are, but one day everyone has to rotate the CA at least once.
OpenShift does this the way it should be done, sequentially (as it is written by the folks who wrote the auth in k8s). it's using the operator pattern and it's quite complex.
there was a talk about it at KubeCon some time ago.
maybe it was this one: https://github.com/openshift/service-ca-operator
> I guess it won't hurt as long as kubernetes updates the CA with the content of the ca.crt file in the default certificate directory.
kubernetes itself is quite unfriendly to CA rotation. the sequence of the OpenShift operator was quite strict in terms of how/when components are made aware of a rotated CA.
> There should also be documentation clear enough on this, though.
+1, this seems like something that SIG Auth should document and not the kubeadm maintainers.
> Agreed, kubeadm or any kubernetes component cannot be blamed, but there is also a calculated security risk in not rotating the CA after some good amount of time.
that is true, the only reason we don't recommend it to users is that they can destroy their cluster and/or have long downtimes due to complications.
quite frankly, if someone has the infrastructure at their disposal, they can export all workloads and data store, create a new cluster and import the old data. then scrap the old cluster.
the "cluster-replace" upgrade pattern is the safest one!
> I guess as long as they are separate entities altogether from the CA certificate, we should still be good, right?
the same argument for CA rotation applies to service account key pair rotation.
the private key signs _all_ service account tokens in the cluster:
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller
which makes it "difficult" to rotate, to say the least.
https://github.com/kubernetes/kubernetes/issues/20165
@neolit123 any place I can raise an issue to request some documentation around CA rotation?
I checked kubernetes-sigs but I'm unsure that's the right place.
Worth mentioning that this PR would actually help people who want to rotate the CA on their own; of course there is a risk, but having correct steps in place will help. I am trying on my own and have seen good progress. Once I am sure those steps work, I might as well contribute them to the documentation and send them to the kubernetes maintainers for technical review.
the documentation should reside in the k8s.io website. the repository that holds it is https://github.com/kubernetes/website.
e.g. on this page:
https://kubernetes.io/docs/tasks/tls/certificate-rotation/
you can create a tracking issue there, tag it with /sig auth and link to this issue.
to get feedback from SIG Auth you can try discussing this in their meeting or slack channel:
https://github.com/kubernetes/community/tree/master/sig-auth
the documentation in question should be deployer (e.g. kubeadm) agnostic.
@abhiTamrakar
a question, did you have to manually update the cluster-info config map in the kube-public namespace to include the new CA PEM?
kubectl get cm -n kube-public cluster-info -o yaml
@neolit123 Yes, I patched it.
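For reference, one way the cluster-info patch can be scripted (a sketch; it assumes the new CA bundle already sits at the default /etc/kubernetes/pki/ca.crt path):

```bash
# Replace the certificate-authority-data embedded in the cluster-info
# ConfigMap's kubeconfig stub with the rotated CA.
NEW_CA=$(base64 -w0 /etc/kubernetes/pki/ca.crt)
kubectl get cm -n kube-public cluster-info -o yaml \
  | sed "s|certificate-authority-data: .*|certificate-authority-data: ${NEW_CA}|" \
  | kubectl apply -f -
```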
Just because I don't see it mentioned anywhere above: I think the key step in smoothly rotating the cluster CA is to append both the old and the new CA certs to /etc/kubernetes/pki/ca.crt. Unfortunately this also breaks lots of kubeadm, which assumes this file only contains a single cert.
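A minimal sketch of the append step described above (the new-ca.crt name is an assumption; the caveat about kubeadm expecting a single cert still applies):

```bash
# Keep the old CA trusted while clients migrate by serving a two-cert bundle.
sudo cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.crt.old
cat /etc/kubernetes/pki/ca.crt.old new-ca.crt \
  | sudo tee /etc/kubernetes/pki/ca.crt >/dev/null
```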
@anguslees
technically, the signing cert/key for a KCM trust bundle can be made distinct with an extra flag for the KCM that defaults to a sane order value that the user can control (or maybe the user can also tell it to auto-detect).
i think the cert management process is already complicated enough on the user side to have a separate CA file for the KCM that is not a bundle... https://github.com/kubernetes/kubeadm/issues/1350 feels like more of a KCM issue that has to be discussed with SIG Auth first.
somebody did send a PR to support CA bundles across kubeadm here:
https://github.com/kubernetes/kubernetes/pull/86833
but we hit the same issue.
I don't know how correct I am here; I'm still reading through the codebase to understand what can be done, because this is a hard stop for us.
As @anguslees mentioned, I did find that ca.key (the private key) can hold multiple private keys, but again the issue in #1350 seems to be getting in the way. I also saw a 2-3 month old PR raised by someone handling CA rotation, but I lost that link.
Ideally, for people who want to rotate the CA, there should be a switch (something like --rotate-ca-certificates) which handles updating all entities with the new CA in a rolling fashion.
@neolit123 is there a possibility of getting it backported to 1.16.x and 1.17.x?
Also, the ca.crt in service account tokens doesn't get updated either; is there any way to do that, or does it have to be manual for now?
Hi. Sadly no, we only backport critical fixes (e.g. panics and security issues).
Also, yes, the service accounts need a manual update AFAIK. SIG Auth should confirm.
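For checking what a given service account token secret still embeds before updating it manually, something like this works (the default namespace and service account are only examples):

```bash
# Inspect the ca.crt carried inside a service account token secret.
SECRET=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "${SECRET}" -o jsonpath='{.data.ca\.crt}' \
  | base64 -d | openssl x509 -noout -subject -enddate
```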
I experienced this recently with a client who was locked out of their cluster despite expecting all certs to renew. They were able to get back in by using /etc/kubernetes/admin.conf, which was updated by the command. Hope that helps anyone in dire need until they can regenerate (manually, it seems).
Hi @neolit123 @abhiTamrakar
Following the steps for CA rotation, I have created a new CA and am at this step: "Run kubeadm alpha certs renew <item> one by one for all certificates and conf files with the new CA along with --cert-dir=< temp dir >."
I'm running the command like kubeadm alpha certs renew apiserver --cert-dir = /tmp/pki/ to create the apiserver certificate. But the certificate is not generated in the /tmp/pki folder; instead it is generated in the default (/etc/kubernetes/pki) folder, signed with the old CA. Please suggest if I'm doing anything wrong.
@ravirajshankarl kubeadm uses the existing CA certificates to sign the underlying certs, which is why you are seeing certs signed by the old CA.
If you want to sign using the new CA, the catch is to replace the existing old CA with the new one; but before doing that you might want to take all necessary backups and confirm the control plane components are running as static pods.
Please note that CA rotation is a tedious process; do not attempt it on a live cluster unless you understand all aspects.
Otherwise, the recommended way is to follow https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/.
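A condensed, hedged outline of that swap (new-ca.crt/new-ca.key are assumed file names; this is not a replacement for the full procedure in the linked document):

```bash
# Back up the whole /etc/kubernetes tree, drop the new CA pair into the
# default pki dir, then renew the leaf certificates against it.
sudo cp -a /etc/kubernetes /etc/kubernetes.bak
sudo cp new-ca.crt /etc/kubernetes/pki/ca.crt
sudo cp new-ca.key /etc/kubernetes/pki/ca.key
sudo kubeadm alpha certs renew all
```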
@abhiTamrakar
Thank you very much for the response. I am confused by this part of it:
"If you want to sign using the new CA, the catch is to replace the existing old CA with the new one"
I have created the new CA and placed it in a tmp folder to generate the certificates. Do I need to keep the new CA in the /etc/kubernetes/pki folder as well?
Also, my signed certificates are not going to the mentioned folder (/tmp/pki) with kubeadm alpha certs renew apiserver --cert-dir = /tmp/pki/. Can you please clarify the above doubts? Thanks.
Hi, sorry for being late. Answered inline.

> I have created the new CA and placed it in a tmp folder to generate the certificates. Do I need to keep the new CA in the /etc/kubernetes/pki folder as well?
Yes.
> Also, my signed certificates are not going to the mentioned folder (/tmp/pki) with kubeadm alpha certs renew apiserver --cert-dir = /tmp/pki/. Can you please clarify the above doubts? Thanks.
I do not remember this being an issue. Might have to try it. If you think it is an issue you might want to open one.
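If it helps with debugging, a quick way to see where the renewed certificate actually landed and which CA issued it (paths taken from the commands above):

```bash
# Check the issuer and expiry of the on-disk apiserver cert, and whether the
# temp dir was written to at all.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -issuer -enddate
ls -l /tmp/pki/ 2>/dev/null || echo "/tmp/pki does not exist"
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt
```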