Kops: Provide full secret rotation

Created on 30 Nov 2016 · 35 Comments · Source: kubernetes/kops

We should provide a way to rotate all our secrets: usernames, tokens, CA key, SSH key.

Feature Request · area/security · lifecycle/frozen


All 35 comments

@justinsb

Curious, do we want one command to rotate them all, or do we need individual execution paths for each of our secrets?

Should we also include a way to generate these secrets within kops automatically?

@kris-nova I am thinking that we do a rolling update of the cluster. All components need a restart and the kubeconfig is going to change. I am pretty sure kops already generates the secrets; we just need to be able to roll an update. Also, we need to be able to plug in to 3rd-party cert sources.

Secrets:

  • TLS Certs
  • Admin ssh key
  • Admin password

^ What am I missing??
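
For reference, kops already tracks most of these in its state store; a quick way to enumerate what would need rotating (a sketch, and the secret name "kube" for the admin password is the usual default but may differ per cluster):

# List every secret and keypair kops manages for the cluster
kops get secrets
# Show a single entry in plaintext, e.g. the admin basic-auth password
kops get secrets kube -oplaintext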

I was hoping kops could generate an ssh key for the user if they wanted one ad-hoc - do we do this yet?

Nope, we require an ssh key

Is this still on a roadmap somewhere?

Is there a workaround I can follow in the meantime? We need a procedure for rotating credentials.

At this point creating a new secret and applying a rolling update is your option.
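
For anyone looking for concrete commands, a minimal sketch of that workaround for the admin SSH key (the key path is an example; adjust the name and path for your cluster):

# Replace the admin SSH public key stored in the kops state store
kops delete secret sshpublickey admin
kops create secret sshpublickey admin -i ~/.ssh/new_id_rsa.pub
# Push the change and roll every instance so the new key takes effect
kops update cluster --yes
kops rolling-update cluster --yes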

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

/remove-lifecycle rotten
/reopen

Hi guys,

I'm opening this issue again, as we have the same challenge currently.
As the issue is already a bit old, I want to ask whether anything has happened in that direction by now.
I found a sketched procedure here: https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md but this one does not work without downtime.

Does anyone have ideas or experience on how to solve this?

Thanks a lot!

Chris

/reopen

@Christian-Schmid: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Does anyone have ideas or experience on how to solve this?

@Christian-Schmid Did you find any non-downtime solutions to this? We're looking at the same thing now.

On behalf of @Christian-Schmid

/reopen

@mikesplain: Reopened this issue.

In response to this:

On behalf of @Christian-Schmid

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Hi @philwhln
We're still looking for a "nice" solution to rotate the secrets.
One reason we want to do the rotation is that we want to control external access to the Kubernetes API.
One option we were evaluating was to put an nginx reverse proxy in front of the kube API.
That way we wouldn't really have to rotate the certificates and could still control external API access with other means of authentication.

But whether this workaround helps depends on your use case for rotating :-)

Hi @Christian-Schmid ,

In the short term, we're looking to rotate out the keys we used during early development of our clusters, as those were used by people who have since left the company. In the longer term, we see this as good practice. Interesting idea with the reverse proxy. Would this work?

We're not overly confident in https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md since it seems to have been written 18 months ago with little review or updates. That said, we're going to dig into it and test it out. @justinsb, I'm interested in your thoughts on this, since you wrote that doc and also opened this ticket :)

Regarding the rotate-secrets manual: we tested the described steps and it more or less worked as described, with a downtime of roughly 15 minutes (because we had to force a cluster update twice).
The only step where we had a problem was this line:

kops get secrets | grep ^Keypair | awk '{print $2}' | xargs -I {} kops delete secret keypair {}

which caused an error in the current kops version. But we could delete the PKI directly from the S3 bucket with the AWS CLI:
aws s3 rm s3://<your_bucket>.com/pki/issued --recursive
aws s3 rm s3://<your_bucket>.com/pki/private --recursive

Regarding the proxy: we haven't investigated further in that direction so far, due to time constraints...
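
Putting that together, the rough sequence looks like this (a sketch based on the rotate-secrets doc plus the S3 workaround above; the bucket path is a placeholder and exact flags may differ between kops versions):

# Remove the issued certificates and private keys directly from the state store
aws s3 rm s3://<your_bucket>.com/pki/issued --recursive
aws s3 rm s3://<your_bucket>.com/pki/private --recursive
# Let kops re-generate everything and force-roll the cluster (we had to roll twice)
kops update cluster --yes
kops rolling-update cluster --cloudonly --force --yes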

@Christian-Schmid, thanks for this info!

The only step where we had a problem was this line:

We hit the same problem and decided not to proceed. Good to know that deleting in S3 worked and you were able to complete the process. We had considered this too. Downtime is not great though :)

+1

@justinsb

It doesn't look like Kubernetes supports the use of multiple certs, which would be needed to make a zero-downtime rotation possible. I would appreciate it if you could point me to the SIG responsible for PKI, or point me to whether and how I could start working on this myself. This issue seems pretty critical, and judging by the response to it, waiting 10 years for someone to implement a solution doesn't seem like a workable option. If our secrets somehow get leaked, there is also no way to rotate the certificates unless we accept the hefty downtime.

@chrislovecnm

At this point creating a new secret and applying a rolling update is your option.

Could you explain what you mean by this? Do you mean the currently documented method that involves deleting all PKI-related data on S3, or something else?

https://github.com/kubernetes/kubeadm/issues/581
https://github.com/kubernetes/kubeadm/issues/1361

Maybe Kubernetes and kubeadm support renewal but lack the docs, certainly for HA master nodes, judging by the comments on the above-mentioned issues. Once those are addressed, maybe parts of the code can also be ported over from kubeadm.
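
For comparison, kubeadm does ship a renewal subcommand these days (a sketch; whether it is still under alpha depends on the kubeadm version, and kops does not expose an equivalent):

# Older kubeadm releases (roughly v1.15+):
kubeadm alpha certs renew all
# Newer releases (roughly v1.19+), where the command left alpha:
kubeadm certs renew all
# Afterwards, restart the control-plane components so they pick up the new certs.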

BTW @philwhln @Christian-Schmid for your use cases about external access, maybe the best approach is to use OIDC or something like AWS authenticator instead of x509. There's good support for these approaches already.

And one way to rotate the certificates with minimum downtime would be to spin up a warm standby cluster when the certificates are about to expire and move over the traffic to this other cluster.

use OIDC

@tushar00jain We already do use this, with dex (similar to https://thenewstack.io/kubernetes-single-sign-one-less-identity/), but this doesn't remove the need for Kubernetes to have certificates that need rotating.

move over the traffic to this other cluster.

This does seem like the only solution right now, but there's a cost to it, and it's something we don't think we should have to do.

but this doesn't remove the need for Kubernetes to have certificates that need rotating.

Yes, at the very least the downtime should be measured in seconds, if there is any at all, but the approach mentioned of deleting the pki folder on S3 is just not feasible.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Would using offline root CAs with long lifetimes, and then using intermediate and subordinate CAs for the cluster CA, help any in a zero-downtime rolling-update? It obviously wouldn't help if the root CA must be rotated, if any of the CAs must be revoked, or with username or token rotation. I'm curious if it would at least help in non-revocation forward rolling updates. I suppose this would assume that new nodes wouldn't be able to join the cluster until the new cluster CA certificates were added to the master nodes, and a new initial Kubelet client certificate was in place. SSH on the nodes could trust certificates signed by the root CA chain.
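
If anyone wants to experiment with that layout, here is a minimal openssl sketch of an offline root plus a cluster-level intermediate (file names and lifetimes are purely illustrative; kops does not wire this up for you today, and nodes would need the full chain in their trust bundle):

# Offline root CA with a long lifetime; keep the key off the cluster
openssl genrsa -out root-ca.key 4096
openssl req -x509 -new -key root-ca.key -sha256 -days 7300 \
  -subj "/CN=offline-root-ca" -out root-ca.crt
# Intermediate CA used as the cluster CA; shorter lifetime, re-signed on rotation
openssl genrsa -out cluster-ca.key 4096
openssl req -new -key cluster-ca.key -subj "/CN=cluster-intermediate-ca" -out cluster-ca.csr
openssl x509 -req -in cluster-ca.csr -CA root-ca.crt -CAkey root-ca.key -CAcreateserial \
  -days 730 -sha256 \
  -extfile <(printf "basicConstraints=critical,CA:true\nkeyUsage=critical,keyCertSign,cRLSign") \
  -out cluster-ca.crt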

/remove-lifecycle stale

I'm very interested in this feature as well. We should be able to rotate secrets without downtime.

I'm trying to follow https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md and ran into an issue with the etcd v3 cluster; it is now reporting the following:

2019-12-14 15:44:51.789763 I | embed: rejected connection from "172.20.51.200:57144" (error "remote error: tls: bad certificate", ServerName "etcd-events-2.internal.<redacted>")
2019-12-14 15:44:51.793287 I | embed: rejected connection from "172.20.51.200:57148" (error "remote error: tls: bad certificate", ServerName "etcd-events-2.internal.<redacted>")
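
For anyone hitting the same error, a quick way to check which CA issued the on-disk etcd certs (a sketch; the exact paths depend on the etcd-manager volume layout):

# On a master, print the issuer and expiry of the first etcd server cert found under /mnt
sudo find /mnt/ -name server.crt | head -1 | xargs -I {} sudo openssl x509 -in {} -noout -issuer -enddate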

A section needs to be added to the doc https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md:

After changing the etcd v3 CA, you need to trigger issuing new certs on the masters.

Log in to each master via SSH and run:

sudo find /mnt/ -name 'server.*' | xargs -I {} sudo rm {}
sudo find /mnt/ -name 'me.*' | xargs -I {} sudo rm {}

This erases the old peer and client certificates. Then roll the masters again, and on startup etcd will issue a valid certificate from the new CA.
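
For completeness, "roll the masters" can be done per instance group with something like this (a sketch; <master-ig-name> is a placeholder, and --cloudonly --force mirror what the rotate-secrets doc uses to force replacement even when validation fails):

# Repeat for each master instance group; on boot etcd re-issues certs from the new CA
kops rolling-update cluster --instance-group <master-ig-name> --cloudonly --force --yes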

This erases the old peer and client certificates. Then roll the masters again, and on startup etcd will issue a valid certificate from the new CA.

@kuzaxak Have you gotten this to work? I've tried doing this manually, and it does not seem to be so simple. In our setup, we are providing our own custom CAs for everything by uploading them to the kops state bucket, and we set them up to be rotated every 6 months. What I did was:

  • Replace the Etcd CAs
  • Allow etcd-manager to mount the volumes (with the old certs)
  • Delete the existing client/server/peer certs on the etcd volumes
  • Restart etcd-manager to force certificate issuance

The problem is that after doing this, peer authentication still seems to be broken. After a few hours of debugging I was pretty much stumped. I haven't had any more time to debug that since.

Basically this has forced us to use the current strategy of replacing clusters entirely when their CAs expire, which is far from ideal. I was curious if anyone else had encountered this issue.

Updating the docs in #8948

I stumbled on this issue and the docs after running into certificate problems with etcd after a cluster upgrade. I just have to say that I'm very thankful I found them and every single line worked as expected.

Big thank you for writing these docs! :heart:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale
/lifecycle frozen
