Kops: Add support for multiple ssh keys

Created on 12 Apr 2017  ·  34 Comments  ·  Source: kubernetes/kops

It would be a great addition to be able to configure more than one public key for the nodes brought up by kops. The current setup has a single secret containing the admin SSH key; it would be great to be able to add new sshpublickey secrets under different names and have kops append them to authorized_keys during node installation.
e.g. create a new key with `kops create secret --name <clustername> sshpublickey <username>`, and add it through the EC2 user-data scripts, while keeping the admin key as the "official" key in the AWS console.
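As a rough illustration of what the request has in mind, a user-data fragment that appends extra keys might look like the sketch below. This is not current kops behaviour; the S3 location, login user, and file layout are all assumptions.

```bash
#!/bin/bash
# Illustrative sketch only -- not something kops does today.
# Assumes the extra public keys were uploaded somewhere the node can read
# (the S3 path below is hypothetical) and that the login user is "admin".
set -euo pipefail

EXTRA_KEYS_URI="s3://my-kops-state-store/extra-ssh-keys"   # hypothetical location
AUTHORIZED_KEYS="/home/admin/.ssh/authorized_keys"

# Append every additional public key to the node's authorized_keys.
aws s3 cp "${EXTRA_KEYS_URI}" - | tee -a "${AUTHORIZED_KEYS}" > /dev/null
```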

lifecycle/rotten

Most helpful comment

Still interested in this.
/remove-lifecycle stale

All 34 comments

@bcorijn This is a great feature indeed.

I am currently exploring kubernetes and have set up a deployment on azure container service.

One thing I am trying to figure out is how to add another SSH public key so that a colleague can administer it as well as me (it automatically added mine during setup).

Can I assume from this issue that it is not possible to do this yet?

Any workaround for this?

bhack, there is no workaround for this at this time, because of the way AWS works: an instance is tied to a single SSH key. The only solution is to rotate the key using the kops secret command, which will replace the old instances with new ones that use the new SSH key.
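For reference, the rotation flow described here follows roughly these commands from the kops SSH documentation; replace <clustername> and the key path with your own, and note that this swaps the key rather than adding a second one.

```bash
# Rotate the single admin key for the whole cluster.
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --name <clustername> --yes          # reconfigure the launch configurations
kops rolling-update cluster --name <clustername> --yes  # roll the instances so they pick up the new key
```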


Why? Is it not simply a matter of adding the SSH key to the authorized_keys file on the node?

That will work, but you have to do this manually every time you add a new node, or whenever a rolling update replaces existing nodes with new ones.
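A one-off push to the nodes that exist right now might look like the sketch below; the login user, the colleague's key file, and the use of ExternalIP (public topology) are assumptions, and any new or replaced node will not get the key.

```bash
# Append a colleague's public key on every node currently in the cluster.
# NODE_USER and the key file are assumptions; re-run after any rolling update.
NODE_USER=admin
EXTRA_KEY=$(cat ~/.ssh/colleague.pub)

for host in $(kubectl get nodes \
    -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'); do
  ssh "${NODE_USER}@${host}" "echo '${EXTRA_KEY}' >> ~/.ssh/authorized_keys"
done
```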


Can a hook overwrite the authorized_keys file?

I haven't taken a look at this yet, but it looks promising.
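kops hooks are declared in the cluster spec and run on each node at boot; the shell such a hook might execute could be as simple as the following sketch. The key source URL and target home directory are assumptions, not kops defaults.

```bash
#!/bin/bash
# Sketch of a script a kops hook could run on boot; the key source URL
# and the target home directory are assumptions for illustration only.
set -euo pipefail

TEAM_KEYS_URL="https://example.com/team-authorized-keys"   # hypothetical
TARGET_HOME="/home/admin"                                  # depends on the image/distro

curl -fsSL "${TEAM_KEYS_URL}" >> "${TARGET_HOME}/.ssh/authorized_keys"
```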


Yes, but another obstacle is around the corner: https://github.com/kubernetes/kops/issues/2690

@bhack a (dirty) workaround that might work for you:

Since kops spawns Kubernetes clusters, you could use a DaemonSet to update your /home/defaultuser/.ssh/authorized_keys... since DaemonSets run on all nodes, this automates the process of putting your team members' public SSH keys on your Kubernetes nodes.

I understand it might not be convenient to have this DaemonSet running all the time, but if you don't mind getting really hacky, you can also run your script as an initContainer of another DaemonSet you're already deploying (node-problem-detector, for instance). This way, it won't keep running once its job is done...
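As a sketch of what such a DaemonSet (or initContainer) could run, assume the node's /home/defaultuser is mounted into the pod at /host-home via a hostPath volume and the team keys are bundled in the image at /keys/team.pub; both paths are made up for this example.

```bash
#!/bin/sh
# Runs inside the DaemonSet/initContainer. Assumes a hostPath mount of the
# node's /home/defaultuser at /host-home and a bundled /keys/team.pub file;
# both paths are assumptions for this sketch.
AUTH_KEYS="/host-home/.ssh/authorized_keys"

while read -r key; do
  # Append each team key only if it is not already present.
  grep -qxF "${key}" "${AUTH_KEYS}" || echo "${key}" >> "${AUTH_KEYS}"
done < /keys/team.pub
```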

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Still interested in this.
/remove-lifecycle stale

Another way to hack this (not tested, though) is to have a user-data script pull the right set of keys from somewhere online, like an S3 bucket.

This is not ideal but it might help

Following https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access , public keys are already available in S3 under the folder pki/ssh/public/<username>/<hash_of_pub>; the contents of each file are the public key itself.

It could be an easy enhancement to build a cloud-init script, launch configuration, or something similar that pulls the keys from there. Of course, keys would still not be live-reloaded when you add a new one, but at least it is better than nothing.
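Building on that layout, a user-data or cloud-init addition could look something like the sketch below; the state-store bucket, cluster name, username, and login user are assumptions.

```bash
#!/bin/bash
# Sketch: pull every public key kops stores for a user and append them.
# The state-store bucket, cluster name, username and login user are assumptions.
set -euo pipefail

STATE_STORE="s3://my-kops-state-store"
CLUSTER="mycluster.example.com"
KEY_PREFIX="${STATE_STORE}/${CLUSTER}/pki/ssh/public/admin/"
AUTHORIZED_KEYS="/home/admin/.ssh/authorized_keys"

# Copy each stored key object locally, then append its contents.
aws s3 cp --recursive "${KEY_PREFIX}" /tmp/kops-ssh-keys/
cat /tmp/kops-ssh-keys/* >> "${AUTHORIZED_KEYS}"
```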

What about a file that simply holds multiple public keys? I tried to deploy this just now, but it only uses the first line (the first key). Could you simply allow a file that holds multiple public keys?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

it's a cool feature to have
/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

This is being worked on in #5978 🙏

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Related/alternative: #8830

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
