I would like to give developers access to the dashboard in read-only mode. I know that for RBAC I'll have to wait until the 1.6 release, but at the moment I can't even find a way to add user accounts using a token or certificate.
Maybe I'm missing something, but reading through the documentation didn't help me.
It's not clear how you are supposed to add users to the cluster with kops; kops create secret only lets you create an SSH key for accessing the host machine.
In a desperate attempt I added manually created certs to the pki folder in the bucket, but it didn't work.
So, can someone explain how we are supposed to add users?
I've found the following to work on Amazon Web Services, with a caveat. kops keeps token files in the secrets directory of the S3 state store. These files are simply extension-less JSON documents containing `{"Data": "TOKEN GOES HERE"}`, each named after a user.
When the master ASG nodes come online, as part of their boot sequence they copy these files out and transform them into /srv/kubernetes/known_tokens.csv.
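To illustrate, a minimal sketch of placing such a file, with a hypothetical state store bucket and cluster name (adjust both to your setup):
```
# Hypothetical bucket/cluster names; the token value must be base64 encoded.
echo -n '{"Data": "BASE64-ENCODED-TOKEN"}' > alice
aws s3 cp alice s3://example-state-store/cluster.example.com/secrets/alice
```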
So, to change or add new users:
1. Add or update the token file(s) in the S3 secrets directory.
2. SSH to master node(s) and issue a reboot.

The caveat is that if a username (token file filename) contains a period, it seems to cause extraneous newlines in the /srv/kubernetes/known_tokens.csv file, which require manual cleanup before the API server will start. A manual ssh followed by `service kubelet restart` works in that case.
Hopefully it just works well for you! :)
Thanks for the insights @r4j4h. Is there any trick I missed in order to create valid tokens? I thought this could be a perfect solution for our current authentication needs, but as soon as I try to add some new tokens, the master fails to start. (The new JSON file has a single line, no line feed, and no extension in the filename, which is only [a-z]{8} in this test case, encoded in UTF-8. I tried to think of any gotcha, but I guess I missed something about those tokens.)
Edit: as often happens, writing this caused some rubber-duck debugging, so here's some help if anyone else comes across this and has issues with bearer-token generation for kops.
This generates a valid token:
```
dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null
```
(then base64 it before putting it in the new user's extension-less JSON file)
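Putting both steps together, a sketch of generating the token and writing the user file (the filename `newuser` is just a placeholder):
```
# Generate a random token, then base64 it for the "Data" field.
token=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
echo -n "{\"Data\": \"$(echo -n "$token" | base64)\"}" > newuser
```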
You can also use the CA capabilities now built into Kubernetes to achieve this. For example, I have a k8s v1.8 cluster running in AWS and I wanted to add another cluster admin. I generated a new RSA key and certificate signing request for the new user using openssl, making sure to set the organization to system:masters when generating the CSR. I then submitted the CSR via a CertificateSigningRequest resource, inspected it with kubectl get csr/<name> -o yaml, and signed it with kubectl certificate approve <name>. Once approved, I pulled the certificate out of the CSR resource and provided it to the new user, along with their key.
Useful commands are available here, though I opted to use openssl instead of cfssl.
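A minimal sketch of that flow with openssl and kubectl (the username is a placeholder, and the CSR is submitted via a CertificateSigningRequest resource as described above):
```
# Hypothetical user "newadmin"; O=system:masters grants cluster-admin rights.
openssl genrsa -out newadmin.key 2048
openssl req -new -key newadmin.key -out newadmin.csr \
  -subj "/CN=newadmin/O=system:masters"
# After submitting the CSR resource (see the full script later in this thread):
kubectl get csr newadmin -o yaml          # inspect the pending request
kubectl certificate approve newadmin      # sign it
kubectl get csr newadmin -o jsonpath='{.status.certificate}' | base64 -d > newadmin.crt
```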
@activeshadow do you mind adding that to the docs? We need to start a cluster hardening / best practices doc.
However, keep in mind that the certificates cannot be revoked, so any access you grant using them is permanent (until the cert expires). Sadly, https://github.com/kubernetes/kubernetes/pull/33519 (which fixes at least part of this) has not been applied.
OY! Good point @vainu-arto, thanks!
Perhaps one way of getting around this would be to create new [cluster] role binding(s) that reference a new user or group name that is used as the common name (CN) and/or organization (O) values in the certificate signing request?
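For example (a sketch; "ops-team" is a hypothetical group name carried in the certificate's O field):
```
# Bind the certificate group "ops-team" to cluster-admin...
kubectl create clusterrolebinding ops-team-admin \
  --clusterrole=cluster-admin --group=ops-team
# ...and delete the binding later to effectively revoke that access.
kubectl delete clusterrolebinding ops-team-admin
```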
> SSH to master node(s) and issue a reboot
Pending a way to manage tokens from kops, is there a convenient way to carry out this step? (We have three masters.)
@emwalker kops rolling-update --force --instance-group <masters-ig-name> should do it
@SharpEdgeMarshall thank you. I learned last night that kops rolling-update --force (without --instance-group <masters-ig-name>) _rebuilds_ the nodes. Does the --instance-group flag cause the nodes just to be rebooted?
No, that flag just limits the command to the given instance group.
IMHO your instances should be immutable: rebuilding == rebooting
These commands will reboot the master nodes, resulting in an updated /srv/kubernetes/known_tokens.csv file:
```
$ masters=$(kubectl get nodes -l kubernetes.io/role=master -o json \
    | jq --raw-output '.items | map(.spec.externalID) | join(" ")')
$ aws --region eu-west-1 ec2 reboot-instances --instance-ids $masters
```
This assumes you have copied a user file to the S3 bucket as described in this comment. (Also, as has been mentioned, the token in the S3 user file must be base64 encoded.)
With these commands, the masters will be unavailable while they reboot. Even if you stagger the reboots, there will be brief windows when you may get a bad connection. There's probably a way to cordon off a master while it reboots so that connections always work.
@chrislovecnm which docs would you like me to add this to?
@activeshadow you probably know better than me. Just pick a doc that makes sense or create a new one.
@activeshadow Thanks for your advice above on adding new cluster admins through certs. I'm currently struggling to get a new user, with newly generated certs, into the 'system:masters' group. Any help you can offer to point in the right direction would be very welcome!
Here's my openssl request:
```
openssl req -new \
  -key $CLIENT_KEY_PATH \
  -out $CLIENT_CSR_PATH \
  -subj "/CN=$NAME/0=system:masters"
```
When I make any kubectl command using these certs I am forbidden. If the cluster is set to allow all requests rather than rbac the new certificate authenticates and authorizes and kubectl commands work fine. But with rbac I'm forbidden with the new certs - so I'm assuming they aren't putting the user into the system:masters group somehow.
Frustratingly, I can't work out a way to see what groups a user is in. Do you know if that's possible?
The kubeconfig file looks like this with some info redacted, pointing to the new certificates:
```
apiVersion: v1
clusters:
- cluster:
    certificate-authority: k8s-crts/edge/ca.crt
    server: https://api.<name-of-cluster>
  name: <name-of-cluster>
contexts:
- context:
    cluster: <name-of-cluster>
    user: <name-of-cluster>
  name: <name-of-cluster>
kind: Config
preferences: {}
users:
- name: <name-of-cluster>
  user:
    as-user-extra: {}
    client-certificate: k8s-crts/edge/client.crt
    client-key: k8s-crts/edge/client.key
```
Many thanks if you can help!
Wow, a minute after I sent that I realised I was sending a zero instead of the letter O (`/0=` instead of `/O=`) in the openssl request.
@ollieh-m glad you got it working!
FWIW, since currently certificates cannot be revoked, I would suggest avoiding putting users in the system:masters group in their certificates and instead just creating a new RBAC role binding that binds them to the cluster-admin role via their username directly. This way, if you need to revoke their access, you can simply remove them from the RBAC role binding you created.
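As a sketch, with a hypothetical username "jane" taken from the certificate's CN:
```
# Grant cluster-admin by username rather than via system:masters in the cert.
kubectl create clusterrolebinding jane-cluster-admin \
  --clusterrole=cluster-admin --user=jane
# Revoking access is then just:
kubectl delete clusterrolebinding jane-cluster-admin
```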
This apparently no longer works in kops 1.9; the secrets are not read from S3 even though `kops get secrets` lists them just fine.
It's not working for me either.
Kops 1.9.0 and cluster 1.9.6 (updated from cluster 1.8.11)
In my understanding support for this was removed by PR #3835.
I didn't see this coming. Maybe it should be reflected in the release notes of kops 1.9 as a breaking change. I'm struggling to create certificates for the users using @activeshadow's comments, and in the meanwhile I've made the changes manually in the .csv to keep everyone happy until I solve a problem I'm having with the certificates.
Currently, the API server rejects the user certificate I've created, with this error:
```
E0416 10:27:51.791225 1 authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certificate specifies an incompatible key usage]
```
The steps I've followed are:
```
openssl genrsa -out user1.key 2048
openssl req -CA ca.crt -new -key user1.key -out user1-csr.pem -subj "/CN=user1/O=company-users"
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user1
spec:
  groups:
  - system:authenticated
  request: $(cat user1-csr.pem | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
kubectl certificate approve user1
kubectl get csr user1 -o jsonpath='{.status.certificate}' \
  | base64 -d > user1.crt
```
Ok, solved: the problem was in the `- server auth` line, which should be `- client auth`.
Just for anyone referencing it, the script should be something like:
```
openssl genrsa -out user1.key 2048
openssl req -CA ca.crt -new -key user1.key -out user1-csr.pem -subj "/CN=user1/O=company-users"
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user1
spec:
  groups:
  - system:authenticated
  request: $(cat user1-csr.pem | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl describe csr user1
kubectl certificate approve user1
kubectl get csr
# NAME    AGE   REQUESTOR   CONDITION
# user1   2m    admin       Approved,Issued
kubectl get csr user1 -o jsonpath='{.status.certificate}' \
  | base64 -d > user1.crt
```
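To actually use the issued certificate, one possibility is wiring it into a kubeconfig entry and testing it (a sketch; the cluster name and context name are placeholders):
```
kubectl config set-credentials user1 \
  --client-certificate=user1.crt --client-key=user1.key --embed-certs=true
kubectl config set-context user1-ctx --cluster=<name-of-cluster> --user=user1
kubectl --context=user1-ctx get pods   # should be limited by user1's RBAC bindings
```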
Just a note: feel free to remove the user group "company-users", or add new ones, by changing the line:
```
openssl req -CA ca.crt -new -key user1.key -out user1-csr.pem -subj "/CN=user1/O=company-users"
```
to:
```
openssl req -CA ca.crt -new -key user1.key -out user1-csr.pem -subj "/CN=user1"
```
or:
```
openssl req -CA ca.crt -new -key user1.key -out user1-csr.pem -subj "/CN=user1/O=company-users/O=anothergroup"
```
Just take into account that those groups are tied to the certificate, and thus not revocable without revoking the full certificate, so think it through before adding any group there.
Just to close up: as stated before, it would be nice if the secrets change were reflected in the release notes of kops 1.9 as a breaking change.
@vicenteampliffy glad you figured out the server/client auth requirement. Can you elaborate on how the change to secrets management is a breaking change?
@activeshadow well, kops 1.8 populated /srv/kubernetes/known_tokens.csv from the files stored in the S3 secrets directory, but with kops 1.9 that's no longer the case, as @vainu-arto pointed out before:
> In my understanding support for this was removed by PR #3835.
@activeshadow All previously created user accounts in the cluster are now gone, and the method used to create them is no longer supported. Authentication needs to be redone from the ground up. I would consider it reasonable to mention this in the release notes...
@vicenteampliffy @vainu-arto got it, thanks! So to be clear, all user accounts previously created by adding them to the known_tokens.csv file no longer exist when upgrading to 1.9, and instead need to be added via the CSR method?
I would think users added via the CSR method before the upgrade to 1.9 would still be operational, but I suspect you were adding all of them via the known_tokens.csv file instead of via the CSR method.
Indeed, I should have qualified that statement. "All previously created token user accounts" or something like that. For someone using token auth and not using cert auth the end result is the same, though.
Just for completeness: as I understand it, you may regain some of the previous functionality by using this kops feature and maintaining your known_tokens.csv file yourself:
https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#fileassets
But I don't feel it's the way to go: if you mess it up you may end up with a non-functional cluster, and everything seems to be pushing toward certificates. (A rough sketch of the idea follows below.)
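For reference, a fileAssets stanza for that might look roughly like this (a sketch based on the linked doc; the token line format and role name are assumptions to verify against your kops version):
```
# In the cluster spec (kops edit cluster):
fileAssets:
  - name: known-tokens
    path: /srv/kubernetes/known_tokens.csv
    roles: [Master]
    content: |
      REPLACE-WITH-TOKEN,alice,alice
```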
This really needed to go into the release notes. Rolling kops 1.9 out to our nonprod cluster and all the users are gone. Lots of moaning developers.
We use tokens rather than certs so the developers can use the kube-dashboard.
Hi @justinsb , maybe you can add this to the release notes for kops 1.9 as a breaking change?
@corlettb I just want to re-emphasize that you can (re)grant your developers access to your 1.9 cluster fairly easily via CSR resources in Kubernetes. This doesn't help the fact that their access got taken away, but it can be added back pretty quickly and easily.
As for giving them access to the kube-dashboard, that can be done by generating a service account for each developer (or developer group), granting the service accounts the necessary RBAC access, and having them use the service account tokens present in the token secrets to log into the dashboard.
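A sketch of that approach (all names are placeholders; the built-in view role is just one example of "necessary RBAC access"):
```
# Create a service account for the developer and give it read-only access.
kubectl -n kube-system create serviceaccount dev-alice
kubectl create clusterrolebinding dev-alice-view \
  --clusterrole=view --serviceaccount=kube-system:dev-alice
# Print the token the developer pastes into the dashboard login.
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa dev-alice -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d
```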
Instead of using token auth, I would rather recommend using guard:
- We use it to allow developers to access our cluster read-only.
- There are also some kops docs on how to use it: https://github.com/appscode/guard/blob/master/docs/setup/install-kops.md
- Each GitHub user, for example, gets assigned to RBAC groups matching their GitHub groups. This allows fine-grained access.
- Even static files are possible if you want.

From my perspective this is the better solution, as it allows you to change access rights without a rolling update of your master instances.
If you have any problems, just ping me in the Kubernetes Slack #kops-user group and I will update the docs to solve your problems.
Greetings, Thomas
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.