Dashboard: [tracking] Security, auth and logging in

Created on 27 Jun 2016 · 55 comments · Source: kubernetes/dashboard

Goals: Make the UI work with IAM (identity and access management). The UI should fail soft on things that the user cannot see and/or cannot edit. Also, provide a login screen and make the UI act on behalf of a user.
Bonus goals: Integrate with an IAM provider to proactively disable views/actions/buttons that the user cannot see/modify.

Work estimate: 2 engineers + UX design for a quarter

kind/feature

Most helpful comment

The flow I'd like to see for authentication is that each user just runs kubectl proxy on their local machine and browses to http://127.0.0.1:8001/ui, with the dashboard adjusting its display to whatever the authenticated user is allowed to access according to RBAC roles and bindings. This way the normal apiserver authentication is used and the dashboard doesn't need its own login mechanism.

All 55 comments

@floreks and I are willing to work on it; should be interesting :)

How are we moving forward with this? Should we talk on Monday?

Okay, so after some delay I think it's time to take a closer look at this topic. I'd like to start a discussion about it, so we can move forward after we establish a proposal. I've read the documentation at https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/access.md, but it's still not clear what the best way is to implement the following features for the dashboard:

  • login screen
  • making the UI act on behalf of a user (access control etc.)
  • access control management (modifying access rights etc. - only for certain users, i.e. admins)

I'll continue investigating and post some proposals. Please post your own ideas too.

@romlein I'd like to ask you for help with designing the login screen once we finish the discussion here.

CC @floreks @bryk @erictune @cheld

@bryk are you looking for a single solution which will work on both GKE and OSS, or is it okay if the solution we decide on here is only for OSS?

Login Screen (authentication)

If we can somehow get the user's Kubernetes cluster token (the one kubectl uses, stored in .kube/config) into the browser, via cut-and-paste or some kind of OAuth flow, then the browser could send it to the Dashboard, and the dashboard could send the token to the about-to-be-added tokenreviews endpoint on the master (https://github.com/kubernetes/kubernetes/pull/28788). The master will translate this token into an authenticated username, or else say it is nobody.
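Concretely, the handshake described here would have the dashboard POST a TokenReview object to the apiserver and read the username out of the reply. A minimal sketch of the request body in Go, using only the standard library (field names follow the upstream authentication.k8s.io API, which was v1beta1 when this thread was written; the token value is a placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TokenReview mirrors the Kubernetes authentication API object that the
// dashboard would POST to the apiserver's tokenreviews endpoint
// (authentication.k8s.io; v1beta1 at the time, since graduated to v1).
type TokenReview struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Spec       struct {
		Token string `json:"token"`
	} `json:"spec"`
	// Status is filled in by the apiserver's reply: whether the token is
	// valid and, if so, which username it maps to.
	Status struct {
		Authenticated bool `json:"authenticated"`
		User          struct {
			Username string `json:"username"`
		} `json:"user"`
	} `json:"status"`
}

// newTokenReview builds the JSON request body for a given user token.
func newTokenReview(token string) (string, error) {
	tr := TokenReview{APIVersion: "authentication.k8s.io/v1beta1", Kind: "TokenReview"}
	tr.Spec.Token = token
	b, err := json.Marshal(tr)
	return string(b), err
}

func main() {
	body, err := newTokenReview("token-from-kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```

If status.authenticated comes back false, the master is saying the token maps to nobody, and the dashboard should treat the user as unauthenticated.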

Acting on behalf of user

The Dashboard has its own credentials that it presents to the apiserver (e.g. as user "serviceaccount:kube-system:dashboard"), but it sets the "Impersonate-User: alice" header on all its requests, which causes the API server to authorize each request as "alice".

Note that the Dashboard needs to List objects as that user every time a user visits the page, or, if it caches or watches anything, it needs per-user caches or watches. It also needs to act as the user each time the user mutates something.
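The impersonation scheme above can be sketched as an HTTP transport wrapper that stamps the header onto every outgoing request. This is only an illustration using Go's standard library, with an in-process fake standing in for the apiserver; impersonatingTransport and headerSeenByServer are invented names, and a real dashboard would also attach its own service-account bearer token:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// impersonatingTransport stamps every outgoing request with the
// Impersonate-User header, so the apiserver authorizes the call as the
// end user rather than the dashboard's own service account.
type impersonatingTransport struct {
	user string
	next http.RoundTripper
}

func (t *impersonatingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	r.Header.Set("Impersonate-User", t.user)
	return t.next.RoundTrip(r)
}

// headerSeenByServer spins up a fake in-process "apiserver" that echoes
// back the Impersonate-User header it received, then issues one request
// through the wrapped transport.
func headerSeenByServer(user string) (string, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, r.Header.Get("Impersonate-User"))
	}))
	defer srv.Close()

	client := &http.Client{Transport: &impersonatingTransport{user: user, next: http.DefaultTransport}}
	resp, err := client.Get(srv.URL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	seen, err := headerSeenByServer("alice")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver saw Impersonate-User:", seen)
}
```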

Access Control Management

I think editing ACLs might be out of scope for the dashboard. On GKE, users will do this via the IAM tab of console.cloud.google.com. If they are using another hosted system, it will happen somewhere else. If they are running on OpenStack, they might need to modify permissions via Keystone. If they are using Kubernetes RBAC (which is optional), they need to use the CLI. At the moment, none of the authz options provides a way to put ACLs on specific objects anyway, so it is not yet a common operation.

Happy to set up a call to do a deeper dive on this. You can find my email on my github profile.

Sounds good @maciaszczykm! Keep me posted. I'm happy to jump into designing the login screen.

@bryk are you looking for a single solution which will work on both GKE and OSS, or is it okay if the solution we decide on here is only for OSS?

This should work on both. I assume that on GKE the login part will be skipped or automatically connected to your Google account. But once you're logged in, everything stays the same, no matter whether you're on GKE or bare metal cluster.

The tokenreview may not work on GKE. I sent you a doc about that. We may need a different login path. I'm not sure what that will be.

With 1.3 I have SSO into the dashboard working great with a reverse proxy and OIDC/OAuth2. I wouldn't create an explicit login screen; piggyback off the RBAC model and the auth model that is already supported. It would be great to have something that shows who the logged-in user is, though.

Marc, can you elaborate on your setup?


@erictune sure. Here's the diagram:
[diagram: oidc_login]

Actors:

  1. OpenUnison/ScaleJS - Hosts the user's main point of entry (and other identity services for RBAC) and reverse proxy in front of the API server
  2. KeyCloak - OpenID Connect Identity Provider
  3. API Server
  4. User

The flow:

  1. From a browser, the user accesses ScaleJS on OpenUnison
  2. User is redirected to KeyCloak to authenticate
  3. KeyCloak sends the user back to OpenUnison with an access token; OpenUnison presents users with link for the dashboard based on their permissions
  4. The user clicks the link and is sent to the /api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/ URL on the reverse proxy; for every request to the API server, OpenUnison adds an "Authorization: Bearer ..." header
  5. The API server takes the bearer token and verifies it against KeyCloak, getting the user's JWT

If the user wants to use kubectl instead then (#6) the user clicks on the OAuth2 Token link to get their current token and (#7) the user uses it with kubectl.

Here's a quick video of the whole thing in action: https://vimeo.com/179220012

@mlbiam Is this something generic that you can contribute to the project? I'd love to run something like open unision as a sidecar container and have all this that you explained here. This would be a first step before we can get user's details as you described in https://github.com/kubernetes/kubernetes/issues/30784

@bryk @johnzinvalid

OpenUnison is already open source. The integration with k8s is still a work in progress, but once it's done I'll publish a quick start that should make it really easy to integrate and run on k8s as a pod. Here's the blog post with more details about the PoC; all feedback is welcome:

https://www.tremolosecurity.com/kubernetes-idm-part-i/

I need to clarify things in my head. Can I run OpenUnison as a sidecar container in Dashboard UI pod, then expose OpenUnison to the internet and reverse-proxy to Dashboard UI running on localhost?

@bryk

The short version is "yes" - Correct me if I'm wrong, but what I think you are looking for is just the SSO with the dashboard and the ability to get your token from the web browser? If so I can prototype a container that would take a number of environment variables pretty quickly. Then it would just be a matter of creating a java keystore and mounting it as a secret in k8s and adding the container as a sidecar to the dashboard pod (with the secret and environment variables).

The details:

OpenUnison is a war file that can be deployed into any java servlet container, including one in a docker container. My thought was to:

  1. Create a quick start like https://github.com/TremoloSecurity/openunison-qs-s2i that would have environment variables for the host, the OIDC idp information, the api server, etc.
  2. Create a Java keystore that contains the certs for the API server and OIDC IdP and mount it as a secret at /etc/openunison in the pod
  3. Use our source2image container on dockerhub (https://hub.docker.com/r/tremolosecurity/openunisons2idocker/) and deploy the image into an accessible repo (like dockerhub)
  4. Reference the new image in your pod definition including the values for the environment variables defined in your unison.xml and myvd.conf

I suppose if all you wanted was the ScaleJS dashboard for the OAuth2 token viewer and SSO integration with the dashboard, but no user provisioning or access request management, we could create a standard image; you could skip steps 1 & 3, and it would just be a matter of creating the keystore and the pod definition. For just SSO we wouldn't need directory access; we would just rely on the OIDC IdP. The issue at that point is that you'd still need a database for the ScaleJS pieces, but we can add a feature that would allow it to run SSO-only functions without a DB (i.e. seeing the links get generated).

OK, so after rambling through this I see two different scenarios:

SSO + Token Only

OpenUnison is configured as an OIDC client through environment variables. The ScaleJS Token app would be deployed so you could get your access token out of the oidc idp. There would be no dependency on a relational database or directory so the steps to deploy would be:

  1. create a keystore with the correct certificates and add it to a secret that is referenced in the pod
  2. Add the container on Docker Hub (which we will create) as a sidecar
  3. add the correct environment variables to the pod

This will let users SSO into the dashboard and get their token. There would be no authorization; that would be up to a combination of the dashboard and the OIDC IdP. Since ScaleJS Main (that's what generates the links for the OAuth2 token and dashboard in the video) won't be there, we'd need a static HTML page in place so users have something to click on, but that's easy.

Kubernetes Identity Manager

Where we started with this idea was to create an identity manager for Kubernetes: something that would let you define workflows and approvers for access to k8s via the RBAC model. That would need to be more dynamic, since the workflows are in the unison.xml file. That's a larger scope, though, than I think what you're asking for.

@bryk @johnzinvalid

I have a prototype image working that will do what I think you're looking for: https://hub.docker.com/r/mlbiam/openunison-k8s-dashboard/

I removed all the requirements for user provisioning and stripped it down to just:

  1. reverse proxy
  2. integration with openid connect
  3. display the user's access token
  4. simple links page

I'm working on getting it running inside of k8s right now. Would appreciate any feedback you have. Once I get this working, I'll clean up the front page and can remove the TS branding. I was also thinking of having a "whoami" page that shows the claims from the JWT.

@mlbiam thanks for this explanation. Makes a lot of sense. Let us take some time to digest this :)

Has anyone tried using the dashboard with RBAC enabled? It doesn't look like the dashboard service account is being recognized by the RBAC system. Looking in the pod, the dashboard is using "default", but once I added default to a cluster admin role I get the following in the logs:

Starting HTTP server on port 9090
Creating API server client for https://10.3.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: the server does not allow access to the requested resource

Did you add the string "default" or the string "system:service-accounts:$NAMESPACEOFDASHBOARD:default"?


@erictune just default. let me try the scoped string

You could also look at the apiserver logs to see whether you got a 401 or a 403 from the IP of the dashboard pod. If 403, the problem is RBAC. If 401, the problem is service account authn.


In kube, usernames are global but service accounts are namespace scoped.


@erictune Here's my role binding:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: system:service-accounts:kube-system:default
  namespace: default
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role

kubectl doesn't like it:

$ kubectl create -f ~/Documents/projects/oukube/kube/cluster-admin-role.yaml 
clusterrole "admin-role" created
The ClusterRoleBinding "admin-binding" is invalid.
subjects[1].name: Invalid value: "system:service-accounts:kube-system:default": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com')

The openunison account and its secret work great for another use case without the scoping.
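For what it's worth, the rejection above is exactly the subject-name pattern quoted in the error message: a DNS-1123-subdomain-style name, whose alphabet does not include colons. A quick check against that same regex shows why "default" passes and the fully-qualified string does not:

```go
package main

import (
	"fmt"
	"regexp"
)

// subjectName is the validation pattern quoted verbatim in the
// ClusterRoleBinding error above; colons are simply not in the alphabet,
// which is why the fully-qualified service-account string is rejected.
var subjectName = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func valid(name string) bool { return subjectName.MatchString(name) }

func main() {
	fmt.Println(valid("system:service-accounts:kube-system:default")) // rejected: contains ':'
	fmt.Println(valid("default"))                                     // accepted
}
```

This is consistent with the fix in the follow-up comment: ServiceAccount subjects take a bare name plus a separate namespace field, not a qualified string.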

OK, I think I found the issue. This yaml works:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role

I changed the namespace from default to kube-system and now I can log in to the dashboard!

Yeah, sorry I gave you bad advice about qualifying the name.


I am new to microservices. My impression is that it should be fairly easy to slot a container into the stack. What I do with traditional services in my network is front them with an Apache proxy doing mod_auth. This would appear to support any number of authentication methods. Locally, we just go to Unix passwd authentication which, via pam, goes back to an LDAP server.

If the authentication mechanism for the web UI can be as flexible as Apache auth, that would be a huge win for the Kubernetes ecosystem, imho.

@dannyman You can certainly do this kind of auth. Just put an auth container inside the dashboard UI pod and configure it. All should work.

Such a feature is not there by default yet, but I hope that eventually it will be.

How come the dashboard wasn't written as a Single Page Application calling the k8s api directly from the browser? Then the dashboard could be locked down with any reverse proxy and authentication combination. I'm thinking something like https://github.com/saturnism/gcp-live-k8s-visualizer

The dashboard _is_ an SPA. And if you have basic auth enabled, you can access it through the service proxy just as you describe already.

The flow I'd like to see for authentication is that each user just runs kubectl proxy on their local machine and browses to http://127.0.0.1:8001/ui, with the dashboard adjusting its display to whatever the authenticated user is allowed to access according to RBAC roles and bindings. This way the normal apiserver authentication is used and the dashboard doesn't need its own login mechanism.

I share Jimmy's view.


  1. How do you audit that?
  2. How do you perform authorization?
  3. How do you access it from something that isn't a laptop?


  1. Audit: when the dashboard calls the apiserver on your behalf to CRUD
    resources, it should impersonate you, thus creating an audit log on the
    apiserver of your actions.
  2. Authorization: when the dashboard calls the apiserver on your behalf
    to CRUD resources, it should impersonate you, so the apiserver authorizes
    each request as you. The dashboard is just a sort of proxy, in the sense
    that it only does operations that you could do with the CLI, but makes
    them easier. It doesn't provide new types of operations. If it provided
    new operations, then my answer would be different.
  3. I agree with you about #3. People who need mobile access should run
    something like OpenUnison in front of the UI to do the authentication
    and proxy to the apiserver, which proxies to the dashboard. I think
    doing the audit and the authz in the apiserver still makes sense in
    this use case.


Hi,
I haven't contributed to the project so far, but I think it would be great to have UI components for access control management. I would love to spend some time working on this feature. I've already read the "getting started" part for developers, but it would be great if you guys could give me some guidance on where to start :)

I haven't contributed to the project so far, but I think it would be great to have UI components for access control management. I would love to spend some time working on this feature. I've already read the "getting started" part for developers, but it would be great if you guys could give me some guidance on where to start :)

That's great to hear! If you're not sure yet where you'd like to contribute, it's best to meet with the team and we'll introduce you and help you get started :) If you already have a specific task in mind, create or claim an issue for it.

Join us on slack to chat on this: https://kubernetes.slack.com/messages/sig-ui/details/

We at Zalando would also love to see the "dashboard impersonates user" feature :smile:

In our case it would simply need to pass through an OAuth Bearer token to the API server (we use an OAuth proxy in front of the Kubernetes dashboard which would inject the user's OAuth Bearer access token).
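The OAuth-proxy arrangement described here (a proxy in front of the dashboard that injects the already-authenticated user's bearer token) can be sketched with Go's httputil.ReverseProxy. Everything in this sketch is illustrative: newAuthProxy, authHeaderSeen, and tokenFor are invented names, and the in-process servers stand in for the dashboard and the proxy:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// newAuthProxy returns a reverse proxy toward target that injects an
// OAuth bearer token for the current user on every forwarded request.
// tokenFor is a hypothetical per-request token lookup.
func newAuthProxy(target *url.URL, tokenFor func(*http.Request) string) *httputil.ReverseProxy {
	proxy := httputil.NewSingleHostReverseProxy(target)
	director := proxy.Director
	proxy.Director = func(req *http.Request) {
		director(req)
		req.Header.Set("Authorization", "Bearer "+tokenFor(req))
	}
	return proxy
}

// authHeaderSeen runs a fake "dashboard" that echoes the Authorization
// header it received, fronts it with the injecting proxy, and sends one
// request through the whole chain.
func authHeaderSeen() (string, error) {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, r.Header.Get("Authorization"))
	}))
	defer backend.Close()

	u, err := url.Parse(backend.URL)
	if err != nil {
		return "", err
	}
	front := httptest.NewServer(newAuthProxy(u, func(*http.Request) string { return "user-access-token" }))
	defer front.Close()

	resp, err := http.Get(front.URL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	seen, err := authHeaderSeen()
	if err != nil {
		panic(err)
	}
	fmt.Println("dashboard saw:", seen)
}
```

The dashboard itself never sees the user's primary credentials in this model, only the injected bearer token, which it can then forward to the API server.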

@hjacobs There's a PR for this already: https://github.com/kubernetes/dashboard/pull/1539

Issue list:

  • Dashboard shouldn't be super-user,
  • Implement full authorization or just fully-functional token forwarding?
  • What about cookies and login page?

my $0.02:

+1 to no super user dashboard.

Token forwarding. Full authentication is a large amount of work, duplicated from other places, and it's security code, so it's risky. Helm, Grafana, and other services that act on behalf of the user also need some kind of authentication code if it's not shared. Let's share it, benefiting everyone.

@kfox1111 I saw the issue where you mentioned that we are talking about continuing to work on authentication. We would gladly accept any kind of help there :) Can you add anything to the action points that I mentioned in the above comment? Let's keep in touch to push this forward.

Login functionality is too diverse: a user can supply a token, a username/password, certificates, etc. It is not easy to come up with a general solution for the dashboard, but at the very least we should make it easy to register plugins. That basically means we can design several pages, each of which could be implemented in different ways:

  1. password: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authentication/authenticator/interfaces.go#L42: basic auth, a username and password combination;
  2. token: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authentication/authenticator/interfaces.go#L28: Bearer token;
  3. request: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authentication/authenticator/interfaces.go#L35: Certificates;

Impersonation is a complementary part. The dashboard should only take responsibility for authentication, and once that succeeds, it should leverage impersonation to do the rest.
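A sketch of what such pluggable login pages could look like, paraphrasing the upstream authenticator interfaces linked above (UserInfo stands in for k8s.io/apiserver's user.Info; the adapter, the static token map, and the demo helper are toy examples invented here, not dashboard code):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)

// UserInfo is a local stand-in for k8s.io/apiserver's user.Info.
type UserInfo struct{ Name string }

// The three plugin shapes suggested above: basic auth, bearer token,
// and raw-request (e.g. client certificate) authentication.
type PasswordAuthenticator interface {
	AuthenticatePassword(username, password string) (*UserInfo, bool, error)
}
type TokenAuthenticator interface {
	AuthenticateToken(token string) (*UserInfo, bool, error)
}
type RequestAuthenticator interface {
	AuthenticateRequest(req *http.Request) (*UserInfo, bool, error)
}

// bearerFromRequest adapts a TokenAuthenticator into a
// RequestAuthenticator, showing how one login flow can be backed by
// interchangeable plugins.
type bearerFromRequest struct{ tokens TokenAuthenticator }

func (b bearerFromRequest) AuthenticateRequest(req *http.Request) (*UserInfo, bool, error) {
	auth := req.Header.Get("Authorization")
	if !strings.HasPrefix(auth, "Bearer ") {
		return nil, false, errors.New("no bearer token")
	}
	return b.tokens.AuthenticateToken(strings.TrimPrefix(auth, "Bearer "))
}

// staticTokens is a toy TokenAuthenticator for the demo only.
type staticTokens map[string]string

func (s staticTokens) AuthenticateToken(token string) (*UserInfo, bool, error) {
	if name, ok := s[token]; ok {
		return &UserInfo{Name: name}, true, nil
	}
	return nil, false, nil
}

// demo authenticates one request carrying the given Authorization header.
func demo(authHeader string) (string, bool) {
	req, _ := http.NewRequest("GET", "/", nil)
	req.Header.Set("Authorization", authHeader)
	u, ok, err := (bearerFromRequest{tokens: staticTokens{"t0ken": "alice"}}).AuthenticateRequest(req)
	if err != nil || !ok {
		return "", false
	}
	return u.Name, true
}

func main() {
	name, ok := demo("Bearer t0ken")
	fmt.Println(ok, name)
}
```

Once any of these plugins yields a UserInfo, the dashboard would switch to impersonating that user for the actual apiserver calls, as described earlier in the thread.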

Already done.

Where can we get information about this? AFAIK, anyone who has access to the dashboard in fact has "root" access to the cluster regardless of RBAC settings.

@nailgun Check the 1.7.0 release and the recently introduced wiki pages & landing page. The Dashboard uses a minimal set of privileges by default at the moment. To make it work with an existing cluster you will probably need to remove the old SA and create the new resources as described in our guides.

The Wiki doesn't mention granting the "impersonate" verb to the dashboard's service account via a _ClusterRoleBinding_. Is the dashboard using the user impersonation facility to restrict its capabilities when talking to the API server?

Is the dashboard using the user impersonation facility to restrict its capabilities when talking to the API server?

Definitely not. Granting the ability to impersonate arbitrary users is an expansion of power, not a restriction

How, then, is the Dashboard using the credentials submitted by users through its login dialog to restrict what each user is allowed to do once logged in?

We are simply using the given credentials to create a Kubernetes client. It is created on every request.

Interesting. @liggitt, isn't that what you had advised us not to do for the Helm project?

@liggitt, isn't that what you had advised us not to do for the Helm project?

for x509 credentials, absolutely (echoed here)

for token/basic auth, the user should authenticate to systems other than the API server (like the dashboard) with a credential scoped to that particular audience (and possibly scoped in power). the best way to do that would be to have the dashboard initiate an oauth flow to obtain an API token for the user. However, not all clusters have an oauth server available or the ability to create scoped tokens, so the dashboard made the choice to just take the user's basic or bearer credential. If you consider the dashboard part of the same auth domain as the API server, that's somewhat tolerable. If you consider the dashboard to be no different than any other application running on top of the cluster (as I do), then we should still be pushing to enable the delegated/scoped token, rather than asking the user for their credentials. The dashboard adding an option to use an OAuth flow for clusters which have that available would go a long way.

The dashboard adding an option to use an OAuth flow for clusters which have that available would go a long way.

It's on our list. Still, it's possible to configure an OAuth proxy that will take care of retrieving the token and passing it to the Dashboard. We also check for the presence of the token header.

In this model, @liggitt, would the token retrieved via OAuth and submitted to the dashboard (and, transitively, to the API server) authenticate as the user (and their groups, etc.), or as some other identity with permissions even more restrictive than the user's normal permissions within the cluster?

I'm sorry that I missed #2206 while it was underway. I thought it was all still far in the future, and hadn't been paying close enough attention to this project.

This model is no different from using kubectl with a token set in the kubeconfig file. The token is passed to the API server, and all roles related to the "user" tied to this token are applied.
