https://github.com/kubernetes/kubernetes/pull/25634
kubernetes/kubernetes#26753
kubernetes/kubernetes.github.io#643
it's astonishing how much checklists on github make me want to do whatever it takes to complete them
An update on this issue for alpha.
TODO before alpha release
@erictune As we can't update this issue I think you have to check the boxes based on @ericchiang's comment https://github.com/kubernetes/features/issues/2#issuecomment-222202448
Ping @erictune cc @ericchiang
Integration tests here: https://github.com/kubernetes/kubernetes/pull/26753
Docs here: https://github.com/kubernetes/kubernetes.github.io/pull/631
Also there were a couple small fixes that need lgtms:
The API group name "rbac.authorization.k8s.io/v1alpha1" is really long. Perhaps for Beta or GA we can shorten it somehow.
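For a sense of scale, here's how the full group/version string shows up in a minimal policy object (a sketch; the role name and rules are illustrative):

```bash
# every alpha-era RBAC object spells out the full group/version
cat <<'EOF' | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: Role
metadata:
  name: pod-reader          # illustrative
  namespace: default
rules:
- apiGroups: [""]           # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
```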
For beta, we need to:
A couple other notes (not necessarily for beta)
system:masters superuser group to bypass escalation checks
Not sure I'd consider choosing a default role for service accounts in rbac a breaking change. Authorizer settings have always been able to limit service account permissions.
Hi @erictune, will RBAC in 1.3 have the ability to make policy decisions based on the attributes in the requested API object, such as specifying which secrets a pod can mount as a volume? Looking over the code it seems the answer is no, but I'd like to confirm. Thanks!
@guoshimin no. For alpha RBAC will only deal with top level objects. No field level authorization.
@olegshaldybin has a large proposal in kubernetes/kubernetes#27330. You may want to take a look at the discussion there.
no, RBAC is object-level, not field-level.
Also realized that we don't have garbage collection (cleaning up role bindings to deleted roles). This would probably be something good to add to the beta release.
I'd like to see an intent-based, idempotent API for binding roles to subjects. Something where I can post: "bind role/foo to user/bar" or "delete role/foo from group/bar" and not have to worry about the details of exactly which named binding needs to be mutated.
I think we should also consider a way to select which roles and subjects a given subject is allowed to bind. Right now, Jane in namespace J can forcibly add David to her namespace and there's nothing he can do about it. In addition, a permission-granting role has value on its own, since the person handing out permissions may not want to hold all the powers he's granting.
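For illustration, the kind of intent-based binding commands being asked for might look like this (hypothetical UX; kubectl did later grow a similar imperative `create rolebinding` command):

```bash
# "bind role/foo to user/bar" without hand-managing the binding object's contents
kubectl create rolebinding foo-for-bar --role=foo --user=bar -n team-ns

# the delete side still requires knowing which named binding to remove
kubectl delete rolebinding foo-for-bar -n team-ns
```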
An intent-based API for managing policy sounds great. Just out of curiosity - Are there existing intent-based APIs in K8s?
The existing bindings endpoint comes to mind. I can't actually think of other resources where the primary usage is through mutation of existing resources instead of creation of new ones. Maintaining individual tuples doesn't seem like it would be a good idea though.
Request for a non-flag way to turn off bootstrapping mode (https://github.com/kubernetes/kubernetes/pull/25634#issuecomment-231486319) @jimmycuadra
I'm a little confused by that comment.
@jimmycuadra why do you need to turn off the bootstrapping flag in the first place? I don't actually see anything in that post which explains that.
In our setup, the systemd unit file that starts the apiserver is immutable, so we don't necessarily want the user specified by --authorization-rbac-super-user to have unlimited access for the foreseeable future. We want a way to bootstrap RBAC roles and bindings and then disable the initial user's unlimited access without having to change the set of arguments that the apiserver is started with.
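For context, the bootstrap setup in question looks roughly like this (a sketch; the flag value is illustrative and other apiserver flags are elided):

```bash
# alpha-era bootstrap: the named super-user bypasses RBAC checks so the
# initial roles and bindings can be created at all (other flags elided)
kube-apiserver \
  --authorization-mode=RBAC \
  --authorization-rbac-super-user=admin
```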
@jimmycuadra And you couldn't use systemd's EnvironmentFile pointing to a file that is mutable to change the flags? That achieves the immutable systemd unit file with the flags without changing the scope of this feature. Could that work?
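Concretely, the suggestion: the immutable unit references a mutable env file, so flag changes never touch the unit itself (a sketch; paths and variable names are illustrative):

```bash
# in the immutable unit file, once:
#   EnvironmentFile=/etc/kubernetes/apiserver.env
#   ExecStart=/usr/bin/kube-apiserver $APISERVER_FLAGS
# afterwards, only the env file ever changes:
cat <<'EOF' > /etc/kubernetes/apiserver.env
APISERVER_FLAGS=--authorization-mode=RBAC --authorization-rbac-super-user=admin
EOF
```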
There are workarounds, but ultimately we'd like a way to do it without having to mutate files on the host system. Moving the flags to an environment file doesn't make it any better for us; we still have to modify files on the host which otherwise would've been provisioned when the system was first launched and then not changed again. The one exception we have right now is for updating TLS certs/keys, and we have a special mechanism for that, but we'd rather not make that more complicated by adding more exceptions to the rule if possible.
(We should continue this discussion on an issue on the kubernetes repo so as not to add more noise to this thread.)
@erictune @ericchiang as this feature's marked with Milestone 1.5, should we change the "alpha-in-1.3" label to another one?
@idvoretskyi it was alpha in 1.3, this is currently tracking targets for beta.
@ericchiang just to confirm - does this feature target beta for 1.5?
It's going to be close. I'm actively working on it, but here's the list:
1. Evaluate performance impact. I think the last pull needed before a check is kubernetes/kubernetes#34047. @ericchiang/CoreOS signed up for the testing.
2. Make the user experience for it reasonable. Default roles and bindings, admin-level access for bootstrapping. This is moving forward and will be helped by 3.
3. Use RBAC for at least some controllers to prove the principle. This will rely on privileged bootstrap identities authenticated via certificates. @dims is helping with this as part of the "shut off the insecure port" work.
4. Turn RBAC on in an e2e run in CI. This can't realistically be done until 3 is complete.
@kubernetes/sig-auth Anyone want to add/remove from the list?
I'd adjust 3: kube-controller-manager can be root, which is the case today for ABAC. We would need to do something to tweak the perms for the default service account in kube-system so that addons have the permissions they need. Read-all-namespaces perms would be fine as a start (a sketch follows below).
I'd like 4 to cover most GCE e2e tests.
We should have an idea of how to announce that RBAC is the default for kube-up, and ask the minikube people to consider having an option for it.
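A sketch of the kind of tweak meant in 3, assuming the stock view cluster role is in place (the binding name is illustrative):

```bash
# give kube-system's default service account read access across namespaces
kubectl create clusterrolebinding kube-system-default-view \
  --clusterrole=view \
  --serviceaccount=kube-system:default
```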
@deads2k @erictune thank you.
We at eBay have set up an RBAC policy list for the complete set of cluster controllers.
https://github.com/uruddarraju/kubernetes-rbac-policies/tree/master
Thought it might be helpful to add a few here.
Also, from my experience after coming up with all the policies needed, I think there is a need for an explicit Deny on a particular set of Resources within an APIGroup. That way, as an admin, I would be able to give a user deploying applications within a namespace access to all resources in that namespace except resourceQuotas, which gives me enough control over cluster utilization. Otherwise, my policy is very tough to express: I have to explicitly list every resource except resourceQuotas, similar to what I did here: https://github.com/uruddarraju/kubernetes-rbac-policies/blob/master/roles/reader-all.yaml
I wanted the resourceQuotas to be at the Cluster level for this reason.
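Compressed, the enumeration workaround described above looks like this (a sketch; the resource list is abbreviated and the names are illustrative):

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: Role
metadata:
  name: reader-all-but-quota   # illustrative
  namespace: team-a            # illustrative
rules:
- apiGroups: [""]
  # every resource the team needs, spelled out one by one;
  # resourcequotas is deliberately absent
  resources: ["pods", "services", "secrets", "configmaps", "endpoints", "events"]
  verbs: ["get", "list", "watch"]
EOF
```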
I think enumerated policy is almost always the right way to go. The meaning of * - foo changes as new resources are added, without any decision on the policy maker's part. The fact that a resource is being omitted means the role isn't intended to grant unlimited access, but an "everything except foo" rule does just that for every future resource, no matter how powerful.
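For contrast, the wildcard form whose meaning silently drifts as new resources land (a sketch; RBAC rules are purely additive, so there is no "* minus foo" syntax):

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  name: read-everything        # illustrative
rules:
- apiGroups: ["*"]
  resources: ["*"]             # grows to cover every future resource, however powerful
  verbs: ["get", "list", "watch"]
EOF
```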
If we want to allow user-installed thirdpartyresources, then some kind of wildcard is needed?
ThirdPartyResources are cluster-scoped objects which change the discovery doc and affect every API client of the cluster. Any user with the power to create a ThirdPartyResource ought to have the power to create ClusterRoles to go with them. That allows a TPR provider to ship a yaml file which will create both, so that we can have strict enumerations.
Also, you have no way of knowing ahead of time which TPRs are privileged and which are not.
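A sketch of the "one yaml file creates both" idea, using the TPR shape of the day (the group and resource names are illustrative):

```bash
cat <<'EOF' | kubectl create -f -
# the third-party type itself (cluster-scoped)
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com   # yields resource "crontabs" in group "stable.example.com"
description: An illustrative third-party type
versions:
- name: v1
---
# the strictly enumerated ClusterRole shipped alongside it
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  name: crontab-reader                # illustrative
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch"]
EOF
```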
But there were discussions about a namespaced version of TPR. I think we will need one. We will need an authz story for it.
Types that only exist within one namespace? That's... concerning. TPR defines a type cluster-wide, the instances of which are already namespaced. We should move this discussion to an issue to keep the traffic here low.
But there were discussions about a namespaced version of TPR. I think we will need one. We will need an authz story for it.
I think that would actually be implemented as a separate API server running inside of the namespace. Project admins shouldn't be able to mutate discovery docs and having different discovery docs per namespace is weird.
what is the story as a client for what you described, @deads2k - is this api server federated with the main api?
A project-admin cannot be allowed to modify the main API server. They would need to create a route which pointed to their new API server and selectively choose which one they wanted to talk to. We can talk about trying to make a smarter client that can do this, but a project-admin should never have the power to get the main API server to proxy requests or discovery information to a pet project.
Do users on OpenShift write custom controllers that use TPR for storage and are accessible via kubectl?
Custom controllers, yes. We have a couple "add-ons" related to our online offering that don't live in-tree with origin and so they have to define their own roles and the like.
TPRs, no. Knowing how fragile they are, we've avoided using them until they work well. See https://github.com/kubernetes/features/issues/95 for a consolidated list.
Types are still alpha for 1.5
Ping. Is there a docs PR for this?
Another ping. Docs?
docs for the current state already exist at http://kubernetes.io/docs/admin/authorization/
I don't see the new default roles in the documentation. Default roles were mentioned in the 1.5 release notes.
@ericchiang @deads2k which stage does this feature target for 1.6 (alpha/beta/stable)?
beta
@jimmycuadra docs for 1.6 in progress at https://github.com/kubernetes/kubernetes.github.io/pull/2618 (preview at https://deploy-preview-2618--kubernetes-io-master-staging.netlify.com/docs/admin/authorization/rbac/)
@liggitt @ericchiang @deads2k please provide the release notes and documentation PR or link in the features spreadsheet.
@erictune will RBAC be default in 1.8 clusters onwards as it moves to stable?
@cjcullen
@apsinha when we tried to turn on RBAC in kube-up, that change was reverted due to concerns around backward compatibility. Turning on RBAC means no API access by default, and that breaks any tutorials or instructions that target old clusters which historically haven't turned on API authorization.
We're working with groups like SIG Apps to get more apps to ship with RBAC profiles, and many install tools, such as kubeadm, already deploy RBAC-enabled clusters, but I imagine there will still be the same concerns about making it the default.
@ericchiang that means - RBAC won't be enabled by default in 1.8, right?
@ericchiang What about enabling RBAC by default but disabling it in the cases where you have to?
There is a pain threshold when enabling it, but I think it's worse to not enable this in the future.
There are easy commands to effectively turn it off but still have it enabled at the cluster level.
That is the approach I would prefer: default on, document how to make a permissive binding for workloads running in-cluster (the primary error source), and still let folks opt out if they want.
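The permissive binding in question is essentially a one-liner along the lines of what the docs describe (a sketch; the binding name is illustrative):

```bash
# restore pre-RBAC permissiveness for everything running in-cluster:
# one binding grants cluster-admin to all service accounts at once
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --group=system:serviceaccounts
```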
TL;DR: this is IMO the best time to switch generally. kubeadm did this in v1.6 and it has worked pretty well.
It's on by default in some deployments (gke/gce, kubeadm). It should continue to be up to individual deployments to enable.
Suppose we made RBAC on by default. What would the instructions look like that we would give users who wanted to simulate the old behavior, for backwards-compatibility reasons?
Is it really just as simple as this?

```bash
# simulate old pre-RBAC behavior:
# grant cluster-admin to every service account in every namespace
nses=$(kubectl get namespaces -o name | cut -f 2 -d "/")
for ns in $nses
do
  sas=$(kubectl get sa -n "$ns" -o name | cut -f 2 -d "/")
  for sa in $sas
  do
    kubectl create clusterrolebinding "backcompat-$RANDOM" \
      --clusterrole=cluster-admin --serviceaccount="$ns:$sa"
  done
done
```
@ericchiang @deads2k @liggitt @kubernetes/sig-auth-feature-requests so, is this feature delivered as Beta or Stable for 1.8?
If it's not enabled by default, it can't be "stable" - we have to update the stage/* label.
"enabled by default" by which deployment mechanisms? The API is on by default, but the choice of what particular authorizer to use is a decision left up to the deployment mechanisms themselves.
In this case, it's on in kubeadm and kube-up (https://github.com/kubernetes/kubernetes/pull/51367), so it is on, but moving forward you'll probably want to consider how to handle per-deployment choices with respect to "stable".
If it's not enabled by default, it can't be "stable"
That is incorrect. Optional != unstable. The RBAC API is at v1, and the RBAC authorizer has been tested for correctness and at scale for performance, so the feature can accurately be described as stable.
That is incorrect. Optional != unstable. The RBAC API is at v1, and the RBAC authorizer has been tested for correctness and at scale for performance, so the feature can accurately be described as stable.
@liggitt sorry about the wording confusion. I'm speaking about "stable" status as the feature definition, not about code stability.
We can't declare the feature as "stable" or "GA" if it's not available by default.
We can't declare the feature as "stable" or "GA" if it's not available by default.
It is available by default. It is not enabled by default (if you start apiserver with no flags, RBAC is not the default authorization mode, nor will it be for compatibility reasons). It is the default for most deployments (kube-up on GCE/GKE, kubeadm, etc), but it remains the decision of a particular deployment whether to use the RBAC authorizer or not.
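Concretely, the choice lives in a single apiserver flag, and the compiled-in default stays permissive for compatibility (a sketch; other flags elided):

```bash
# compatibility default if a deployment passes nothing:
kube-apiserver --authorization-mode=AlwaysAllow

# what RBAC-enabled deployments such as kubeadm pass instead:
kube-apiserver --authorization-mode=Node,RBAC
```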
I don't think "enabled by default" has anything to do with stability.
@smarterclayton again, it's not about the "code stability". Let's read "a stable feature" as "a GA feature".
This is an optional, GA and stable feature :)
(Thanks to @liggitt & the rest of sig-auth's great work!)
@luxas good, let's do so.
There are multiple valid authorization options:
https://kubernetes.io/docs/admin/authorization/#authorization-modules
released as v1 and stable in 1.8, closing