Ingress-nginx: [feature request] rbac role manifest

Created on 13 Feb 2017 · 14 comments · Source: kubernetes/ingress-nginx

It's unclear what privileges the ingress controllers require.

help wanted

Most helpful comment

fwiw, I am working on an audit2rbac (name shamelessly taken from audit2allow) webhook authorizer that you can run to gather API requests per user and generate RBAC roles that would cover the requests that were made. Adding that to your authz chain and exercising an app would make it easier to develop those roles (though rarely used permissions might still be tricky to trigger/capture)

All 14 comments

Couldn't agree more @jhorwit2. Ideally all cluster services should ship sample RBAC roles, or at least a manifest of API resources and verbs they rely on.

It's the old chestnut from SELinux and AppArmor: if the software authors don't publish their security requirements, every security-conscious end user is left doing what amounts to a code audit to engineer a suitable least-privilege policy.

Is this something the automated build and test tools could do? E.g. monitor API calls during testing and emit a manifest, or a generated RBAC policy covering just what the tests need to run. That could be both a good leg-up for writing RBAC policies and useful as an audit tool.

As k8s matures it seems (to me) especially important for the core project to set a positive, secure-by-default example. Right now I see lots of early k8s users who are surprised and sometimes shocked when you tell them every one of their containers is getting, by default, an unrestricted, admin-strength Service Account token. Container break-outs make news; it is probably only a matter of time before k8s gets unnecessary negative PR for its default security stance.

Platforms like Tectonic already enable RBAC by default, and having to use a globally very-permissive cluster role to make things like nginx-ingress and kube-lego work is kind of worrying. I would love to see more effort on this and, eventually, an included nginx-rbac.yaml example for people deploying this, as well as documentation, which I can help with!

Maybe helpful as a starting point for people ending up here after googling: this set of RBAC policies seems to work for me with the nginx controller 0.9.0-beta.2.

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  name: ingress
rules:
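# The controller lists and watches these resources to build the nginx configuration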
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  verbs:
  - list
  - watch
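# Individual Ingress objects can also be fetched directly (get)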
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
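# Allows the controller to emit Events for the Ingresses it processes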
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
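# The load-balancer address is published via the Ingress status subresource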
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: Role
metadata:
  name: ingress-ns
  namespace: nginx-ingress
rules:
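# Namespace-local permissions; the endpoints write access is most likely
# used for leader election between controller replicas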
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: RoleBinding
metadata:
  name: ingress-ns-binding
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-ns
subjects:
  - kind: ServiceAccount
    name: default
    namespace: nginx-ingress
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: ingress-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
  - kind: ServiceAccount
    name: default
    namespace: nginx-ingress

In my case all ingress-related resources are in the namespace 'nginx-ingress', similar to how https://github.com/jetstack/kube-lego/tree/master/examples/nginx/nginx is structured.

(EDIT: Locked down access to resources inside the nginx-ingress namespace as well)
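Note that the bindings above attach the roles to the 'default' ServiceAccount, so every pod in the nginx-ingress namespace inherits these permissions. A minimal sketch of a dedicated account instead (the name below is hypothetical; the subjects above and the controller Deployment's serviceAccountName would need to point at it):

apiVersion: v1
kind: ServiceAccount
metadata:
  # hypothetical name - reference it from the RoleBinding/ClusterRoleBinding subjects
  name: nginx-ingress-serviceaccount
  namespace: nginx-ingress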

After some more painful debugging:

diff --git a/nginx-ingress-controller.yml b/nginx-ingress-controller.yml
index 06e3c77..29f2502 100644
--- a/nginx-ingress-controller.yml
+++ b/nginx-ingress-controller.yml
@@ -14,14 +14,23 @@ rules:
   resources:
   - configmaps
   - secrets
-  - services
   - endpoints
   - ingresses
   - nodes
+  - pods
   verbs:
   - list
   - watch
 - apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - list
+  - watch
+  - get
+  - update
+- apiGroups:
   - extensions
   resources:
   - ingresses

The 'update services' permission was especially hard to understand; ultimately it is only needed when using named ports. See https://github.com/kubernetes/contrib/pull/766#issuecomment-210206052 for details.

Thank you for working this out @ankon ❤️, I know how painful it is to reverse-engineer permissions from code. I hope we can get this into the Ingress Controller documentation or examples - could you make a PR to add it and see what @aledbf thinks?

The 'update services' permission was especially hard to understand; ultimately it is only needed when using named ports. See kubernetes/contrib#766 (comment) for details.

The Ingress Controller needs update permission on Services just to use annotations to duplicate the named-port information from the Pod spec. This apparently stems from a pragmatic workaround for the limitation that Endpoints don't have a place to store the named port specified in the Pod.
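To illustrate (all names hypothetical): with a named targetPort, the Service only carries the port's name, and the number it maps to lives in each Pod's spec, which is exactly the mapping the controller has to resolve.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: http    # named port - the numeric value is defined per Pod
---
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - name: http        # 'http' resolves to 8080 for this Pod only
      containerPort: 8080

The controller cached that http-to-8080 mapping in a Service annotation, hence the need for 'update services'; nothing guarantees a second Pod behind the same selector maps 'http' to the same number, which is the failure mode described below.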

https://github.com/kubernetes/contrib/pull/766/commits/d455524b7bb09b8de95334c2d015e27e4096cf23#r59898626


This workaround may also be a broken approach: the annotation appears to assume that all Pods behind a Service use the same numeric port for a particular named port. A Service label selector could easily cover Pods from two Deployments, one where e.g. named port 'foo' is numeric port 80 and another where 'foo' is numeric port 8080 (e.g. a blue/green deployment scenario). It might also break the Ingress during regular rolling updates of a Deployment, if the update changes the numeric port behind a Pod's named port. It seems like this Service annotation approach might fail in these cases? Or am I missing something @aledbf?

Or am I missing something @aledbf?

No. I need to review this and remove the annotation

could you make a PR to add it and see what @aledbf thinks?

:+1: to open a PR

fwiw, I am working on an audit2rbac (name shamelessly taken from audit2allow) webhook authorizer that you can run to gather API requests per user and generate RBAC roles that would cover the requests that were made. Adding that to your authz chain and exercising an app would make it easier to develop those roles (though rarely used permissions might still be tricky to trigger/capture)

Hi

Thanks very much for this! I had to add the following to the ClusterRole in order to allow the ingress controller to read its configmap:

~~~
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
~~~

Just following up on this a bit. I ran into this issue over the weekend and saw that there were mentions of creating a pull request for an example, but none had been made or accepted. I created an example and pull request #747 based on what I found here and my own testing. I'd appreciate any feedback on what I put together.

Question - is this closable now that we have an example? Or does another step need to be taken?

Closing. There is now an example showing how to configure RBAC.

fwiw, I am working on an audit2rbac (shamelessly taken from audit2allow)

For anyone interested, an initial version is now available at https://github.com/liggitt/audit2rbac

If you are using Helm and the nginx-ingress chart, you just need to enable RBAC as described in the chart's configuration section.

_Just leaving this comment because it was not clear to me and I first tried to implement the example files on my own. Maybe it's helpful to someone else!_
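For reference, a minimal values sketch, assuming the switch is exposed as rbac.create the way the chart's configuration section describes:

# values.yaml - assumed key, check the chart's configuration section
rbac:
  create: true

Passed with 'helm install -f values.yaml' or simply '--set rbac.create=true'.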
