Charts: [stable/nginx-ingress] #9637 Breaks installing nginx with rbac without cluster-level access

Created on 25 Mar 2019 · 6 comments · Source: helm/charts

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Version of Helm and Kubernetes:
Helm 2.13, all versions of k8s I tried

Which chart:
nginx-ingress as of 1.0.1, from #9637

What happened:

The cluster is set up with a Tiller in a namespace, without cluster access (namespace myproject; this is OpenShift, but that's irrelevant). Tiller has a role that grants it permission to modify things only within the namespace (i.e. no cluster-level resources like ClusterRole).

  1. If you set rbac.create=true as well as controller.scope.enabled=true, you get the new set of ClusterRole and ClusterRoleBinding plus the Role and RoleBinding. This fails.
  2. If you set rbac.create=true and controller.scope.enabled=false, you get the previous set of cluster permissions plus the Role and RoleBinding. This fails.
  3. If you set rbac.create=false and controller.scope.enabled=false, you get no roles at all, cluster or otherwise. That leaves nginx-ingress unable to talk to k8s, so it also fails. (A dry-run sketch for inspecting what each combination renders follows this list.)
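You can check which RBAC objects each combination would create without actually installing anything, using a Tiller dry run (the release name here is illustrative; swap the --set values for cases 2 and 3):

```
# Render the chart via Tiller without installing, then list the RBAC kinds
TILLER_NAMESPACE=myproject helm install stable/nginx-ingress \
  --name nginx --dry-run --debug \
  --set rbac.create=true \
  --set controller.scope.enabled=true \
  | grep '^kind:' | grep Role
```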

What you expected to happen:

  1. No cluster roles
  2. Role and role binding
  3. nginx-ingress able to communicate with the k8s API within its namespace (see the verification sketch after this list)
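Once things work as expected, the namespaced access can be verified directly; a minimal check, assuming the controller's service account ends up named nginx-nginx-ingress (the actual name depends on the release name):

```
# Ask the API server whether the controller's service account can list
# pods in its own namespace (the account name after --as is an assumption)
kubectl auth can-i list pods \
  --as=system:serviceaccount:myproject:nginx-nginx-ingress \
  -n myproject
```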

How to reproduce it (as minimally and precisely as possible):

kubectl apply -f the following file in the myproject namespace:

```
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myproject  # ServiceAccount subjects require an explicit namespace
```

And install Helm with:

```
helm init --service-account tiller --upgrade --tiller-namespace=myproject
```
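To confirm Tiller came up in the restricted namespace (the label selector matches what helm init sets on the Tiller deployment):

```
kubectl get pods -n myproject -l app=helm,name=tiller
```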

Then, when you install nginx-ingress, prefix the command with TILLER_NAMESPACE=myproject and set rbac.create=true and controller.scope.enabled=true, as shown below.
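A minimal sketch of the failing install (the release name nginx is illustrative):

```
# This combination should create only namespaced RBAC, but the chart
# also renders cluster roles, which the namespace-scoped Tiller cannot create
TILLER_NAMESPACE=myproject helm install stable/nginx-ingress \
  --name nginx \
  --namespace myproject \
  --set rbac.create=true \
  --set controller.scope.enabled=true
```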

Anything else we need to know:

All 6 comments

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

There are two different issue tickets describing this problem: the original #11033, and later this one, #12510.

The problem didn't even warrant the change introduced in #9637, as it amounted to nothing more than extra log noise, and it has long been fixed upstream (https://github.com/kubernetes/ingress-nginx/pull/3887). Yet the latest nginx-ingress chart has been shipping the useless extra cluster roles that break scoped deployments for about five months now.

Does anyone actually read the follow-up issues or maintain the nginx-ingress chart?
@norwoodj @unguiculus

What we've ended up having to do on our end is maintain our own modified copy of the chart, without this broken behaviour in it. This is not the only thing that's forced us to keep our own copy, so at this point we're pretty far removed from the chart in this repository. Multiple PRs have gone stale, as have issues, due to inactivity.

I believe the governance process around chart ownership should be revisited. I already have other PRs on this repo that are awaiting approval.
Hopefully, since more people have raised this issue, it will be reviewed.

Just wanted to echo the comments on this issue: users with security concerns would deploy Tiller as non-cluster-admin, as suggested here: https://github.com/helm/helm/blob/master/docs/rbac.md#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-only-in-that-namespace

We currently work around this by removing the additional cluster roles from the Helm chart, which is far from desirable from a deployment perspective:
```
# Fetch and unpack the chart locally
helm fetch --untar --untardir ./charts stable/nginx-ingress
mkdir ./charts/nginx-ingress/manifests
cd charts/nginx-ingress/

# Render the templates to plain manifests, then drop the offending cluster roles
helm template --values $VALUES_URL --output-dir ./manifests . --set controller.scope.enabled=true
rm -f manifests/nginx-ingress/templates/scoped-clusterrole*

# Apply the remaining, namespace-scoped manifests directly
kubectl apply -f manifests/nginx-ingress/templates
```

For those who come here: this is fixed in the stable/nginx-ingress chart, versions 1.33.0 and above:

```
helm install --name nginx stable/nginx-ingress --debug --dry-run --set rbac.create=true --set rbac.scope=true --version 1.33.0 | grep ^kind: | grep Role
kind: Role
kind: RoleBinding
```
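If you already have a scoped release from an older chart, upgrading it in place should pick up the fix; a sketch, assuming the release is named nginx:

```
# Upgrade the existing release to the fixed chart version
TILLER_NAMESPACE=myproject helm upgrade nginx stable/nginx-ingress \
  --version 1.33.0 \
  --set rbac.create=true \
  --set rbac.scope=true
```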