Prometheus-operator: kube-prometheus: RBAC questions

Created on 22 May 2018 · 2 comments · Source: prometheus-operator/prometheus-operator

#1)

prometheus-operator/helm/ has rbacEnable - e.g. in prometheus-operator/values.yaml.

Can/should kube-prometheus also have such functionality (to optionally specify that RBAC isn't enabled)?
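For reference, a minimal sketch of what such a toggle looks like on the helm side (the rbacEnable name comes from prometheus-operator/values.yaml; the behavior described in the comment is an assumption about how a hypothetical kube-prometheus equivalent would work):

```yaml
# Sketch of the helm chart's toggle (rbacEnable, per prometheus-operator/values.yaml).
# A hypothetical equivalent in kube-prometheus would skip generating the
# (Cluster)Role and (Cluster)RoleBinding manifests when set to false.
rbacEnable: false
```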


#2)

When I run kube-prometheus v0.19.0 in a K8s cluster on AWS where RBAC is not enabled, the deploy script prints an "Error from server (Forbidden): error when creating ..." error for each RBAC manifest - e.g.:
Error from server (Forbidden): error when creating "example-dist/kubeadm/manifests/node-exporter/node-exporter-cluster-role.yaml": clusterroles.rbac.authorization.k8s.io "node-exporter" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["tokenreviews"], APIGroups:["authentication.k8s.io"], Verbs:["create"]} PolicyRule{Resources:["subjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]}] user=&{kube-admin [kube-aws system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

That type of error is printed for each of the following 9 files:

~/prometheus-operator/contrib/kube-prometheus/example-dist/kubeadm/manifests
$ find . -type f | xargs egrep "^kind: ClusterRole$"
./kube-state-metrics/kube-state-metrics-cluster-role.yaml:kind: ClusterRole
./node-exporter/node-exporter-cluster-role.yaml:kind: ClusterRole
./prometheus-k8s/prometheus-k8s-cluster-role.yaml:kind: ClusterRole
./prometheus-operator/prometheus-operator-cluster-role.yaml:kind: ClusterRole
~/prometheus-operator/contrib/kube-prometheus/example-dist/kubeadm/manifests
$ find . -type f | xargs egrep "^kind: Role$"
./kube-state-metrics/kube-state-metrics-role.yaml:kind: Role
./prometheus-k8s/prometheus-k8s-role-config.yaml:kind: Role
./prometheus-k8s/prometheus-k8s-role-default.yaml:kind: Role
./prometheus-k8s/prometheus-k8s-role-kube-system.yaml:kind: Role
./prometheus-k8s/prometheus-k8s-role-namespace.yaml:kind: Role

That error seems to be benign: all three kube-prometheus apps (Prometheus, Alertmanager, and Grafana) still work as expected in my K8s cluster on AWS, so I'm just ignoring it. If I'm not going to enable RBAC, I believe the only way to avoid those errors would be some sort of functionality to tell kube-prometheus that RBAC isn't enabled. Is that right?
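As an aside, one way to check up front whether a cluster serves the RBAC API group at all is to filter the output of kubectl api-versions. This is only a sketch: on a live cluster you would run the kubectl command in the comment; here the same filter is demonstrated against sample output so the logic is visible. Note that the API group being served does not by itself prove RBAC is in the apiserver's --authorization-mode list.

```shell
# On a live cluster:
#   kubectl api-versions | grep rbac.authorization.k8s.io
# Sample api-versions output (an assumption, for demonstration only):
api_versions='apps/v1
authentication.k8s.io/v1
rbac.authorization.k8s.io/v1'

# Check whether the RBAC API group appears in the list.
if printf '%s\n' "$api_versions" | grep -q '^rbac\.authorization\.k8s\.io/'; then
  rbac_status="RBAC API group is served"
else
  rbac_status="RBAC API group not found"
fi
echo "$rbac_status"
```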


#3)

Regarding the prometheus-operator/create-minikube.sh script...

Here's the command I'm doing to start minikube on my Mac (I'm using minikube v0.25.2):
minikube start --kubernetes-version v1.9.4 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0

prometheus-operator/getting-started.md has a link to the Prometheus Operator RBAC guide, which has a link to K8s Admin Authorization, which says "To enable RBAC, start the apiserver with --authorization-mode=RBAC".

So I believe this means RBAC is not enabled on my minikube, right? (Because my minikube start command specifies Webhook.)

Note that I'm not doing the kubectl apply in create-minikube.sh for minikube-rbac.yaml.
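One subtlety worth calling out: RBAC is controlled by the apiserver's --authorization-mode flag, not the kubelet's, so passing kubelet.authorization-mode=Webhook does not decide the question. A sketch of how to check, with the parse demonstrated on a sample flag value (the flag value and pod name are assumptions):

```shell
# On a live minikube you could inspect the apiserver pod directly:
#   kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep authorization-mode
# Sample flag value (an assumption, for demonstration only):
flag='--authorization-mode=Node,RBAC'

# Strip the "--authorization-mode=" prefix, then look for RBAC in the
# comma-separated mode list.
modes=${flag#*=}
case ",$modes," in
  *,RBAC,*) verdict="RBAC enabled" ;;
  *)        verdict="RBAC not enabled" ;;
esac
echo "$verdict"
```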


Side note: regarding another RBAC issue, I added some comments today to #1324.


#4) (I don't think this one is RBAC-related, but I'm still lumping it in here.)

kube-prometheus v0.18.0 had minikube & self-hosted.

But kube-prometheus v0.19.0 has kubeadm & bootkube.

Can someone please confirm:

  • whether bootkube is the new equivalent for self-hosted?
  • what is the purpose of the Service created by bootkube/kube-prometheus.jsonnet? How can I tell whether I need that Service (when I'm running kube-prometheus in a K8s cluster on AWS)?


All 2 comments

Can/should kube-prometheus also have such functionality (to optionally specify that RBAC isn't enabled)?

I believe not, because the objects can still be created successfully, and anyone using Kubernetes without RBAC is running a cluster that is effectively already compromised: you're allowing anyone and anything to create root containers on any host. Security is already hard enough to get right; I don't want to make it easier, or encourage it in the first place, to do the wrong thing.

whether bootkube is the new equivalent for self-hosted?

Yes. In fact, we changed this because whenever we said self-hosted we actually meant bootkube (back in the day, bootkube was the only tool that created self-hosted clusters, so the terms were truly equivalent; nowadays other tools are starting to do the same thing).

what is the purpose of the Service created by bootkube/kube-prometheus.jsonnet? How can I tell whether I need that Service (when I'm running kube-prometheus in a K8s cluster on AWS)?

Ideally we have docs for the platform you are running on; if not, we would very much like to add them :slightly_smiling_face:. This is typically specific to the tool you created your cluster with, as most tools use different labeling schemes for the Kubernetes components.
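For what it's worth, here is a hedged sketch of the kind of headless Service that jsonnet file generates so Prometheus can discover and scrape a control-plane component. The name, the label selector (k8s-app: kube-controller-manager), and port 10252 are assumptions; your provisioning tool may label the pods differently, in which case a Service like this would select no endpoints and you would need to adapt or omit it:

```yaml
# Sketch only: a headless Service exposing kube-controller-manager pods to
# Prometheus service discovery. Selector labels and metrics port vary by the
# tool that provisioned your cluster.
apiVersion: v1
kind: Service
metadata:
  name: kube-controller-manager-prometheus-discovery
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  clusterIP: None          # headless: endpoints only, no virtual IP
  selector:
    k8s-app: kube-controller-manager
  ports:
  - name: http-metrics
    port: 10252
    targetPort: 10252
```

If the control-plane components are not running as pods in the cluster (e.g. on a managed or externally provisioned control plane), there are no endpoints to select and the Service serves no purpose.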

thank you very much @brancz for the info.

Your points are well taken - we're enabling RBAC on our clusters. :)

fyi for anyone else about #3) :

  • I did some more digging and found that, starting with minikube v0.26.0, the default bootstrapper is kubeadm, which enables RBAC by default.
  • With minikube v0.28.0 I did minikube start --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 and then I did:
$ kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep mode
    - --authorization-mode=Node,RBAC 