The ability to set --authorization-mode=Webhook for kubelet in the cluster specs.
Currently, setting anonymous-auth=false for kubelet switches it to cert auth. We need --authorization-mode=Webhook in order to allow serviceaccount tokens to communicate with kubelet.
This would, for example, fix the Prometheus kubelet exporter, which currently fails with "server returned HTTP status 401 Unauthorized" on a kops cluster with anonymous-auth=false.
I see there is already a flag for this https://github.com/kubernetes/kops/blob/release-1.9/pkg/apis/kops/componentconfig.go#L28
But this is not really supported by kops yet; many things would break.
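For context, a minimal sketch of what the cluster spec would look like (via kops edit cluster), assuming the authorizationMode field from the componentconfig linked above were fully wired through, which is exactly what this issue is asking for:
spec:
  kubelet:
    anonymousAuth: false
    # desired setting; kops support for this is the subject of this issue
    authorizationMode: Webhook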
The remedy for the kubelet is to switch from scraping metrics over HTTPS to the HTTP port, which does not require authorization (for the /metrics endpoint).
The equivalent prometheus-operator chart value is exporter-kubelets.https=false.
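A hedged sketch of how that value can be set in the chart's values file (based only on the exporter-kubelets.https value mentioned above; check the chart's values for the authoritative key):
# values.yaml excerpt for the prometheus-operator chart:
# scrape the kubelet's read-only HTTP port instead of HTTPS
exporter-kubelets:
  https: false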
Regarding the authorization-mode flag, as you mentioned it is available from kops 1.9.0. The missing piece could be in the out-of-the-box RBAC rules. Users have reported that the following permissions are missing from system:node:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  verbs:
  - create
  - get
source: https://github.com/kubernetes/kops/issues/3891#issuecomment-346117423
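One way to check whether a given identity actually has those permissions (the service account name/namespace below are placeholders for whatever Prometheus runs as in your cluster):
$ kubectl auth can-i get nodes --subresource=metrics --as=system:serviceaccount:monitoring:prometheus-k8s
$ kubectl auth can-i create nodes --subresource=proxy --as=system:serviceaccount:monitoring:prometheus-k8s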
Thanks @krogon, switching to HTTP fixed Prometheus for now.
@jeyglk You can also resolve this by passing the --authentication-token-webhook=true argument to the kubelet. This flag enables a ServiceAccount token to be used to authenticate against the kubelet(s). However, this flag is only supported in the next version of kops (higher than v1.9.6).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I've found that I still have to bind the appropriate nodes permissions to the kubelet-api user in order to resolve this. A ClusterRoleBinding of the system:kubelet-api-admin role already provided by Kubernetes to the kubelet-api user seems to be sufficient, though.
I'm using the following config for kubelet:
kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true
  authorizationMode: Webhook
kops version 1.10.0.
I think it would make sense to just bind kubelet-api to system:kubelet-api-admin as part of kops, since that role is already provided and appears to be intended for exactly this purpose. Everything just works once I do that manually (including denying default serviceaccounts and unauthenticated users access to the kubelet, as I want). A sketch of the binding follows.
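For anyone who wants to apply that manually in the meantime, a minimal sketch of the binding described above (the binding name is arbitrary; the ClusterRole and the kubelet-api user already exist as noted):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # arbitrary name for the binding
  name: kubelet-api-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api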
Same as you @devyn, I'm using the following configuration:
kubelet:
  anonymousAuth: false
  authorizationMode: Webhook
  authenticationTokenWebhook: true
together with kube-prometheus.
I have the ClusterRole system:kubelet-api-admin:
$ kubectl get clusterrole system:kubelet-api-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-12-13T10:45:49Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kubelet-api-admin
  resourceVersion: "60"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Akubelet-api-admin
  uid: 40f9fe23-fec4-11e8-bad8-0ed9e9ae5b3c
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - proxy
- apiGroups:
  - ""
  resources:
  - nodes/log
  - nodes/metrics
  - nodes/proxy
  - nodes/spec
  - nodes/stats
  verbs:
  - '*'
However, when I try to proxy like this I get an error:
$ kubectl port-forward svc/grafana 3000
error: error upgrading connection: unable to upgrade connection: Forbidden (user=kubelet-api, verb=create, resource=nodes, subresource=proxy)
@mazzy89 You're close, you also need a ClusterRoleBinding. Here's what I use to get logs/exec to work:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  - nodes/log
  - nodes/spec
  - nodes/proxy
  verbs:
  - create
  - get
  - update
  - patch
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api
I had to simply add the ClusterRoleBinding. Now it works. Thank you @markine
A bit off kilter here, but kops 1.11 recommends setting anonymousAuth: false. Before I do, I would like to review what _is_ currently authenticating anonymously. Any good way to do this? There is this issue with Prometheus, cool, but I wonder what else will break.
@jurgenweber: I'm not sure there is a good way to audit anonymous access. One of the things mentioned in relation to the CVE that prompted the recommendation to disable anonymous auth was that there was little visibility into what might have exploited it.
The main impact I saw was around metrics-server's API aggregation needing to authenticate, the need to enable this webhook auth mode, and the related RBAC, which I think the metrics-server Helm chart now incorporates.
@jhohertz Could you please share exactly what related RBAC you added? I am also hitting this problem with metrics-server. Thanks.
Best Regards,
VietNC
@vietwow: So there were two bits done... one was adding this (as seen in chart now): https://github.com/helm/charts/blob/master/stable/metrics-server/templates/aggregated-metrics-reader-cluster-role.yaml
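Roughly, that template defines an aggregated ClusterRole along these lines (paraphrased from memory, so treat the linked file as authoritative; the chart prefixes the name and newer versions may list additional resources):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # the chart prefixes this name with the release name
  name: system:aggregated-metrics-reader
  labels:
    # aggregate these permissions into the built-in view/edit/admin roles
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  verbs:
  - get
  - list
  - watch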
And the other was to give kubelet-api access, per: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  - nodes/log
  - nodes/spec
  - nodes/proxy
  verbs:
  - create
  - get
  - update
  - patch
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api
The initial request for the --authorization-mode flag was added with PR #4924
/close
@rdrgmnzs: Closing this issue.
In response to this:
The initial request for the --authorization-mode flag was added with PR #4924
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.