Kind: RBAC role for running kubectl exec in a pod in kind

Created on 8 Jul 2020 · 10 comments · Source: kubernetes-sigs/kind

I'm trying to run kubectl exec from inside a pod in a kind cluster. The pod in question has an ubuntu container with the kubectl executable, and a kube-proxy sidecar.

Since kubectl in the ubuntu container is talking to the kube-proxy sidecar container, there is no special certificate or auth in my ~/.kube/config inside the ubuntu container.

I am able to bind the built-in cluster-admin ClusterRole to the service account associated with the pod, and most kubectl commands work from inside the ubuntu container: deleting other pods in the kind cluster, deploying with helm, inspecting pod logs, running kubectl top against metrics-server, and so on.

Unfortunately, I am unable to run kubectl exec from inside the pod against any other pod in kind. I get:

error: unable to upgrade connection: Forbidden

From outside of the pod (i.e. on my Mac), I can use kubectl exec on any pods running in my kind cluster with no issues.

What am I missing about the necessary RBAC permissions for kubectl exec? I thought cluster-admin was the most permissive RBAC role possible.
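(For reference, kubectl exec goes through the pods/exec subresource, and the request is a POST, so the verb RBAC checks is create. A minimal Role granting just that would look something like the sketch below; the role name and namespace are illustrative.)

```yaml
# Minimal RBAC usually needed for kubectl exec: the "create" verb
# on the pods/exec subresource (plus read access to pods).
# Role name and namespace here are illustrative, not from this thread.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
```

cluster-admin already includes all of this, which is a hint the problem described below is not actually RBAC.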

I have also tried to create a custom ClusterRole, but with the same results:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: workspace
rules:
- apiGroups:
  - ""
  - extensions
  - policy
  - apps
  - batch
  - rbac.authorization.k8s.io
  - roles.rbac.authorization.k8s.io
  - apiextensions.k8s.io
  - admissionregistration.k8s.io
  - apiservices.apiregistration.k8s.io
  - apiregistration.k8s.io
  - metrics.k8s.io
  - authorization.k8s.io  
  resources: ["*"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection", "use", "bind", "escalate", "impersonate"]

Thank you!

kind/support

All 10 comments

what exactly is a "kube-proxy sidecar" ??

what versions are kubectl and the cluster? kind?

kubernetes bootstraps a set of default ClusterRoles, including cluster-admin, which should be maximally permissive.

The example below binds a custom ClusterRole but I've also tried cluster-admin.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: workspace
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      name: workspace
  template:
    metadata:
      labels:
        name: workspace
    spec:
      serviceAccountName: workspace-proxy
      containers:
        - name: kubectl-proxy
          image: bitnami/kubectl
          resources:
            requests:
              memory: "64Mi"
              cpu: "10m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          args:
            - proxy
            - "-p"
            - "8080"        
        - name: workspace
          image: ...
          imagePullPolicy: Always
          securityContext:
            privileged: true          
          volumeMounts:
            - name: docker-sock-volume
              mountPath: /var/run/docker.sock
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
            limits:
              memory: 16000Mi
              cpu: 8000m
      volumes:
        - name: docker-sock-volume
          hostPath:
            # location on host
            path: /var/run/docker.sock
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: workspace-proxy
rules:
- apiGroups:
  - ""
  - extensions
  - policy
  - apps
  - batch
  - rbac.authorization.k8s.io
  - roles.rbac.authorization.k8s.io
  - apiextensions.k8s.io
  - admissionregistration.k8s.io
  - apiservices.apiregistration.k8s.io
  - apiregistration.k8s.io
  - metrics.k8s.io
  - authorization.k8s.io  
  resources: ["*"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection", "use", "bind", "escalate", "impersonate"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workspace-proxy
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: workspace-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: workspace-proxy
subjects:
  - kind: ServiceAccount
    name: workspace-proxy
    namespace: default

The .kube/config for this looks like:

apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

and

% kind --version
kind version 0.8.1

Finally, the purpose of all this: I'm using kind to run a local development Kubernetes cluster, and I have a "workspace" container that I use with Visual Studio Code SSH remote development, so I can build and work inside a container running in kind with access to all the pods and services in development.

IT'S AMAZING!

Would be great to be able to kubectl exec from the "workspace" container into the pods/containers under development to aid in debugging, etc.

Oh, and to enable docker in the "workspace" container, I mount the /var/run/docker.sock from the Mac into kind, and then mount it again in the "workspace" container:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:${reg_port}"]
EOF
      containers:
        - name: kubectl-proxy
          image: bitnami/kubectl

Ah, based on the yaml above you have a kubectl proxy sidecar, _not_ a kube-proxy sidecar.

I don't think kubectl proxy supports exec, because exec is a websocket connection rather than plain HTTP; kubectl proxy only proxies the HTTP API. That's an upstream limitation, unrelated to kind and unrelated to RBAC.

Would be great to be able to kubectl exec from the "workspace" container into the pods/containers under development to aid in debugging, etc.

you can. if you're on 0.8+ you need to make sure the container with kubectl is on the kind network, and then you'll want to use the "internal" kind config (not the one destined for the host). kind get kubeconfig --internal gives you the config to talk to the API server from within the cluster network.
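The difference between the two kubeconfigs is essentially the server endpoint: the host config points at a 127.0.0.1 port forwarded out of the node container, while the internal config targets the API server by its address on the kind Docker network. Roughly, for a default cluster named "kind" (CA data elided, exact fields may vary by kind version):

```yaml
# Sketch of what `kind get kubeconfig --internal` produces:
# the server field targets the control-plane container by name
# on the kind Docker network, instead of a localhost port-forward.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ...   # elided
    server: https://kind-control-plane:6443
  name: kind-kind
```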

Oh cool! SUPER helpful, thank you!

Sorry for the confusion on kubectl proxy vs. kube-proxy.

Is the "unable to upgrade" a "unable to upgrade HTTP to websocket" error? I assumed it was permissions related.

Really loving everything about this project (technically speaking) but this is the most amazing response time I have ever experienced on a Github issue on an open source project. Thank you again!

Also using kind get kubeconfig --internal from inside the ubuntu container inside the kind cluster to generate my ~/.kube/config works great! kubectl exec is now working like a charm from my "workspace" container inside a pod inside the kind cluster.

As a nice bonus, I can remove the kubectl proxy sidecar container.

So cool!

Is the "unable to upgrade" a "unable to upgrade HTTP to websocket" error? I assumed it was permissions related.

I think it may be, kubectl exec and kubectl logs stream over websocket IIRC, and websockets are upgraded from HTTP.
I'm pretty sure kubectl proxy only supports HTTP, but not absolutely certain.

Glad you got it working :-)

For completeness: it actually appears that these APIs are filtered out by kubectl proxy by default, and you need to run kubectl proxy with --disable-filter.

https://stackoverflow.com/questions/50041250/kubectl-exec-does-not-work-with-kubectl-proxy
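So if you did want exec to pass through the kubectl proxy sidecar from the Deployment above, its args would need the filter disabled, along the lines of this sketch (dangerous on an accessible port, per the help text below):

```yaml
# Hypothetical sidecar args letting exec/attach through kubectl proxy.
# --disable-filter turns off the request filter, including the default
# --reject-paths rule that blocks exec; see the XSRF warning in the help.
args:
  - proxy
  - "-p"
  - "8080"
  - "--disable-filter=true"
```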

kubectl proxy --help excerpt:

Options:
      --accept-hosts='^localhost$,^127\.0\.0\.1$,^\[::1\]$': Regular expression for hosts that the proxy should accept.
      --accept-paths='^.*': Regular expression for paths that the proxy should accept.
      --address='127.0.0.1': The IP address on which to serve on.
      --api-prefix='/': Prefix to serve the proxied API under.
      --disable-filter=false: If true, disable request filtering in the proxy. This is dangerous, and can leave you
vulnerable to XSRF attacks, when used with an accessible port.
      --keepalive=0s: keepalive specifies the keep-alive period for an active network connection. Set to 0 to disable
keepalive.
  -p, --port=8001: The port on which to run the proxy. Set to 0 to pick a random port.
      --reject-methods='^$': Regular expression for HTTP methods that the proxy should reject (example
--reject-methods='POST,PUT,PATCH'). 
      --reject-paths='^/api/.*/pods/.*/exec,^/api/.*/pods/.*/attach': Regular expression for paths that the proxy should
reject. Paths specified here will be rejected even accepted by --accept-paths.
  -u, --unix-socket='': Unix socket on which to run the proxy.
  -w, --www='': Also serve static files from the given directory under the specified prefix.
  -P, --www-prefix='/static/': Prefix to serve static files under, if static file directory is specified.
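Note the default --reject-paths value above: it matches any exec or attach URL, so those requests are refused by the proxy before RBAC is ever consulted, which explains the original "Forbidden" error. The match can be sanity-checked without a cluster (the pod name in the sample path is made up):

```shell
# Check a typical exec URL against kubectl proxy's default
# --reject-paths regex; a match means the proxy rejects the request.
path="/api/v1/namespaces/default/pods/mypod/exec"
if echo "$path" | grep -qE '^/api/.*/pods/.*/exec'; then
  echo "rejected by default filter"
else
  echo "allowed"
fi
# prints "rejected by default filter"
```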