Kind: Unable to mount docker socket for DooD config

Created on 23 Jan 2020 · 4 Comments · Source: kubernetes-sigs/kind

What happened:

I'm deploying Jenkins using the standard Helm chart, with the Kubernetes plugin providing build agents.
Our config uses Docker out of Docker (DooD): the build pods share a mount of the underlying host's Docker socket.
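
For illustration, a minimal sketch of what such a DooD agent pod looks like (names and labels here are hypothetical, not our actual chart values):

# Hypothetical Jenkins agent pod for DooD: the agent container talks to the
# host's Docker daemon through the mounted socket.
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  containers:
  - name: docker
    image: docker:latest
    command: ['cat']
    tty: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket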

What you expected to happen:

I'm expecting the build pod to reach the Docker daemon through the mounted socket and to use it for docker builds.

How to reproduce it (as minimally and precisely as possible):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: probe-socket
  labels:
    purpose: debug
spec:
  containers:
  - name: probe-docker
    image: docker:latest
    imagePullPolicy: IfNotPresent
    command: ['sleep', '600']
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run
EOF

After it has been created and is ready:

kubectl exec probe-socket -- docker ps

Expecting to see a listing of the host's containers, but instead seeing:

kubectl exec probe-socket -- docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
command terminated with exit code 1
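
A quick way to confirm the daemon really isn't there (a diagnostic aside; the node container name assumes the default cluster name `kind`):

# List the kind node's /var/run from the host: there is no docker.sock,
# because the node container runs its own container runtime, not a Docker daemon.
docker exec kind-control-plane ls -la /var/run/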

Anything else we need to know?:

Environment:

  • kind version: (use kind version): kind v0.7.0 go1.13.6 linux/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
docker info
Client:
 Debug Mode: false

Server:
 Containers: 25
  Running: 15
  Paused: 0
  Stopped: 10
 Images: 200
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-74-generic
 Operating System: Ubuntu 18.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.58GiB
 Name: silent
 ID: LMGF:3PJH:KS54:DGCH:QVNU:O7OL:AU46:GQU3:SKE6:PQ5F:4WEJ:E4DY
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
  • OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic


All 4 comments

Your expectation is not portable 🙃
kind nodes run a CRI runtime (containerd), not Docker.

Docker => kind node container => containerd => your pod

You can use an extraMounts entry in the kind config to pass through the underlying host's Docker socket.
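
You can see that layering directly (an illustrative check, again assuming the default node container name): the kind node image ships crictl, which talks to the node's containerd rather than any Docker daemon.

# Inside a kind node, containers are managed by containerd via the CRI:
docker exec kind-control-plane crictl ps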

@BenTheElder thanks for clarifying the misunderstanding!

For anyone else trying to do this.

(And yes, this is unsanitary and a big security risk)

This config works:


cluster.yaml:

# three node (two workers) cluster config
# note the `.sock` in `docker.sock`:
# we're mounting the actual docker socket file
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock

create the cluster with:

kind create cluster --config cluster.yaml
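
You can sanity-check the mounts before going further (my own verification step; the node names below match the default cluster name with this three-node config):

# The host's socket should now exist inside each node container:
docker exec kind-control-plane ls -l /var/run/docker.sock
docker exec kind-worker ls -l /var/run/docker.sock
docker exec kind-worker2 ls -l /var/run/docker.sock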

then create the pod with:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: probe-socket
  labels:
    purpose: debug
spec:
  containers:
  - name: probe-docker
    image: docker:latest
    imagePullPolicy: IfNotPresent
    command: ['sleep', '600']
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run
EOF

and wait for it to come up:

kubectl get pods -w

The following now shows the containers on the host running kind (including the kind nodes themselves):

kubectl exec probe-socket -- docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS                           PORTS                       NAMES
6254bf978bbf        kindest/node:v1.17.0               "/usr/local/bin/entr…"   12 minutes ago      Up 12 minutes                                                kind-worker
5135ffd8e91e        kindest/node:v1.17.0               "/usr/local/bin/entr…"   12 minutes ago      Up 12 minutes                    127.0.0.1:32769->6443/tcp   kind-control-plane
93825c4d4b2e        kindest/node:v1.17.0               "/usr/local/bin/entr…"   12 minutes ago      Up 12 minutes                                                kind-worker2
...
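
One refinement worth noting: the probe pod above mounts the node's entire /var/run over the container's own, which works but also shadows everything else the image expects under that path. A narrower variant (my sketch, not from this thread) mounts just the socket file:

# Same probe pod, but mounting only the socket file instead of all of /var/run.
# Depending on the socket's permissions, root or privileged may still be needed.
apiVersion: v1
kind: Pod
metadata:
  name: probe-socket-narrow    # hypothetical name
spec:
  containers:
  - name: probe-docker
    image: docker:latest
    command: ['sleep', '600']
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket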

FWIW kind is also a bit of a security risk, particularly if you run privileged pods on it. I've been thinking about how to correctly message that; it's not, for example, much worse than kubeadm init on your host (better, possibly?), and in Docker for Mac / Windows it's further sandboxed..., but it is likely much worse than, say, a real cloudy cluster, or a local VM cluster, or ...

Passing through the socket doesn't help on that front, but it does allow for some fun tricks like this :-)
