Minikube: macOS mount: mounted directory is empty

Created on 29 Jan 2018  ·  29 Comments  ·  Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST?
Bug report

Please provide the following details:

Environment:

Minikube version: v0.25.0

  • OS: macOS 10.13.2
  • VM Driver: hyperkit
  • ISO version: v0.25.1
  • Others: kubernetes v1.7.5

What happened:
Tried to mount a host directory into a container. The directory inside the container is empty.

What you expected to happen:
The directory inside the container should give access to the mounted directory on the host. At least, that was the behavior when I was using minikube and VirtualBox!

How to reproduce it:

$ eval $(minikube docker-env)
$ mkdir test; cd test; echo 'hello' > test.txt
$ docker run --rm -it -v $(pwd):/testmount alpine
(now inside the alpine container)
/ # cd testmount
/testmount # ls -l
total 0

Anything else we need to know:
Is this just not possible when using hyperkit? If so, what is the alternative? I noticed some options relating to NFS, but I didn't dig too deeply. I'd prefer to just use the standard Docker volume mount mechanism if possible.

Thank you!

Labels: area/mount  cause/go9p-limitation  help wanted  kind/bug  lifecycle/stale  os/macos  priority/important-longterm

All 29 comments

I worked around this issue by using scp to copy the files into the VM:

scp -ri "$(minikube ssh-key)" "$PWD" docker@$(minikube ip):/tmp
docker run --rm -it -v /tmp:/testmount alpine

With xhyve being deprecated, it would be great to get this fixed; VirtualBox is slower for local environments (e.g. minikube delete wipes the image cache and everything must be fetched again, while hyperkit caches images on the host machine).

I'm suffering this issue as well 😞

+1 same issue here

This still appears to be an issue with minikube v0.28.0, k8s v1.10.0, and macOS 10.13.5. The hyperkit driver in general seems to be pretty solid now (although kubeadm doesn't work at all; localkube must be used). However, without a working mount mechanism, minikube isn't useful for local development, which kind of defeats the purpose.

Back to VirtualBox!

Any update on when a fix for this will be released? I agree with @zhaytee that this is a big blocker for using hyperkit. We are using xhyve for now.

AFAIK the xhyve driver supports 9p or NFS mounts. Can you explain which one you're trying to use with hyperkit?

@dlorenc I'm just trying to do the standard mounts using hostPath. For xhyve and virtualbox, this is all that's needed, since my mount point is already under /Users. Below is a snippet of my deployment.yaml template:

{{ define "txechart.volumes"}}
      volumes:
      - name: txe-v
        hostPath:
          path: /Users/ek/myproject/code
{{- end }}

{{ define "txechart.volumeMounts"}}
          volumeMounts:
          - mountPath: /code
            name: txe-v
{{- end}}
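For context (my understanding, not official minikube guidance): a hostPath volume is resolved on the node's filesystem, i.e. inside the minikube VM, not on the Mac, so the host directory has to be shared into the VM first for anything to show up. A minimal standalone pod using the same volume might look like this (the pod and container names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: txe-test          # hypothetical pod name, for illustration only
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: txe-v
      mountPath: /code
  volumes:
  - name: txe-v
    hostPath:
      # hostPath is resolved inside the minikube VM, not on the macOS host,
      # so /Users must already be shared into the VM for this to see files.
      path: /Users/ek/myproject/code
```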

Also seeing this issue in minikube v0.28.1

I can replicate this issue in the latest release:

$ minikube version
minikube version: v0.28.2

This definitely seems like a poor user experience. We need to do a deep dive to make mounts more reliable.

As a workaround, I suspect using VirtualBox as a vm-driver may work, as it has a more robust mounting mechanism.

Still an issue on v0.30.0

just give up on windows

Still on version: v0.32.0

VirtualBox works fine, but performance is poor.

Still an issue on v0.33.1

It's a poor workaround to be sure, but leaving this command running in a terminal

minikube mount /Users:/Users

seems to make the above test case work as expected:

docker run --rm -it -v $(pwd):/testmount alpine
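For anyone else landing here, the whole workaround fits in two terminals (a sketch, assuming the hyperkit driver and the stock alpine image):

```shell
# Terminal 1: share the host's /Users tree into the VM over 9p.
# This process must stay alive for the mount to remain accessible.
minikube mount /Users:/Users

# Terminal 2: point docker at the VM's daemon, then bind-mount as usual.
eval $(minikube docker-env)
docker run --rm -it -v "$(pwd)":/testmount alpine ls -l /testmount
```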

Still an issue on v1.0.0

I'm curious, for those of you who are seeing this - is this occurring with a directory that contains more than 600 files?

If so, it may be related to #1753
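For anyone unsure whether their directory crosses that threshold, a quick count with standard tools (nothing minikube-specific):

```shell
# Count regular files under the current directory, recursively.
find . -type f | wc -l
```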

The Mac /Users folder is not getting mounted. This still happens with minikube 1.2.0 on macOS 10.14.5:

> docker-machine-driver-hyperkit version
This is a Docker Machine plugin binary.
Plugin binaries are not intended to be invoked directly.
Please use this plugin through the main 'docker-machine' binary.
(API version: 1)
> hyperkit -v
hyperkit: v0.20180403-41-g64bbfb

Homepage: https://github.com/docker/hyperkit
License: BSD
> minikube version          
minikube version: v1.2.0
> minikube start  
😄  minikube v1.2.0 on darwin (amd64)
👍  minikube will upgrade the local cluster from Kubernetes 1.14.0 to 1.15.0
💿  Downloading Minikube ISO ...
 129.33 MB / 129.33 MB [============================================] 100.00% 0s

⚠️  Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the hyperkit driver.
⚠️  To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️  Alternatively, you may delete the existing VM using `minikube delete -p minikube`

🔄  Restarting existing hyperkit VM for "minikube" ...
⌛  Waiting for SSH access ...
🐳  Configuring environment for Kubernetes v1.15.0 on Docker 18.06.2-ce
💾  Downloading kubelet v1.15.0
💾  Downloading kubeadm v1.15.0
🚜  Pulling images ...
🔄  Relaunching Kubernetes v1.15.0 using kubeadm ... 
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
> minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ls
$ ls /
bin  data  dev  etc  home  init  lib  lib64  linuxrc  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
$ exit
logout

This still happens on macOS Catalina (10.15.1 (19B88)) with minikube:

minikube version: v1.5.2
commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad

But at least the workaround by @mkempster works! (minikube mount /Users:/Users)

I'm having the same problem using RHEL + minikube version 1.6.2.

I tried to mount:

docker run -it --rm -v /home/myuser/Documents:/src alpine /bin/sh

Then, inside the container, I tried to list the /src folder's contents, but it is empty.

SELinux is disabled on the machine, and my user is in the docker, libvirt, kvm, and wheel groups.

Interestingly, if I try to mount /var/log (as a user), it does get mounted (though not all files are visible).

So I tried switching to the virtualbox driver, since I read that KVM doesn't support host file sharing, but I still have the exact same problem. When I use docker outside of the minikube environment, the mounts work fine.
I also tried the none driver and apparently still have the same issue.

This didn't work for me on minikube v1.6.2. I upgraded minikube yesterday to v1.8.1 and it still doesn't work.

Mounted dirs are empty (whether given explicitly via the mount string or using the default home dir).

minikube start --driver=hyperkit --mount --mount-string='/data:/data'
😄  minikube v1.8.1 on Darwin 10.14.6
✨  Using the hyperkit driver based on user configuration
💿  Downloading VM boot image ...
🔥  Creating hyperkit VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...

I checked with virtualbox driver and same issue. Mounted dir is empty.

I only got it working with --driver=docker. However, the docker driver has other issues, such as minikube dashboard not working.

Being able to mount a local directory is key for development. The alternative I have is to build a new Docker image every time I change my files (far from perfect).

+1 exactly the same issue with minikube 1.9.0

The same issue is seen with minikube 1.7.3 (KVM) on CentOS 8.
I have to stick with podman for now. I would very much like to use minikube if this is fixed.

I'm having the same issue; no directory is being created at all. I'm using minikube v1.8.2.

minikube start --mount --mount-string "~/test:/test"                                                            
😄  minikube v1.8.2 on Darwin 10.14.6
✨  Using the hyperkit driver based on existing profile
⌛  Reconfiguring existing host ...
🏃  Using the running hyperkit "minikube" VM ...
🐳  Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀  Launching Kubernetes ...
📁  Creating mount ~/test:/test ...
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

❗  /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.17.3.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version

minikube ssh                                                                                                

$ ls
$ cd ..
$ ls
docker
$ cd ..
$ ls
bin   dev  home  lib    libexec  media  opt   root  sbin  sys  usr
data  etc  init  lib64  linuxrc  mnt    proc  run   srv   tmp  var

Using

  • minikube: version v1.10.1 (commit: 63ab801ac27e5742ae442ce36dff7877dcccb278)
  • OS: Archlinux
  • Virtualbox: 6.1.8r137981
  • Docker: version 19.03.9-ce, build 9d988398e7

Same issue here, but only when using --driver=virtualbox.
Mount fails with start command, and minikube mount also fails.

16:28:00 baron_l:~$ ls ~/projects/pro/portainer/dist
extensions.json  kompose  kubectl  portainer  public  templates.json
16:28:31 baron_l:~$ minikube start --driver=virtualbox --mount --mount-string ~/projects/pro/portainer/dist:/portainer/app --kubernetes-version=v1.18.2
😄  minikube v1.10.1 on Arch
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
📁  Creating mount /home/baron_l/projects/pro/portainer/dist:/portainer/app ...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
16:30:01 baron_l:~$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ls -la /portainer/app
total 0
drwxr-xr-x 2 root root 40 Jun 10 14:29 .
drwxr-xr-x 3 root root 60 Jun 10 14:29 ..
$ exit
logout
16:32:40 baron_l:~$ minikube mount ~/projects/pro/portainer/dist:/portainer/app
📁  Mounting host path /home/baron_l/projects/pro/portainer/dist into VM as /portainer/app ...
    ▪ Mount type:   <no value>
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 10.0.9.1:36713
🚀  Userspace file server: ufs starting
🛑  Userspace file server is shutdown


💣  mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36713,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36713,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app": Process exited with status 32
stdout:

stderr:
mount: /portainer/app: mount(2) system call failed: Connection timed out.


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Using --driver=docker, it works.

16:44:52 baron_l:~$ minikube start --driver=docker --mount --mount-string ~/projects/pro/portainer/dist:/portainer/app --kubernetes-version=v1.18.2
😄  minikube v1.10.1 on Arch
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
📁  Creating mount /home/baron_l/projects/pro/portainer/dist:/portainer/app ...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
16:45:35 baron_l:~$ minikube ssh
docker@minikube:~$ ls -l /portainer/app
total 133129
-rw-r--r-- 1 docker docker     2723 Jun 10 14:09 extensions.json
-rwxr-xr-x 1 docker docker 53985292 Feb 25 17:15 kompose
-rwxr-xr-x 1 docker docker 44023808 Mar 25 19:20 kubectl
-rwxr-xr-x 1 docker docker 38281216 Jun 10 14:09 portainer
drwxr-xr-x 1 docker docker     4096 Jun 10 14:10 public
-rw-r--r-- 1 docker docker    25716 Jun 10 14:09 templates.json
docker@minikube:~$ exit
logout
16:46:44 baron_l:~$ minikube mount ~:/host_local
📁  Mounting host path /home/baron_l into VM as /host_local ...
    ▪ Mount type:   <no value>
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 172.17.0.1:43349
🚀  Userspace file server: ufs starting
✅  Successfully mounted /home/baron_l to /host_local

📌  NOTE: This process must stay alive for the mount to be accessible ...

I also tried to tweak mount options using my own UID and GID, without success

16:34:08 baron_l:~$ minikube start --driver=virtualbox --kubernetes-version=v1.18.2
😄  minikube v1.10.1 on Arch
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
16:35:58 baron_l:~$ minikube mount ~/projects/pro/portainer/dist:/portainer/app
📁  Mounting host path /home/baron_l/projects/pro/portainer/dist into VM as /portainer/app ...
    ▪ Mount type:   <no value>
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 10.0.9.1:40687
🚀  Userspace file server: ufs starting
🛑  Userspace file server is shutdown

💣  mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40687,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40687,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app": Process exited with status 32
stdout:

stderr:
mount: /portainer/app: mount(2) system call failed: Connection timed out.


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
16:36:57 baron_l:~$ minikube mount --gid=985 --uid=1000 ~/projects/pro/portainer/dist:/portainer/app
📁  Mounting host path /home/baron_l/projects/pro/portainer/dist into VM as /portainer/app ...
    ▪ Mount type:   <no value>
    ▪ User ID:      1000
    ▪ Group ID:     985
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 10.0.9.1:33299
🚀  Userspace file server: ufs starting
🛑  Userspace file server is shutdown

💣  mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=985,dfltuid=1000,msize=262144,port=33299,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app" : /bin/bash -c "sudo mount -t 9p -o dfltgid=985,dfltuid=1000,msize=262144,port=33299,trans=tcp,version=9p2000.L 10.0.9.1 /portainer/app": Process exited with status 32
stdout:

stderr:
mount: /portainer/app: mount(2) system call failed: Connection timed out.


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Is this issue fixed?
I'm still having the issue on v1.12.1.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@Divya063 could it be that, because the mount string is enclosed in quotes, you don't get shell expansion of the tilde?
I think that confuses minikube about the actual home directory to mount, and it ends up resolving to /home/docker inside the virtual machine instead.

You can try minikube start --mount --mount-string "$HOME/test:/test" (or "$(pwd)/test:/test") and let us know if that fixes it.
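A quick illustration of the quoting behavior in question (plain POSIX shell, nothing minikube-specific):

```shell
# A tilde inside quotes is NOT expanded; the literal string is passed through.
echo "~/test"        # prints: ~/test

# Unquoted, the shell expands it to the current user's home directory.
echo ~/test          # prints e.g. /Users/you/test

# $HOME and $(pwd) expand even inside double quotes, so these are safe:
echo "$HOME/test"
echo "$(pwd)/test"
```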
