Vagrant: Add feature to mount folders post-provisioning?

Created on 17 May 2012  ·  77 Comments  ·  Source: hashicorp/vagrant

I don't think this feature or config setting exists, having perused the docs and support channels. My Vagrantfile mounts shared folders and assigns them group and user ownership. However, in the case of a new box, the user and group I assign as the mounted folder's owner do not exist until Chef provisioning has run on the box. I tried to control the order of provisioning and mounting, but the mounting is always attempted first, which obviously breaks if provisioning has not yet occurred.

For example:

config.vm.define :dpils_slave_1, :primary => false do |cfg|
  cfg.vm.forward_port 8080, 8081
  #cfg.vm.network :hostonly, "1.1.1.11"
  cfg.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["chef/cookbooks", "chef/site-cookbooks", "chef/site-cookbooks-tk"]
    chef.roles_path = ["chef/roles-tk"]
    chef.add_role "jboss"
    # chef.log_level = :debug
  end

  cfg.vm.share_folder 'crx_node_2', '/foo/bar/dpils_repository/dpils_whq_02/crx', 'foo/bar/crx_node_2', :owner => "jboss", :group => "jboss", :create => true
end

Which gives this error on a clean vagrant load:

[dpils_slave_1] -- v-csr-4: /tmp/vagrant-chef-1/chef-solo-4/roles
[dpils_slave_1] -- v-csc-1: /tmp/vagrant-chef-1/chef-solo-1/cookbooks
[dpils_slave_1] -- v-csc-2: /tmp/vagrant-chef-1/chef-solo-2/cookbooks
[dpils_slave_1] -- v-csc-3: /tmp/vagrant-chef-1/chef-solo-3/cookbooks
[dpils_slave_1] -- crx_node_2: /nike/vault/dpils_repository/dpils_whq_02/crx
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -t vboxsf -o uid=`id -u jboss`,gid=`id -g jboss` crx_node_2 /nike/vault/dpils_repository/dpils_whq_02/crx

Most helpful comment

Here's the workaround I landed on:

config.vm.synced_folder "/opt/somedir", "/opt/somedir", mount_options: ["uid=1010,gid=1010"]

To do this you _must create_ the desired user and group _with the UID specified_ (useradd -u 1010 cooluser and groupadd -g 1010 cooluser in this case). The folder will be synced, and when the user is actually created, its UID will match and it will have permission on the synced dir.

All 77 comments

I've had the same problem, here is a workaround that worked for me:

config.vm.share_folder "ganja", "/test", "/test"
config.vm.provision :shell do |shell|
  shell.inline = "sudo mount -t vboxsf -o uid=`id -u apache`,gid=`id -g apache` test /test"
end

If you omit the guest path, it won't auto-mount. Also, the @dominis workaround is quite good. I think this is rare enough that this is satisfactory for now since this would require significant change. Sorry!

+1 for this feature, if you ever reconsider. My use case is to mount a host folder under the mysql user, so I can keep the mysql data folder outside the VM.

You can also mount a folder as a user who doesn't exist yet, if you use their uid.

Thanks! The uid method is nice and clean:

  MYSQL_UID = 106
  MYSQL_GID = 111
  config.vm.synced_folder "local/path", "/mysql_data", owner: MYSQL_UID, group: MYSQL_GID

Then presumably I'd want to manually create the mysql user before any other provisioning, so the ID stays consistent regardless of changes to the build order.
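
For illustration, a minimal sketch of that ordering, reusing the IDs from the snippet above (the provisioner is an assumption, not anything Vagrant mandates; the mount itself already works because it uses the numeric IDs):

  # define this provisioner first, so later provisioners see a consistent mysql account
  config.vm.provision :shell, inline: <<-SHELL
    getent group mysql  >/dev/null || groupadd -g #{MYSQL_GID} mysql
    getent passwd mysql >/dev/null || useradd -u #{MYSQL_UID} -g #{MYSQL_GID} -r mysql
  SHELL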

+1, I'm going to have to disagree with you, @mitchellh, I run into this issue all the time.

This becomes a much bigger issue when you want to use NFS. NFS doesn't support auto-creating directories, so for example if I wanted to sync the directory www to /var/www/vhosts/site/httpdocs I'd first need to provision, then enable the nfs sync by manually editing the Vagrantfile, then reload the vagrant box. I can't think of another way to accomplish this (short of modifying the base box), am I missing something?

Do hope this gets reconsidered, I can definitely see the option :mount_after_provision => true being useful.

:+1: I like @ben-rosio's idea for :mount_after_provision => true

In the meantime, I had been requiring a vagrant reload, and signaling that user creation had completed during provisioning by touching .vagrantprovision/user_created, and not mounting the directory until that file existed.

I will try the @dominis workaround, so that only one vagrant up is required.
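
A minimal sketch of that flag-file pattern, with hypothetical paths and user names (the guest touches a file under the default /vagrant share, which the host sees on the next run):

  # Vagrantfile
  if File.exist?(".vagrantprovision/user_created")
    # second pass: the owner exists now, so this mount succeeds
    config.vm.synced_folder "data", "/srv/data", owner: "appuser", group: "appuser"
  end
  config.vm.provision :shell, inline: <<-SHELL
    id appuser >/dev/null 2>&1 || useradd appuser
    mkdir -p /vagrant/.vagrantprovision && touch /vagrant/.vagrantprovision/user_created
  SHELL

A vagrant up followed by a vagrant reload then picks up the mount on the second pass.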

+1 for @ben-rosio's :mount_after_provision => true suggestion. If I can set 'owner' and 'group' on a synced folder on a guest machine, then I need to be given the chance to create the owner and group (and mount point, as well!). Otherwise it's necessary to use the non-existent UID/GID workaround that @drpebcak suggests (effective, yet kludgy).

One of the big uses for synced_folder is to be able to operate on code on the host that is part of a directory structure on the guest. In the case of Puppet Enterprise this causes the installer to do weird things when it finds a directory structure already present.

I hit this issue as well. Developing a server applet for Tomcat, I would like to mount the webapps folder so I can do partial compiles from my dev environment, but I can't mount a webapps folder that doesn't exist. The docs had no answers for me. This is a needed feature.

I want to share my host user's ~/.ssh files as the VM vagrant user's ~/.ssh files. The simplest way to do this is to sync the host user's ~/.ssh directory. However, Vagrant needs ~/.ssh to create and manage an authorized_keys file.

A provisioning step I have is to set sshd to use another location for user authorized keys (e.g. /etc/ssh/authorized_keys/%u) and move the vagrant user's authorized_keys file there (e.g. /etc/ssh/authorized_keys/vagrant). However, the authorized_keys file is destroyed by the folder sync before provisioning has a chance to move it.

Hey @delitescere, just a thought about how you might sidestep the issue in your specific use case. I don't know if you've already looked at enabling SSH agent forwarding on the guest machine.
You would need an ssh agent running locally, like PuTTY's Pageant or macOS's Keychain (Leopard and above), and to set config.ssh.forward_agent to true in your Vagrantfile.

After that, all your local ssh keys will be available on the vagrant box! You can even put the ssh.forward_agent config in your user Vagrantfile (located in ~/.vagrant.d/Vagrantfile) so it applies to all your vagrant instances.
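
For reference, agent forwarding is a single, real setting; putting it in ~/.vagrant.d/Vagrantfile applies it to every project, as described above:

  Vagrant.configure("2") do |config|
    # forward the host's ssh-agent into the guest
    config.ssh.forward_agent = true
  end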

@mitchellh Is there any chance you might reconsider this? There's been quite a number of +1s

Even a direct implementation of this feature isn't a requirement.

The +1's here indicate the underlying problem: the owner of a folder might be someone who is only created after the provisioning step has happened.

If you compare and contrast with a Dockerfile, you see why this isn't a problem there: a Dockerfile can specify any order it pleases. Maybe that's the real bug here. Users need more control than the Vagrantfile is offering them.

Another interesting solution would be to have synced folders be provisioning steps; this way they can happen in any order necessary.

@ben-rosio well, that's the hack, right? Adding a post-provisioning call in a shell script to mount external folders, and providing an unmounted shared folder at the start so that VirtualBox knows what to call it.

The problem is it's very non-obvious and strange (and personally I couldn't even get it to work; I ended up just taking advantage of the www-data user).

@Ghoughpteighbteau -> nail_on_head.gif. Expecting a Vagrantfile to execute sequentially seems like a far more intuitive UX. I've not run into this yet, but I'm glad I checked this out, because I'm sure at some point when I start playing with Chef I may.

Although, I pre-provision my boxes with Packer, so I feel like this issue may largely be avoided by that. It seems like any Vagrant provisioning I will ever need will be for experimental packages and will not have any mounting issues. I'm sure I'm missing some edge cases...

Nice to have, but definitely not a show stopper IMHO.

I would consider this a bug/blocker from the standalone POV of Vagrant and folder syncing. The way I see it, 100% of users created by package installation - such as my jenkins user - are unable to work with Vagrant's synced_folder 100% of the time.

But whether it's called a bug or an enhancement, I'd love to see this issue re-opened.

Mounting specific synced folders (not all, just certain ones) after provisioning is what I need. How can I work around this?

@AngeIII we've been using www-data, as that user/group often pre-exists in many distributions and has reasonable security settings.

In some cases this is exactly what we need, in other cases it's a dirty hack.

Mount a third-party folder with www-data as the owner, and give your application user access via the www-data group.

If that's a blocker for you, well then, this needs to be fixed :\
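
A minimal sketch of that www-data approach, with hypothetical paths and an appuser account assumed to be created by an earlier provisioner:

  # www-data already exists in most base boxes, so this mounts cleanly on first up
  config.vm.synced_folder "app", "/var/www/app", owner: "www-data", group: "www-data"
  # during provisioning, give the application user access via the www-data group
  config.vm.provision :shell, inline: "usermod -aG www-data appuser"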

+1 for this feature too. Currently I cannot do something like the below, because the apache user and group do not exist before provisioning is run. Although as a workaround I can use the uid and gid, because they are consistent in CentOS.

config.vm.synced_folder "logs", "/vagrant/logs", owner: "apache", group: "apache"

My workaround for this:

config.vm.provider "virtualbox" do |v|
      v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
  end

And in provision.sh

rm -rf /srv/www/web-dev/content/plugins
rm -rf /srv/www/web-master/content/plugins
ln -ds /srv/www/plugins /srv/www/web-dev/content/plugins
ln -ds /srv/www/plugins /srv/www/web-master/content/plugins

This works only if vagrant up is run with admin rights (on Windows).

My idea is this:
www is the shared folder.
My provision.sh runs and checks whether
www/web-dev exists: if yes => git pull, else => git clone.
Then it mounts my second shared folder, www/plugins, at web-dev/plugins.

The problem is that with the default config.vm.synced_folder the sync runs before provisioning, so www/web-dev/plugins already exists and my check in provision.sh fails (leaving me an empty project with a shared folder inside).
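
A minimal sketch of such a provision.sh check, with hypothetical paths and repository URL:

  #!/bin/sh
  if [ -d /srv/www/web-dev/.git ]; then
    git -C /srv/www/web-dev pull
  else
    git clone https://example.com/web-dev.git /srv/www/web-dev
  fi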

My use case is to have user/app-specific data directories on the host available in the guest. The user accounts only exist after provisioning (Chef). The hard-wired non-existent uid/gid hack doesn't work under Vagrant 1.7.2 and vagrant-vmware-workstation 3.2.6 - the mount step tries to do an id against the provided uid/gid, which fails.

There's over three years of comments asking for functionality to address this not-uncommon use case.

+1 for @ben-rosio 's :mount_after_provision => true suggestion.

The comments on this issue prove that it is reasonably common to need to mount shared folders after provisioning, so that the user that is supposed to own them can be created first.

Furthermore, the workaround to use the UID/GID doesn't work with the VMWare provider, as @davidski points out.

We are using the VMWare provider and Ansible for provisioning, and the only workaround I could find for us was to configure the synced folder in the usual way:

config.vm.synced_folder "rollerball/", "/rollerball" # (in the Vagrantfile)

That way, it just gets mounted as owned by the vagrant user, and then we add steps in our provisioning process to unmount the shared volume and mount it again with the correct ownership:

sudo mount -t vmhgfs -o uid=`id -u rollerball`,gid=`id -g rollerball` .host:/-rollerball /rollerball

Not the end of the world, but not ideal either.
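
For completeness, a sketch of wiring both steps into a shell provisioner (assuming it is declared after the Ansible provisioner that creates the rollerball user; the shell provisioner runs as root, so no sudo is needed):

  config.vm.provision :shell, inline: <<-SHELL
    umount /rollerball
    mount -t vmhgfs -o uid=`id -u rollerball`,gid=`id -g rollerball` .host:/-rollerball /rollerball
  SHELL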

I just want to take a moment to point out two things:

1) all of us who would like this feature are empowered to contribute a solution,
2) Mitchell does a great job and vagrant saves us all a load of time, so the ways to overcome this obstacle are kludgey but joyful.

@delitescere

true, but the problem is that if you want to fix this issue, you need a slight rearchitecture of how the Vagrantfile is run. It's either that, or a really stupid and kludgey hack where you include directives for post-mounting and pre-mounting.

You see the problem here? Implementing this suggestion either makes the API worse, or breaks compatibility.

It should be seriously considered, and frankly, us users know that we can't rightfully make a change like that. That's why we need repository contributors to look at this issue and open a dialogue with us. Only when we have a plan on how this should be implemented can we go and tell people to implement it, right?

@Ghoughpteighbteau

You had me at "true", you had me at "true".

I don't believe it requires a re-architecture, but it does mean exposing the "provisioned" state more broadly than it is currently. As far as the Vagrantfile syntax, there are indeed at least two approaches:

1) Limit the "post-provisioning" behaviour to folder syncing
2) Provide a block for arbitrary post-provisioning actions

Perhaps 1) directly addresses this issue, whereas 2) gets Vagrant into puppet/chef territory.

For the moment I am using a manual approach: placing "mount.flag" in the vagrant root and checking whether it exists in the Vagrantfile to enable/disable mounting. A post-provision flag or "mount_after_provision" would solve this.

My workplace uses Vagrant with VMware Fusion, so I can't even use the workaround specified above :(

3 days after I first touched Vagrant, I hit this when provisioning test boxes which need to use a shared folder with millions of files and have it owned by their 'apache' group.

In the end the only option I had left was to create a base image box with all the users already created.

Just use the numeric uid/gid for mounting; they shouldn't change that often. Works nicely for me.

As discussed at https://github.com/mitchellh/vagrant/issues/936#issuecomment-104826608, uid/gid does not work for all combinations. vagrant-vmware tries to do an id on the uid and gid, which don't exist.

@SaimonL We too created a base box that is just vanilla Ubuntu plus one user account. Super overkill, but it was the only reasonable workaround that addressed our needs. Given this issue has been open for more than 3 years, an afternoon with Packer to script the creation of base box(es) might be the easiest way to work around it.

I am using a "local.yml" file for paths and the UID+GID of the vagrant process during ansible provisioning. After the initial run i have a script (mount.sh) that creates a 'mount.flag' file to indicate that mounting is now possible and updates the /etc/exports for nfs shares + reloads the vagrant box afterwards. See https://gist.github.com/novalis111/e951a3b7d499ccdcbbb1

@mitchellh, maybe re-open this ticket as not resolved? Lots of people are asking for this feature. Not all workarounds work, and it should be clearer how to run actions before/after provisioning, etc.

Indeed this one should be reopened. Although I was able to hardcode the uid and gid, it's extremely hackish and not convenient to do during provisioning. Mount-after-provision functionality please!

@mitchellh What's the state here? Will this ticket be reopened or not? Many people are asking for this feature.

I got the best alternative solution:
use Chef Solo or Puppet Apply to create the user, do an NFS mount, and set up autofs so it auto-mounts on boot.
The downside is you have to set up an NFS server, but then again an NFS server works for VMware, VirtualBox, and KVM (vagrant-libvirt) alike and can be used by other services.

@SaimonL I am glad that's working for you, but that breaks the portability of the Vagrantfile. Unless you now add another machine to it that will act as the NFS server. I don't think that would be a great alternative for most folks.

I just realized that Vagrant is all about having everything in one place, simple and easy to set up and get started with. An NFS server takes that away and adds dependencies. So I vote back to ":mount_after_provision => true" :)

@notpeter Packer has been our work around for this too.

I +1 this. I've wasted an enormous amount of time looking for a suitable workaround. One of the main reasons to create a virtual box with Vagrant is so that we can develop in an environment that duplicates production. If the directories where our code resides can't have the same owners and permissions as in production, we're not really duplicating that environment, and we're missing one of the main advantages of using a VM.

Here's the workaround I landed on:

config.vm.synced_folder "/opt/somedir", "/opt/somedir", mount_options: ["uid=1010,gid=1010"]

To do this you _must create_ the desired user and group _with the UID specified_ (useradd -u 1010 cooluser and groupadd -g 1010 cooluser in this case). The folder will be synced, and when the user is actually created, its UID will match and it will have permission on the synced dir.
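
In provisioning terms, the matching account creation is just the pair of commands named above:

  groupadd -g 1010 cooluser
  useradd -u 1010 -g 1010 cooluser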

Adding myself to the list that would like this feature. I am using vagrant-bindfs and need to set the group/owner of the synced folder but need for this to happen after provisioning, which creates the user/group.

Edit: Well vagrant-bindfs already provides this feature, as documented (which I missed):

# Bind a folder after provisioning
config.bindfs.bind_folder "/vagrant-after-provision", "another/guest/mount/point", after: :provision

@mitchellh Could HashiCorp just fix the Vagrant providers so that numeric owner and group IDs in the shared-folder config work right (i.e. detect a numeric ID and leave out the id -u/-g bit)? The recent updates to the VirtualBox provider in #7720 are a good template for properly handling all the cases.

Anything further with this? I have a slightly different problem that results in the same thing. I want to use the debian/jessie64 box, but it doesn't come with the VirtualBox Guest Additions by default, and thus can't mount vboxsf shares. As such, I have to install the Guest Additions before the mount on the first up.

@tareko The same problem here. Any news?

Like @flaugher I've wasted a lot of time trying to work around this, so I would like to see it supported if possible, as it's the last step to getting our VMs into working order from a single vagrant up without having to run more commands post-build.

@tareko, @nick4fake: For that particular problem, you may find the vagrant-vbguest plugin helpful.

I had to work around this by mounting the folder to a temporary location and then linking to it once the necessary users were created during provisioning. I did have to use @jcushman's workaround for mounting, though.

It would be great if this were supported.

+1 to this issue.

Currently Fedora 25's rpcbind package no longer starts on boot, and if we could provision before synced_folder runs, I could fix it with two shell commands. Instead I have to either wait for the package to be fixed, roll my own Vagrant box image with the fix, or let the NFS mount time out, fix the issue, and run the provisioning manually.

It is pretty clear based on the feedback in this issue that there are a handful of use cases for doing a provision before synced_folder.

I was wondering if it would be possible for me to pre-create my users. As far as I know my installers don't care whether the mysql and apache users already exist. So if I could get Vagrant to pre-create them, the way it stuffs in the VirtualBox tools before the mount, this would be solved for me. This also creates the best-case scenario where the install dumps the default content into the directories, rather than having something dropped on top of it.

So this is not ideal but sort of handles it if I do this with a wrapper.

https://github.com/oriceon/vagrant-virtual-machine-for-web-development/wiki/Repack-box-from-existing-one

But it forces me to hide the vagrant work in a wrapper script for the developer.

I have a solution to running something post-mounts that's been working for me. I'm using the "always" provisioning feature (I'm not sure when that was added):

Vagrant.configure('2') do |config|
    ...
    config.vm.provision :shell, path: './my_real_provisioning_script.sh'
    config.vm.provision :shell, run: 'always', path: './my_after_mounts_every_boot_script.sh'
    ...
end

I hope this helps someone out.

Personally I was using this trick to detect first run in the Vagrantfile:

if File.exist?(".vagrant/machines/YOUR_BOX_ID/virtualbox/action_provision")
  # Already provisioned, it's now safe to use "config.vm.synced_folder" statements
  # and additional scripts that require the users exist
else
  # Not yet provisioned, only run the script that creates the required users
end

The one downside is that you have to do this once:

vagrant up
vagrant reload

Also, if you want the additional scripts run only once (as provision scripts are meant to), use vagrant reload --provision instead of just vagrant reload. Otherwise, the additional scripts need to use the run: 'always' option mentioned in the previous comment.

This is rubbish, please fix.

Vagrant attempted to execute the capability 'mount_nfs_folder'
on the detect guest OS 'windows', but the guest doesn't
support that capability. This capability is required for your
configuration of Vagrant. Please either reconfigure Vagrant to
avoid this capability or fix the issue by creating the capability.

Nope, the guest was Linux.

I installed Windows as the guest OS; I would like to use NFS to connect to the Windows guest.

Oh, my bad, I thought you were replying to me. This is really a totally different feature than my request.

@xophere
boot error! Sorry, my English is not good!
==> default: Forwarding ports...
default: 5985 (guest) => 55985 (host) (adapter 1)
default: 5986 (guest) => 55986 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: WinRM address: 127.0.0.1:55985
default: WinRM username: IEUser
default: WinRM execution_time_limit: PT2H
default: WinRM transport: plaintext
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
==> default: Mounting NFS shared folders...
Vagrant attempted to execute the capability 'mount_nfs_folder'
on the detect guest OS 'windows', but the guest doesn't
support that capability. This capability is required for your
configuration of Vagrant. Please either reconfigure Vagrant to
avoid this capability or fix the issue by creating the capability.

You have to load special software for Windows to talk NFS, and no one really does this anymore. You would be better off using Samba for Windows. Why inject NFS? This is the kind of thing Vagrant isn't good at and probably shouldn't be.

@xophere I use macOS, and I want to work in Windows 7 installed on the Mac. The default way of accessing files is very slow, so I would like to use NFS.

I would try an SMB share from the Mac. But again, if that is what you want, you should open a new request.

@xophere Thank you. Is there a way to support NFS here? I have tried SMB and it also reported errors.

@xophere SMB error:
➜ modernie-winrm git:(master) ✗ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
SMB shared folders are only available when Vagrant is running
on Windows. The guest machine can be running non-Windows. Please use
another synced folder type.

_START A NEW REQUEST!_ PLEASE.

@xophere Thanks for the message. It's very late here and I have work tomorrow, so I need to rest. Thanks, cheers!

https://github.com/gael-ian/vagrant-bindfs helps mount after provision

Currently Fedora does not support vboxsf out of the box. So if one wants to use synced_folder, it's necessary to run a partial provision first which installs akmod-VirtualBox from RPM Fusion.

For now the only solution is to comment out stuff in the Vagrantfile, maybe create a snapshot after the partial provision, then modify the Vagrantfile and run vagrant provision to have it do the rest of the provisioning. That's not a very unattended workflow.

Or write a Vagrant plugin, I suppose...

It can be argued that synced folders are probably not needed very often before provisioning (except by plugins), so allowing a synced folder to be created as a provisioning step would make a lot of sense to me.

Ok,

I'm going to find a workaround for this. So far I have found some tools that can help (see the sketch after this list):

  • vagrant-reload plugin: allows reloading between provisioning steps (although there seem to be a lot of issues with this plugin); as an alternative, maybe run halt in the guest and do vagrant up && vagrant provision
  • vagrant triggers: run code before or after vagrant commands
  • Ruby code in the Vagrantfile can detect environment variables, run shell commands with the vagrant-host-shell plugin, test for the existence of files on the host, etc.
  • detect whether a machine has been provisioned as described here

Idea:

  • have the synced folder in conditional code
  • after vagrant up, set an indicator on the host (touch a file or set an env variable)
  • after the first provisioning step, reload
  • now detect that the first provision has been done
  • before vagrant destroy, remove the indicator file
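
A minimal sketch of that plan using Vagrant's built-in triggers (Vagrant >= 2.1); the flag-file name, paths, user name, and uid are all hypothetical, and the touch/rm commands assume a unix host:

  FLAG = ".first_provision_done"

  Vagrant.configure("2") do |config|
    # step 1: provisioning that creates the future owner of the share
    config.vm.provision :shell, inline: "id -u app >/dev/null 2>&1 || useradd -u 1010 app"

    # the synced folder is only configured once the indicator exists
    if File.exist?(FLAG)
      config.vm.synced_folder "data", "/srv/data", owner: "app", group: "app"
    end

    # set the indicator on the host after `vagrant up` ...
    config.trigger.after :up do |t|
      t.run = { inline: "touch #{FLAG}" }
    end

    # ... and clear it before `vagrant destroy`
    config.trigger.before :destroy do |t|
      t.run = { inline: "rm -f #{FLAG}" }
    end
  end

A reload (or a second up) after the first provisioning pass is still needed so the conditional is re-evaluated.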

If you omit the guest path, it won't auto-mount. Also, the @dominis workaround is quite good. I think this is rare enough that this is satisfactory for now since this would require significant change. Sorry!

@mitchellh
Does the solution @dominis provides cover VMware? I see vboxsf in there, and it got me wondering. What would this "solution" look like for VMware?

config.vm.share_folder "ganja", "/test", "/test"
config.vm.provision :shell do |shell|
  shell.inline = "sudo mount -t vboxsf -o uid=`id -u apache`,gid=`id -g apache` test /test"
end

I found the UID/GID approach works, if anyone is interested in that solution.

Just make sure you create your users in your provisioning with explicit uid and gid values. I use Ansible, so for example this is what I have for my wildfly user and group:

- name: Create wildfly group
  group:
    name: wildfly
    gid: 1001
    state: present

- name: Create wildfly user
  user:
    name: wildfly
    uid: 1001
    comment: Created by ansible
    group: wildfly
    home: /home/wildfly

And then in my Vagrantfile:

WILDFLY_UID = 1001
WILDFLY_GID = 1001
api.vm.synced_folder "C:/work/war", "/opt/api/deployments", type: "rsync", rsync__auto: true, create: true, owner: WILDFLY_UID, group: WILDFLY_GID

If you are using shell provisioning or just running bash, it would be this (the group must exist before useradd -g can reference it):
groupadd -g 1001 wildfly && useradd -u 1001 -g 1001 wildfly

7.5 years later, I'm wondering if there is a new/better "solution" to this problem than those discussed previously.

I've read every comment and my use-case is most similar to the scenario @jdoss describes in https://github.com/hashicorp/vagrant/issues/936#issuecomment-276864313 and @najamelan describes in https://github.com/hashicorp/vagrant/issues/936#issuecomment-454798627 .

In short, the problem I face is not that the user as whom the share(s) should be mounted does not yet exist; rather, the problem is that the networking configuration required to facilitate the sharing is not in place by the time Vagrant tries to auto-mount the shares.

Specifically, I need to assign a static IP address in the VM, and Vagrant does not support this capability with the Hyper-V provider (see both https://www.vagrantup.com/docs/hyperv/limitations.html and https://github.com/hashicorp/vagrant/issues/8384 ).

In terms of possible workarounds/solutions to this problem, the "manual" mount command that @dominis describes in https://github.com/hashicorp/vagrant/issues/936#issuecomment-7179034 would be different for Hyper-V machines (i.e., it would not use -t vboxsf); I mention this only to underscore the fact that this particular "solution" is provider-specific and therefore less than ideal.

The approach that @cecilemuller describes in https://github.com/hashicorp/vagrant/issues/936#issuecomment-288063253 is promising, but it has two significant drawbacks: a) it's box-name specific, and b) it requires vagrant up && vagrant reload to be fully-provisioned.

In essence, I'd really love to be able to defer shared folder mounting until after Vagrant has run an arbitrary provisioner script in which I'm able to do whatever is needed within the guest OS to configure the static IP address.

Ultimately, I'm curious if anybody has found a better way to do this recently... thanks in advance!
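
One pattern that might be worth trying (a sketch under assumptions, not a tested Hyper-V recipe): keep Vagrant from auto-mounting the share, configure the network in a normal provisioner, then perform the mount in an always-run provisioner. disabled: true and run: 'always' are real Vagrant options; configure_static_ip.sh and the mount command are placeholders for whatever the environment actually needs:

  # declared but never auto-mounted by Vagrant
  config.vm.synced_folder "project", "/srv/project", disabled: true

  # 1) bring up the static IP inside the guest (hypothetical script)
  config.vm.provision :shell, path: "configure_static_ip.sh"

  # 2) mount manually on every boot, once networking is in place (placeholder command)
  config.vm.provision :shell, run: "always",
    inline: "mount -t cifs -o username=vagrant //192.168.1.10/project /srv/project"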

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
