I use the Azure plugin for Vagrant to create virtual machines. With Azure, synced folders have to be configured using rsync.
In my case, I'd like the synced folder to be rsynced onto a folder that first needs to be set up by a shell provisioning script.
Using something like:
config.vm.provision "mount data disk", type: "shell", inline: '...'
config.vm.synced_folder(src, dst, type: 'rsync')
It appears that the rsync action is performed before the 'mount data disk' provisioning script, which is not what I expect.
Am I missing something?
Thanks for your help.
Regards
Hi @lionelperrin
Thank you for opening an issue. Can you use the rsync shared folders instead?
I have a different use case for needing this --
We use configuration management to build everything on top of a very minimal RHEL7 base image, which lacks even nfs-utils and rsync. We used to use HGFS for RHEL5/6 because we had VMware Tools installed, but with RHEL7 we're moving to open-vm-tools based on VMware's recommendation for RHEL7 systems. open-vm-tools does not ship HGFS, which is why we can't use it.
I was hoping to use nfs-utils or rsync, but I need a provisioning script to run prior to the synced_folder step that attaches the system to Satellite first, so that it can then install the packages needed for the synced folders.
Ah - this is actually a duplicate of https://github.com/mitchellh/vagrant/issues/936. Check that issue for some technical reasons why this isn't possible and some workarounds. Thanks! :smile:
There's not really any technical reason given, and the workaround doesn't look like it will work for the vmware_fusion provider.
Rather than encouraging hacking around Vagrant's limitations, I wish there would be some reconsideration of #936, as it seems there's a lot of desire for such a feature given all the +1's in that thread.
I'll dig through the vagrant code and see if I can manually setup the nfs mount, so that I can have the provisioner mount it once it installs nfs-utils after the package is available.
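In case it helps anyone else, the kind of thing I have in mind is roughly this (an untested sketch -- the host IP, export path, and mount point are placeholders, and it assumes the host already exports the directory over NFS):

```ruby
# Disable Vagrant's own synced folder handling entirely, then mount by hand
# once the packages are installable. Sketch only; values are placeholders.
config.vm.synced_folder ".", "/vagrant", disabled: true

config.vm.provision "manual nfs mount", type: "shell", inline: <<-SHELL
  # (register with Satellite here so yum can reach the repos)
  yum install -y nfs-utils
  mkdir -p /vagrant
  mount -t nfs 192.168.33.1:/path/on/host /vagrant
SHELL
```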
For the record, I'm encountering the same issue.
Trying to provision using a minimal CentOS 7 image on Hyper-V, and finding that cifs-utils is missing.
As it is now, the provisioning has to fail, and then I need to:
1.) vagrant ssh (PowerShell), or connect to the instance via PuTTY/SSH
2.) sudo yum install cifs-utils -y, and wait for the install
3.) exit
4.) vagrant reload --provision (PowerShell)
I need the ability to streamline this with a simple yum install cifs-utils -y. Why is this still an issue?
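For now, the least painful workaround I can see is to disable Vagrant's own SMB mount and do it from a shell provisioner once cifs-utils is installed. A rough sketch (the host IP, share name, and credentials below are placeholders):

```ruby
# Sketch only: skip Vagrant's SMB mount, install cifs-utils ourselves,
# then mount the share manually. All values are placeholders.
config.vm.synced_folder ".", "/vagrant", disabled: true

config.vm.provision "shell", inline: <<-SHELL
  yum install -y cifs-utils
  mkdir -p /vagrant
  mount -t cifs -o vers=2.0,username=HyperV,password=secret //10.0.75.1/my_share /vagrant
SHELL
```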
@KptnKMan I have precisely the same issue... almost 3.5 years later.
Were you ever able to resolve this to your satisfaction? Or are you still employing the manual workaround described in your post?
What I find most odd is the following behavior:
1.) This configuration fails with the error that follows:
config.vm.synced_folder "../data", "/vagrant_data", type: "smb", smb_username: "HyperV", smb_password: "secret"
==> default: Preparing SMB shared folders...
Vagrant requires administrator access to create SMB shares and
may request access to complete setup of configured shares.
==> default: Configuring proxy environment variables...
==> default: Configuring proxy for Yum...
==> default: Mounting SMB shared folders...
default: C:/Users/bjohnson/Work/Web/data => /vagrant_data
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=2.0,credentials=/etc/smb_creds_vgt-2c31f9fce8b39081aa64f46f3e8218d6-adda498f781708cde2d8e46c475e9593,uid=1000,gid=1000 //10.0.75.1/vgt-2c31f9fce8b39081aa64f46f3e8218d6-adda498f781708cde2d8e46c475e9593 /vagrant_data
The error output from the last command was:
mount: wrong fs type, bad option, bad superblock on //10.0.75.1/vgt-2c31f9fce8b39081aa64f46f3e8218d6-adda498f781708cde2d8e46c475e9593,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
As noted elsewhere, installing the cifs-utils package and running vagrant reload --provision "fixes" the issue.
2.) With this configuration, which I thought to try only because it's mentioned under Common Issues at the bottom of https://www.vagrantup.com/docs/synced-folders/smb.html:
config.vm.synced_folder "../data", "/vagrant_data", type: "smb", mount_options: ["username=HyperV","password=secret"]
Vagrant prompts me for SMB credentials, even though they are clearly specified:
==> default: Preparing SMB shared folders...
default: You will be asked for the username and password to use for the SMB
default: folders shortly. Please use the proper username/password of your
default: account.
default:
default: Username: HyperV
default: Password (will be hidden):
but it works!
Ultimately, my question is this: when Vagrant is clearly using the SMB "Shared Folder" type in both scenarios, as evidenced by "default: Preparing SMB shared folders...", why does it a) fail outright in the first case, and b) prompt me for credentials in the second case, despite them being specified in the config, but then succeed?!
The fact that the second scenario can succeed at all, regardless of the oddity around prompting for credentials, indicates that the guest is indeed capable of mounting the shares without having to install cifs-utils. So why the anomalous behavior between the two scenarios?
Any guidance here would be appreciated tremendously.
P.S. I'm using the Hyper-V provider, in which case the vboxsf filesystem mentioned in the first scenario is irrelevant; ideally, Vagrant would detect that and omit it from the error message.
I managed to solve this by including both the preferred SMB credential syntax and the legacy syntax:
config.vm.synced_folder "../data", "/vagrant_data", type: "smb", smb_username: "HyperV", smb_password: "secret", mount_options: ["username=HyperV","password=secret"]
As silly as it is, this works because smb_username/smb_password prevent Vagrant from prompting for credentials (which are ignored anyway), and including mount_options "makes it work" by satisfying the ancient CIFS kernel extension that is used in the absence of the cifs-utils package in CentOS 7.
One can jazz this up a bit by making it dynamic, thereby obviating the need to store the credentials insecurely in the Vagrantfile by instead storing them in the environment:
config.vm.synced_folder "../oracle-data", "/vagrant_data", type: "smb", smb_username: ENV["VAGRANT_SMB_USERNAME"], smb_password: ENV["VAGRANT_SMB_PASSWORD"], mount_options: ["username=" + ENV["VAGRANT_SMB_USERNAME"],"password=" + ENV["VAGRANT_SMB_PASSWORD"]]
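To go with that, the credentials just need to be exported in the shell session before running Vagrant (the values here are placeholders, of course):

```shell
# Export the SMB credentials in the session that will run Vagrant, so they
# never have to live in the Vagrantfile. Placeholder values shown.
export VAGRANT_SMB_USERNAME=HyperV
export VAGRANT_SMB_PASSWORD=secret
```

Then run vagrant up from the same session, and the ENV lookups in the Vagrantfile pick them up.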
✨
I'd like to add my "+1" to this issue. I would also like the ability to execute a provisioning phase before configuring the synced_folder, or to postpone the synced_folder configuration until after provisioning, depending on how you look at it.
In my use case, the provisioning adds a new user, and the synced_folder would have its owner and group set to this user (using a standard vbox shared folder).
I guess I could make this user part of some default group and also set the synced_folder to that group, but I still think it would be nice to be able to control the phases.
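For what it's worth, the group-based workaround I'm describing would look roughly like this (a sketch: it assumes the base box already has a "vagrant" group, and "appuser" is a placeholder name):

```ruby
# Mount the share with a group that already exists in the base box, since
# synced folders are configured before provisioners run. Sketch only.
config.vm.synced_folder "./data", "/srv/data", owner: "vagrant", group: "vagrant"

# The provisioner then creates the new user and adds it to that group so
# it can access the share.
config.vm.provision "shell", inline: <<-SHELL
  useradd -m appuser
  usermod -aG vagrant appuser
SHELL
```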
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.