I'm running Vagrant 1.8.6. This is not the latest version, but I can't run 1.8.7 due to #8024, nor 1.9.0 due to #8088.
Host operating system: Windows 7
Guest operating system: CentOS 7
Vagrant.configure(2) do |config|
  config.vm.box = 'puppetlabs/centos-7.2-64-puppet-enterprise'

  config.vm.define :master do |master|
    master.vm.network :private_network, ip: '10.20.1.10'
  end

  config.vm.define :node do |node|
    node.vm.network :private_network, ip: '10.20.1.11'
  end
end
Each VM should be assigned the IP address specified in the Vagrantfile.
No IP address is assigned on first boot. However, the address is assigned after restarting the VM.
Log in to one of the nodes:
vagrant ssh master
Verify that the IP address is actually being configured:
cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.20.1.10
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END
ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::a00:27ff:fe8b:416d prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:8b:41:6d txqueuelen 1000 (Ethernet)
        RX packets 66 bytes 21226 (20.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 65 bytes 11106 (10.8 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
exit
vagrant halt master
vagrant up master
vagrant ssh master
ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.20.1.10 netmask 255.255.255.0 broadcast 10.20.1.255
        inet6 fe80::a00:27ff:fe8b:416d prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:8b:41:6d txqueuelen 1000 (Ethernet)
        RX packets 9 bytes 3078 (3.0 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 21 bytes 1586 (1.5 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This seems very similar to #2968; however, that was fixed in a previous version.
I am seeing very similar behavior on a CentOS VM (running on macOS). It happens with Vagrant 1.9.1 and NOT 1.9.0, and it manifests as a failure to mount our defined synced folders between host and VM, because the host-only interface doesn't come up.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "consumerservices" do |consumerservices|
    consumerservices.vm.box = "bento/centos-7.1"

    consumerservices.vm.synced_folder "./code", "/vagrant/code", type: "nfs"
    consumerservices.vm.synced_folder "./deployments", "/vagrant/deployments", type: "nfs"
    consumerservices.vm.synced_folder "./logs", "/vagrant/logs", type: "nfs"
    consumerservices.vm.synced_folder "./backup", "/vagrant/backup", type: "nfs"
    consumerservices.vm.synced_folder "./setup", "/vagrant/setup", type: "nfs"

    consumerservices.vm.provider "virtualbox" do |v|
      v.memory = (`sysctl -n hw.memsize`.to_i / 1024) / 1024 / 4
      v.cpus = 2
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
      v.customize ["modifyvm", :id, "--ioapic", "on"]
    end

    consumerservices.vm.network "private_network", ip: "192.168.50.10"
    consumerservices.vm.host_name = "ConsumerServiceBundler"
    consumerservices.vm.provision "shell", inline: "timedatectl set-timezone America/New_York"
    consumerservices.vm.provision "shell", inline: "cd /vagrant/setup && ./setup.sh"
  end
end
Startup output
Daniel-Welcomes-MacBook-Pro:consumer-service-bundler danielwelcome$ vagrant up
Bringing machine 'consumerservices' up with 'virtualbox' provider...
==> consumerservices: Checking if box 'bento/centos-7.1' is up to date...
==> consumerservices: Clearing any previously set forwarded ports...
==> consumerservices: Clearing any previously set network interfaces...
==> consumerservices: Preparing network interfaces based on configuration...
consumerservices: Adapter 1: nat
consumerservices: Adapter 2: hostonly
==> consumerservices: Forwarding ports...
consumerservices: 22 (guest) => 2222 (host) (adapter 1)
==> consumerservices: Running 'pre-boot' VM customizations...
==> consumerservices: Booting VM...
==> consumerservices: Waiting for machine to boot. This may take a few minutes...
consumerservices: SSH address: 127.0.0.1:2222
consumerservices: SSH username: vagrant
consumerservices: SSH auth method: private key
consumerservices: Warning: Remote connection disconnect. Retrying...
==> consumerservices: Machine booted and ready!
==> consumerservices: Checking for guest additions in VM...
==> consumerservices: Setting hostname...
==> consumerservices: Configuring and enabling network interfaces...
==> consumerservices: Exporting NFS shared folders...
==> consumerservices: Preparing to edit /etc/exports. Administrator privileges will be required...
==> consumerservices: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp 192.168.50.1:/Users/danielwelcome/Development/IdeaProjects/consumer-service-bundler/code /vagrant/code
result=$?
if test $result -eq 0; then
  if test -x /sbin/initctl && command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
    /sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant/code
  fi
else
  exit $result
fi
Stdout from the command:
Stderr from the command:
mount.nfs: access denied by server while mounting 192.168.50.1:/Users/danielwelcome/Development/IdeaProjects/consumer-service-bundler/code
If I SSH into the machine, the host-only interface doesn't have an IP assigned:
[vagrant@ConsumerServiceBundler ~]$ ifconfig -a
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007 prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:f6:b0:07 txqueuelen 1000 (Ethernet)
        RX packets 643 bytes 71930 (70.2 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 421 bytes 60809 (59.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s8: flags=4098<BROADCAST,MULTICAST> mtu 1500
        ether 08:00:27:e3:f8:fa txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 11 bytes 818 (818.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 16 bytes 1172 (1.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16 bytes 1172 (1.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The IP address is configured:
[vagrant@ConsumerServiceBundler ~]$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.50.10
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END
My issue differs from @rupert654's in that halting and starting the VM doesn't fix the issue. However, I can restart networking and the interface will come back up. If I SSH into the VM while the shared folders are being mounted and restart networking, the mount procedure succeeds.
[vagrant@ConsumerServiceBundler ~]$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007 prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:f6:b0:07 txqueuelen 1000 (Ethernet)
        RX packets 669 bytes 73868 (72.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 436 bytes 62823 (61.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 16 bytes 1172 (1.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16 bytes 1172 (1.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[vagrant@ConsumerServiceBundler ~]$ sudo /etc/init.d/network restart
Restarting network (via systemctl): [ OK ]
[vagrant@ConsumerServiceBundler ~]$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007 prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:f6:b0:07 txqueuelen 1000 (Ethernet)
        RX packets 755 bytes 80758 (78.8 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 490 bytes 68029 (66.4 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.50.10 netmask 255.255.255.0 broadcast 192.168.50.255
        inet6 fe80::a00:27ff:fee3:f8fa prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:e3:f8:fa txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 21 bytes 1566 (1.5 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 16 bytes 1172 (1.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16 bytes 1172 (1.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I was having this issue too from 1.8.6; here's the cross-report: https://github.com/geerlingguy/drupal-vm/issues/1040
I think this is a regression in https://github.com/mitchellh/vagrant/pull/8052. I reverted these changes in my local 1.9.1 installation and it is working again. Can anyone else confirm?
@andyshinn I tried this out (built Vagrant from master and reverse-patched it) and everything worked. I'm going to try without the reverse patch too.
Yup, you've found the regression for sure. Without the reverse patch, the error comes back.
I've hit this too; I confirmed it when I ran my automated build/test cycle for my geerlingguy/centos7 Packer/Vagrant box: https://github.com/geerlingguy/packer-centos-7
I just ran:
$ packer build --only=virtualbox-iso centos7.json
$ vagrant up virtualbox
And I get the error when it tries mounting the NFS share:
==> virtualbox: Exporting NFS shared folders...
==> virtualbox: Preparing to edit /etc/exports. Administrator privileges will be required...
==> virtualbox: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp 172.16.3.1:/Users/jeff.geerling/Dropbox/VMs/packer/centos7 /vagrant
result=$?
if test $result -eq 0; then
  if test -x /sbin/initctl && command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
    /sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
  fi
else
  exit $result
fi
Stdout from the command:
Stderr from the command:
mount.nfs: access denied by server while mounting 172.16.3.1:/Users/jeff.geerling/Dropbox/VMs/packer/centos7
This was on macOS Sierra 10.12.1, using Vagrant 1.9.1 and Packer 0.12.0.
@chrisroberts for this one, the command being submitted on RHEL 7/CentOS 7 is:
if service NetworkManager status 2>&1 | grep -q running; then
  service NetworkManager restart
else
  service network restart
fi
The problem is this line:
NM_CONTROLLED=no
Since `service network restart` on RHEL 7/CentOS 7 is redirected to NetworkManager, it does nothing. So when RHEL 7/CentOS 7 is used, this should be:
NM_CONTROLLED=yes
The template should therefore take an argument for NM_CONTROLLED:
embedded/gems/gems/vagrant-1.9.1/templates/guests/redhat/network_static.erb
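As a quick sanity check on an affected guest, you can see both halves of this diagnosis at once. This is a sketch: the device name comes from the ifcfg files quoted earlier in the thread, and the commands are standard CentOS 7 tooling.

```
# The generated config opts the NIC out of NetworkManager...
grep NM_CONTROLLED /etc/sysconfig/network-scripts/ifcfg-enp0s8

# ...so NetworkManager reports the device as "unmanaged" and will never
# configure it, no matter how many times the NM service is restarted:
nmcli -t -f device,state dev status
```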
@kikitux Thanks a lot. I confirm that your fix sets everything back on track; it's working fine.
Vagrant version: 1.9.1
As a workaround, until this gets fixed, you can use the following (replace IFACE with eth1/ens34/the name of your interface):
config.vm.provision "shell", inline: "ifup IFACE", run: "always"
I'm not sure that changing NM_CONTROLLED=no to NM_CONTROLLED=yes is always the right answer. It's definitely one option that would make the #8052 code work, but I'm not sure whether it's the correct fix.
It's not clear yet why #8052 was made, because the PR doesn't actually say precisely what problem it's fixing. It says `service network restart` "might fail" on RHEL 7/Fedora, but not how or why. And #8120 reports that that solution causes the same problem reported here on Fedora. So #8052 doesn't seem to have been tested with a static IP (host-only interface).
I think it comes down to this:
What are the pros and cons of each approach? Why choose one over the other?
Is it a problem that Vagrant 1.9.1 is not setting the correct SELinux labels on the ifcfg-* file it creates for the private NIC it adds? I am seeing errors in audit2allow, and the SELinux labels (as well as the file owner and permissions) on the ifcfg-eth1 file are wrong. CentOS 7.3.
I am defining the NIC in my Vagrantfile like so:
config.vm.network "private_network", ip: "192.168.33.10", nic_type: "virtio"
Notice the ifcfg-eth1 labels, ownership, and permissions:
-rw-r--r--. root root system_u:object_r:net_conf_t:s0 /etc/sysconfig/network-scripts/ifcfg-eth0
-rw-rw-r--. vagrant vagrant unconfined_u:object_r:user_tmp_t:s0 /etc/sysconfig/network-scripts/ifcfg-eth1
-rw-r--r--. root root system_u:object_r:net_conf_t:s0 /etc/sysconfig/network-scripts/ifcfg-lo
I can fix the permissions and ownership, and run chcon to fix the labels, but they get changed back on restart. Restarting the network service seems to bring up the interface, but the labels might as well be fixed too.
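For reference, the manual cleanup described above looks roughly like this on the guest. It is only a stopgap, since Vagrant reverts it the next time it rewrites the file:

```
# Fix the ownership and mode on the generated file:
sudo chown root:root /etc/sysconfig/network-scripts/ifcfg-eth1
sudo chmod 644 /etc/sysconfig/network-scripts/ifcfg-eth1

# Restore the default SELinux context (net_conf_t, per the listing above),
# rather than hand-rolling it with chcon:
sudo restorecon -v /etc/sysconfig/network-scripts/ifcfg-eth1
```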
@karlkfi maybe just enable NM_CONTROLLED inline?
# Restart network (through NetworkManager if running)
if service NetworkManager status 2>&1 | grep -q running; then
  sed -i -e 's/^NM_CONTROLLED=no/NM_CONTROLLED=yes/g' /etc/sysconfig/network-scripts/ifcfg-*
  service NetworkManager restart
else
  service network restart
fi
I would guess a large number of RHEL-like Vagrant machines will be running NetworkManager out of the box, but if users wish to change that, it should be up to the end user to implement any non-standard behavior, either in a provisioning script or by creating a file via the kickstart config like so:
# ifcfg-eth0
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<-EOT
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
EOT
FWIW, in our Packer builds we also include NetworkManager-config-server in our kickstart:
```
URL : http://www.gnome.org/projects/NetworkManager/
Summary : NetworkManager config file for "server-like" defaults
Description :
This adds a NetworkManager configuration file to make it behave more
like the old "network" service. In particular, it stops NetworkManager
from automatically running DHCP on unconfigured ethernet devices, and
allows connections with static IP addresses to be brought up even on
ethernet devices with no carrier.
This package is intended to be installed by default for server
deployments.
```
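To see exactly what the package drops onto the box (the location of its config snippet varies between releases, so listing it is safer than hard-coding a path):

```
# List the files installed by the package, including its NM config file:
rpm -ql NetworkManager-config-server
```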
Good to know about NetworkManager-config-server, thanks.
As for enabling NM_CONTROLLED inline, that just feels like such a hack. Vagrant already generates the ifcfg file; the generation should really be updated to optionally support NetworkManager, based on some sort of overridable config with an intelligently auto-detected default.
@karlkfi I believe the default/only solution should just be to restart the network service and let the system take the appropriate action based on how it's configured, i.e. whether it should restart NetworkManager.service or something else like systemd-networkd.service.
@tonylambiris: That's what it used to do. If that worked everywhere, it wouldn't have been changed in the first place.
Just for the sake of more information: same issue for me on CentOS 7 with Vagrant 1.9.1. Details here: https://github.com/puphpet/puphpet/issues/2533
@karlkfi For me, running `ifup eth1` on CentOS 7.3 and Vagrant 1.9.1 configures the interface without having to restart any services.
Reproduced the issue on Vagrant 1.9.1 and CentOS 7.1.
The host-only adapter (eth1) doesn't come up because `service network restart` never runs after Vagrant creates ifcfg-eth1 (with NM_CONTROLLED=no). The NetworkManager service is running/enabled by default, so the conditional logic within the regression restarts NetworkManager, but not the network service.
After reviewing some of the function defs in /etc/sysconfig/network-scripts/network-functions, I've come up with this workaround for configure_networks.rb:
if [ "$(LANG=C nmcli -t -f running g 2>/dev/null)" = "running" ]; then
service NetworkManager restart
fi
for s in $(LANG=C nmcli -t -f type,state d|grep -v loopback|cut -f2 -d: 2>/dev/null); do
if [ "$s" = "unmanaged" ]; then
service network restart
break
fi
done
I use the 'type' field of the 'device' object in nmcli, and grep to filter out loopback, which always shows up as 'unmanaged'. Essentially, if any interface other than loopback is in the 'unmanaged' state, run `service network restart`. 'Unmanaged' appears to be derived from NM_CONTROLLED=no.
@kikitux
Switching all interfaces to NM-controlled on RHEL/CentOS 7 is not a great solution, IMHO. Many people purposefully disable NetworkManager and prefer to use the network service instead.
It would be advisable to leave any specific service completely out of the logic checks, as one could be using systemd-networkd.service to configure network interfaces.
@tonylambiris Is that backward-compatible with RHEL/CentOS 6?
@mvolhontseff No, systemd was introduced in RHEL 7.
My pull request should help us: https://github.com/mitchellh/vagrant/pull/8148
On RHEL 5/6/7, restarting the network using `service network restart` works for NetworkManager users and others alike.
Still having these issues. Is there any timeline on this? My team is stuck.
@Artistan Consider downgrading Vagrant to 1.9.0: https://releases.hashicorp.com/vagrant/1.9.0/
Yep, IP works there, NFS does not... another issue.
@Artistan Alright, for NFS issues, try starting Vagrant as root/with sudo. If that doesn't fix your issue, I suggest you open a new issue.
Is it a regression of https://github.com/mitchellh/vagrant/issues/7791 ?
@mvolhontseff I don't understand the point of reinventing the wheel when this is already handled by RHEL network scripts:
23:23:38 builder ~ $ grep ^is_nm /etc/sysconfig/network-scripts/network-functions
is_nm_running ()
is_nm_active ()
is_nm_handling ()
is_nm_device_unmanaged ()
23:24:50 builder ~ $ sudo strace -e open ifup eth1 2>&1 | grep network-scripts
open("/etc/sysconfig/network-scripts/ifcfg-eth1", O_RDONLY) = 3
open("/etc/sysconfig/network-scripts/ifup-eth", O_RDONLY) = 3
open("/etc/sysconfig/network-scripts/ifcfg-eth1", O_RDONLY) = 3
open("/etc/sysconfig/network-scripts/ifup-post", O_RDONLY) = 3
open("/etc/sysconfig/network-scripts/ifcfg-eth1", O_RDONLY) = 3
23:25:59 builder ~ $ apropos ifup ifdown
ifdown (8) - bring a network interface up
ifup (8) - bring a network interface up
@Artistan can you add this to your Vagrantfile and give it a try (replace X with the interface number)?
config.vm.provision "shell", inline: "/sbin/ifup ethX"
It looks like most people using various CentOS boxes with Vagrant are getting hit by this bug, and I'm starting to get a larger deluge (not just a stream anymore) of reports from people whose downstream projects use my CentOS 7 box (geerlingguy/centos7), need private networking with Vagrant, and have upgraded to Vagrant 1.9.1.
Most people report that downgrading to 1.9.0 works, and I haven't had enough time to look more deeply into the issue. Is there an open PR to review to help get this fixed, or no PR at all yet? I'm currently just telling everyone to stay on 1.9.0 and not upgrade, but I wish I could give a better answer :(
@geerlingguy Hold on, my PR https://github.com/mitchellh/vagrant/pull/8148 has been accepted. (:
@mikefaille - Ah, I totally missed that. So now fingers crossed that 1.9.2 is the next 1.8.6-like release, where things 'just work' on all the different guest OSes again :)
I've been using
config.vm.provision "shell", inline: "systemctl restart network.service", run: "always"
as a workaround until this gets sorted out; hopefully someone can make use of this.
If you're like me and using @jerrywardlow's workaround but get errors during the `systemctl restart network.service` step, you can use this:
config.vm.provision "shell", inline: "sudo systemctl restart network 2>/dev/null || true", run: "always"
Restarting the entire network subsystem just feels super heavy-handed, especially considering some interfaces could be configured manually or externally (e.g. using flanneld or creating a bridge interface). Vagrant should only operate on the interface it defines and not make assumptions globally.
Could one of the project admins please explain why the `ifup` command isn't deemed sufficient for this task?
20:22:22 builder ~ # bash -x /usr/sbin/ifup eth1
[...TRIM...]
+ '[' -f ../network ']'
+ . ../network
++ NETWORKING=yes
++ HOSTNAME=builder
+ CONFIG=eth1
+ '[' -z eth1 ']'
+ need_config eth1
+ local nconfig
+ CONFIG=ifcfg-eth1
+ '[' -f ifcfg-eth1 ']'
+ return
+ '[' -f ifcfg-eth1 ']'
+ '[' 0 '!=' 0 ']'
+ source_config
+ CONFIG=ifcfg-eth1
+ DEVNAME=eth1
+ . /etc/sysconfig/network-scripts/ifcfg-eth1
++ NM_CONTROLLED=no
++ BOOTPROTO=none
++ ONBOOT=yes
++ IPADDR=172.27.1.10
++ NETMASK=255.255.255.0
++ DEVICE=eth1
++ HWADDR=52:54:00:0d:95:6a
++ PEERDNS=no
+ '[' -r keys-eth1 ']'
+ case "$TYPE" in
+ '[' -n 52:54:00:0d:95:6a ']'
++ echo 52:54:00:0d:95:6a
++ awk '{ print toupper($0) }'
+ HWADDR=52:54:00:0D:95:6A
+ '[' -n '' ']'
+ '[' -z eth1 -a -n 52:54:00:0D:95:6A ']'
+ '[' -z '' ']'
++ echo eth1
++ sed 's/[0-9]*$//'
+ DEVICETYPE=eth
+ '[' -z '' -a -n '' ']'
+ '[' -z '' ']'
+ REALDEVICE=eth1
+ '[' -z '' ']'
+ SYSCTLDEVICE=eth1
+ '[' eth1 '!=' eth1 ']'
+ ISALIAS=no
+ is_nm_running
++ LANG=C
++ nmcli -t --fields running general status
+ '[' running = running ']'
+ '[' eth1 '!=' lo ']'
+ nmcli con load /etc/sysconfig/network-scripts/ifcfg-eth1
+ is_false no
+ case "$1" in
+ return 0
+ '[' foo = fooboot ']'
+ '[' -n '' ']'
+ '[' -n '' -a '' = Bridge ']'
+ '[' '' = true -a -n '' -a eth1 '!=' lo ']'
+ '[' '' = yes ']'
+ '[' none = bootp -o none = dhcp ']'
+ '[' -x /sbin/ifup-pre-local ']'
+ OTHERSCRIPT=/etc/sysconfig/network-scripts/ifup-eth
+ '[' '!' -x /etc/sysconfig/network-scripts/ifup-eth ']'
+ '[' '!' -x /etc/sysconfig/network-scripts/ifup-eth ']'
+ exec /etc/sysconfig/network-scripts/ifup-eth ifcfg-eth1
RTNETLINK answers: File exists
@tonylambiris good point. I think this is the best way to go.
I think you can't use `service network restart`, because the firewall will be reloaded and you'll probably need to set it up again. I believe the best thing to do is to bring up only the interface that needs to be up: `ifup <newethdevice>`.
I downgraded to 1.9.0, which doesn't have this 1.9.1 behavior.
My workaround here, until a permanent fix:
# 1.9.1 workaround for centos/7
if Vagrant::VERSION == "1.9.1" && config.vm.box == "centos/7"
  config.vm.provision "shell", inline: "service network restart", run: "always"
end
This is fixed in the 1.9.2 release via PR #8148. Thanks!
So what happens when distros start migrating to systemd-networkd as their network manager?
I am seeing this behaviour in 1.9.5
Seeing this behavior as well with Vagrant 1.9.3 and Fedora 25.
Same for me on 1.9.5, Scientific Linux 6.1, host macOS Sierra.
config.vm.network "private_network", ip: "192.168.10.10"
I'm getting eth0 and eth2 both assigned 192.168.10.10.
@ruibinghao @ianmiell @nezaboravi Can you test Vagrant v1.9.2? https://releases.hashicorp.com/vagrant/1.9.2/
@tonylambiris
> So what happens when distros start migrating to systemd-networkd as their network manager?

I can't answer for versions >= 1.9.3, but with v1.9.2 you should be OK, since I use the init script named network to (re)start all interfaces, independently of which network manager is in use.
Let me first figure out how to uninstall the current one, then I'll give 1.9.2 a try.
This problem was discussed and closed as a duplicate of issue #8115.
This issue was diagnosed as being related to the fix in #8052.
Manually reverting #8052, as mentioned in the comment above, in a local installation makes everything work again.
FWIW, this is still happening for me on Vagrant 1.9.8, using a box that has CentOS 7.0 (base box: https://app.vagrantup.com/peichman-umd/boxes/ruby/versions/1.0.0).
I ran into a few of the bugs regarding CentOS 7, systemd and Vagrant 1.9+.
https://github.com/mitchellh/vagrant/issues/8115
https://github.com/puphpet/puphpet/issues/2533
I had an issue where setting up private networking would bring up some random subnet, remove the Vagrant host IP (10.0.2.15), and leave the VM completely unreachable on the network. When checking the interfaces, I would see lo, eth0, eth1, and enp0s8. I resolved this issue by rebuilding my images to use the following in the kickstart:
bootloader --append="net.ifnames=0 biosdevname=0 crashkernel=auto" --location=mbr --boot-drive=sda
I also added an additional NetworkManager package:
NetworkManager-config-server
You can also manually modify /etc/default/grub and rebuild the initrd.
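On an existing CentOS 7 guest, that manual route looks roughly like this (a sketch assuming a stock BIOS-boot grub2 layout; EFI systems write the config elsewhere):

```
# Append the same kernel arguments the kickstart bootloader line sets:
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&net.ifnames=0 biosdevname=0 /' /etc/default/grub

# Regenerate the grub config so the arguments take effect on the next boot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```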
I tested this and found it working on Vagrant 2.0.
@ecray Is your assessment of this bug that the problem is in the base box and not the Vagrant code?
I always blame systemd. But it seems like Vagrant is not detecting whether it should use persistent device naming, since it creates eth0 but also enp0s8 when a private network is specified.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.