Vagrant: Multi-machine Vagrantfile uses wrong configuration references when a machine is specified in vagrant up

Created on 9 Jul 2019  ·  13 Comments  ·  Source: hashicorp/vagrant

Vagrant version

Vagrant 2.2.5

Host operating system

macOS Mojave / Ubuntu 18.04

Guest operating system

Ubuntu 18.04 / Ubuntu 16.04

Vagrantfile

VAGRANTFILE_API_VERSION = '2'
ROOT_DIRECTORY = File.dirname(File.realpath(__FILE__))

Vagrant.require_version ">= 2.1.1"
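
# NOTE: ew_domain, vagrantconfig, personal_files_dir and the CommandExecutor
# class are referenced below but defined elsewhere in this setup; they are not
# shown in this snippet.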


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.trigger.before :up, :provision do |trigger|
    trigger.name = "upgrade or install librarian-puppet libs"
    trigger.info = "upgrade or install librarian-puppet libs"
    require 'fileutils'
    FileUtils.mkdir_p './environments/production/modules'
    trigger.on_error = :halt
    trigger.run  = {path: "bin/librarian-puppet.sh"}
  end

  config.vm.provision :shell, path: 'bin/bootstrap.sh'
  config.vm.synced_folder './environments/production/site/easywelfare/templates/',
                          '/tmp/vagrant-puppet/templates'
  config.vm.box_check_update = false

  config.cache.scope = :box
  config.cache.auto_detect = true
  config.vm.boot_timeout = 600


  { :'test' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-test-1.public.#{ew_domain}",
      :ip         => '192.168.33.10',
      :memory     => '2048',
      :cpus       => 2,
      :disk_size  => '80GB',
      :autostart  => false,
      :primary    => false
    },
    :'test1' => {
      :os         => 'bento/ubuntu-18.04',
      :hostname   => "development-test-2.public.#{ew_domain}",
      :ip         => '192.168.33.11',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'ws' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-ws-1.public.#{ew_domain}",
      :ip         => '192.168.33.12',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'all' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-all-1.public.#{ew_domain}",
      :ip         => '192.168.33.13',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'mail' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-mail-1.public.#{ew_domain}",
      :ip         => '192.168.33.14',
      :memory     => '1024',
      :cpus       => 3,
      :autostart  => false,
      :primary    => false
    },
   :'monitoring' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-monitoring-1.development.aws.#{ew_domain}",
      :ip         => '192.168.33.16',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'puppetserver' => {
      :os           => 'bento/ubuntu-16.04',
      :hostname     => "development-puppetserver-1.development.aws.#{ew_domain}",
      :ip           => '192.168.33.17',
      :memory       => '4192',
      :cpus         => 2,
      :autostart    => false,
      :primary      => false
    },
    :'development' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.18',
      :memory     => '2048',
      :cpus       => 2,
      :disk_size  => '80GB',
      :autostart  => false,
      :primary    => false
    },
    :'development_test' => {
      :os         => 'bento/ubuntu-18.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'development_test1' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.20',
      :memory     => '3192',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'cicd' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-cicd-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'openvpn' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-openvpn-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'intranet' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-intranet-1.public.#{ew_domain}",
      :ip         => '192.168.33.20',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kubemaster' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubemaster-1.public.#{ew_domain}",
      :ip         => '192.168.33.20',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kubenode' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubenode-1.public.#{ew_domain}",
      :ip         => '192.168.33.21',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kubenode2' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubenode-2.public.#{ew_domain}",
      :ip         => '192.168.33.22',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kibana' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kibana-1.public.#{ew_domain}",
      :ip         => '192.168.33.21',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'aptcacher' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-aptcacher-1.development.aws.#{ew_domain}",
      :ip         => '192.168.33.24',
      :memory     => '1024',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    }
  }.each do |name, configuration|

    config.vm.define name,
      primary: configuration[:primary],
      autostart: configuration[:autostart] do |instance|

        if configuration[:os].include?('16.04') || configuration[:os].include?('18.04')
          instance.cache.synced_folder_opts = {
            owner: "_apt",
            group: "_apt",
            mount_options: ["dmode=777", "fmode=666"]
          }
        end

        # change disk sizing if disk_size is declared
        if configuration.has_key?(:disk_size)
           puts " ip is #{configuration[:ip]}"
           puts " hostname is #{configuration[:ip]}"
           puts " name is #{name}"
           puts "disk size required is #{configuration[:disk_size]}"
           config.disksize.size = configuration[:disk_size]
        end

        instance.vm.box = configuration[:os]
        instance.vm.hostname = configuration[:hostname]
        instance.vm.network 'private_network', ip: configuration[:ip]


        if vagrantconfig
          if vagrantconfig[name.to_s]
             vagrantconfig[name.to_s].each do |settings|
              if settings['public_network'] and settings['ipaddress']
                instance.vm.network "public_network", ip: settings['ipaddress']
              elsif settings['public_network']
                instance.vm.network "public_network"
              end
             end
          end
        end

        # VirtualBox
        instance.vm.provider 'virtualbox' do |vb|
          # Boot in headless mode
          vb.gui = false

          # VM customization
          vb.cpus = configuration[:cpus]
          vb.customize ['modifyvm', :id, '--memory', configuration[:memory]]
          vb.customize ['modifyvm', :id, '--natnet1', '192.168.199/24']

          if vagrantconfig
            if vagrantconfig[name.to_s]
              vagrantconfig[name.to_s].each do |settings|
                if settings['ram']
                  vb.customize ['modifyvm', :id, '--memory', settings['ram']]
                end
                if settings['cpus']
                  vb.cpus = settings['cpus']
                end
              end
            end
          end
        end



      if configuration.has_key?(:disk_size)
        instance.vm.provision :shell, path: 'bin/resize_lvm.sh'
      end

      instance.vm.provision 'puppet' do |puppet|
        puppet.environment = 'production'
        puppet.environment_path = 'environments'
        puppet.hiera_config_path = 'etc/hiera-vagrant.yaml'
        puppet.module_path = ['environments/production/site',
                              'environments/production/modules',
                              'environments/production/custom_modules']
        puppet.options = ['--verbose',
#                          '--debug',
#                          '--trace',
                          '--show_diff',
                          '--write-catalog-summary',
                          '--disable_warnings=deprecations',
                          '--pluginsync',
                          '--graph']
      end

         project_dir = '/opt/projects/Easywelfare'

         os_router = CommandExecutor.new(personal_files_dir, configuration[:ip], project_dir)

         config.trigger.after :provision,:up  do |trigger|
            trigger.only_on = "development"
            trigger.ignore = :halt
            trigger.name = "mount projects dir"
            trigger.info = "mount projects dir"
             retries = 0
             max_retries = 10
                begin
                  port_open?(configuration[:ip], 22)
                rescue Exception => e
                  if retries <= max_retries
                    retries += 1
                    sleep 2 ** retries
                    retry
                  else
                    raise "Timeout: #{e.message}"
                  end
                end
             if ! os_router.already_mounted?
               trigger.run = {inline: os_router.mount}
             end
         end

         config.trigger.before :halt do |trigger|
            trigger.only_on = "development"
            trigger.name = "umount projects dir"
            trigger.info = "umount projects dir"
             if os_router.already_mounted?
               trigger.run= {inline: os_router.umount}
             end
         end

      if File.exists?('customize-after.sh')
        instance.vm.provision 'shell', path: 'customize-after.sh'
      end
    end
  end
end

require 'socket'
require 'timeout'

# Returns true if a TCP connection to ip:port succeeds within `seconds`.
def port_open?(ip, port, seconds=1)
  Timeout::timeout(seconds) do
    begin
      TCPSocket.new(ip, port).close
      true
    rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
      false
    end
  end
rescue Timeout::Error
  false
end

Debug Output

https://gist.github.com/lzecca78/c2019a65990da676285c6ba3a85e96c2#file-vagrant_debug_output

Expected behavior

When I run vagrant up development, I expect the references to the development hash to be correct and only that hash to be used.

Actual behavior

For some reason, provisioning also picks up the first element of the list (in the case above, the hash with key test), passing wrong values to the functions above (i.e. it sets the IP address to 192.168.33.10 instead of 192.168.33.18). It seems that Vagrant loops over the whole list before starting to provision. This wasn't happening with the previous version of Vagrant (2.2.4).

Steps to reproduce

  1. vagrant up development
  2. see the log on stdout


All 13 comments

Hi @lzecca78 - Do you have a minimal Vagrantfile that reproduces this behavior? The one you shared is quite large, and there are a number of things that could go wrong, or mistakes that could be made on the user side, considering you are iterating over a Ruby hash to generate Vagrant configs. It might also help you to debug if you print the name and configuration hash so you can see whether it's using the right values.

@briancain sorry, you are right. I made a slim one; basically, the behaviour described in the issue happens regardless of the complexity of the Vagrantfile. It seems that something changed in the iteration logic when multiple VMs are declared in the Vagrantfile. I added some puts statements that access the hash values in order to better highlight the wrong behaviour. See how this output changes from Vagrant 2.2.4 to 2.2.5.

VAGRANTFILE_API_VERSION = '2'
ROOT_DIRECTORY = File.dirname(File.realpath(__FILE__))

Vagrant.require_version ">= 2.1.1"


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|


  config.cache.scope = :box
  config.cache.auto_detect = true
  config.vm.boot_timeout = 600


  { :'test' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-test-1.public.#{ew_domain}",
      :ip         => '192.168.33.10',
      :memory     => '2048',
      :cpus       => 2,
      :disk_size  => '80GB',
      :autostart  => false,
      :primary    => false
    },
    :'test1' => {
      :os         => 'bento/ubuntu-18.04',
      :hostname   => "development-test-2.public.#{ew_domain}",
      :ip         => '192.168.33.11',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'ws' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-ws-1.public.#{ew_domain}",
      :ip         => '192.168.33.12',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
      :'puppetserver' => {
      :os           => 'bento/ubuntu-16.04',
      :hostname     => "development-puppetserver-1.development.aws.#{ew_domain}",
      :ip           => '192.168.33.17',
      :memory       => '4192',
      :cpus         => 2,
      :autostart    => false,
      :primary      => false
    },
    :'development' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.18',
      :memory     => '2048',
      :cpus       => 2,
      :disk_size  => '80GB',
      :autostart  => false,
      :primary    => false
    },
    :'unit' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-unit-1.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '1024',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'development_test' => {
      :os         => 'bento/ubuntu-18.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'development_test1' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-all-vm.public.#{ew_domain}",
      :ip         => '192.168.33.20',
      :memory     => '3192',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'cicd' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-cicd-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'openvpn' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-openvpn-vm.public.#{ew_domain}",
      :ip         => '192.168.33.19',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'kubemaster' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubemaster-1.public.#{ew_domain}",
      :ip         => '192.168.33.20',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kubenode' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubenode-1.public.#{ew_domain}",
      :ip         => '192.168.33.21',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kubenode2' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kubenode-2.public.#{ew_domain}",
      :ip         => '192.168.33.22',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'kibana' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-kibana-1.public.#{ew_domain}",
      :ip         => '192.168.33.21',
      :memory     => '2048',
      :cpus       => 1,
      :autostart  => false,
      :primary    => false
    },
    :'gitlab' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-gitlab-1.development.aws.#{ew_domain}",
      :ip         => '192.168.33.25',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    },
    :'redmine' => {
      :os         => 'bento/ubuntu-16.04',
      :hostname   => "development-redmine-1.development.aws.#{ew_domain}",
      :ip         => '192.168.33.26',
      :memory     => '2048',
      :cpus       => 2,
      :autostart  => false,
      :primary    => false
    }
  }.each do |name, configuration|

    config.vm.define name,
      primary: configuration[:primary],
      autostart: configuration[:autostart] do |instance|

        if configuration[:os].include?('16.04') || configuration[:os].include?('18.04')
          instance.cache.synced_folder_opts = {
            owner: "_apt",
            group: "_apt",
            mount_options: ["dmode=777", "fmode=666"]
          }
        end

        # change disk sizing if disk_size is declared
        puts "ip is #{configuration[:ip]}"
        puts "hostname is #{configuration[:hostname]}"
        puts "name is #{name}"
        if configuration.has_key?(:disk_size)
           puts "disk size required is #{configuration[:disk_size]}"
           config.disksize.size = configuration[:disk_size]
        end

        instance.vm.box = configuration[:os]
        instance.vm.hostname = configuration[:hostname]
        instance.vm.network 'private_network', ip: configuration[:ip]


        # VirtualBox
        instance.vm.provider 'virtualbox' do |vb|
          # Boot in headless mode
          vb.gui = false

          # VM customization
          vb.cpus = configuration[:cpus]
          vb.customize ['modifyvm', :id, '--memory', configuration[:memory]]
          vb.customize ['modifyvm', :id, '--natnet1', '192.168.199/24']
        end



      instance.vm.provision 'puppet' do |puppet|
        puppet.environment = 'production'
        puppet.environment_path = 'environments'
        puppet.hiera_config_path = 'etc/hiera-vagrant.yaml'
        puppet.module_path = ['environments/production/site',
                              'environments/production/modules',
                              'environments/production/custom_modules']
        puppet.options = ['--verbose',
#                          '--debug',
#                          '--trace',
                          '--show_diff',
                          '--write-catalog-summary',
                          '--disable_warnings=deprecations',
                          '--pluginsync',
                          '--graph']
      end


    end
  end
end

We seem to be seeing this on 2.2.5 as well. I'm also on macOS Mojave.

Our simplified Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.define "base" do |base|
  end

  config.vm.define "gui" do |gui|
    config.vm.provider "virtualbox" do |vb|
        vb.gui = true
    end
  end
end

On Vagrant 2.2.4, running vagrant up base would get you a normal headless VM. Now, on 2.2.5, vagrant up base opens up the VirtualBox GUI window.

@chrisko - that's because inside your "gui" guest you've defined a top-scope config option using config. If you just wanted it to apply to the gui guest, you would say:

config.vm.define "gui" do |gui|
    gui.vm.provider "virtualbox" do |vb|
        vb.gui = true
    end
end

So @lzecca78 - I wrote my own minimal Vagrantfile... this is essentially what you are doing, right? It works as expected for me:

 {:test1 => {:ip => "192.168.33.99"},
   :anothertest => {:ip => "192.168.33.100"}
  }.each do |name, conf|
    config.vm.define name do |a|
      a.vm.box = "bento/ubuntu-18.04"
      a.vm.network "private_network", ip: conf[:ip]
      a.vm.provision :shell, inline:<<-SHELL
      ip addr
      SHELL
    end
  end

Both guests get the right name-to-IP-address mapping as defined in the hash above...

Bringing machine 'test1' up with 'vmware_desktop' provider...
Bringing machine 'anothertest' up with 'vmware_desktop' provider...
==> test1: Cloning VMware VM: 'bento/ubuntu-18.04'. This can take some time...
==> test1: Checking if box 'bento/ubuntu-18.04' version '201906.18.0' is up to date...
==> test1: Verifying vmnet devices are healthy...
==> test1: Preparing network adapters...
==> test1: Starting the VMware VM...
==> test1: Waiting for the VM to receive an address...
==> test1: Forwarding ports...
    test1: -- 22 => 2222
==> test1: Waiting for machine to boot. This may take a few minutes...
    test1: SSH address: 127.0.0.1:2222
    test1: SSH username: vagrant
    test1: SSH auth method: private key
    test1:
    test1: Vagrant insecure key detected. Vagrant will automatically replace
    test1: this with a newly generated keypair for better security.
    test1:
    test1: Inserting generated public key within guest...
    test1: Removing insecure key from the guest if it's present...
    test1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> test1: Machine booted and ready!
==> test1: Configuring network adapters within the VM...
==> test1: Waiting for HGFS to become available...
==> test1: Enabling and configuring shared folders...
    test1: -- /home/brian/code/vagrant-sandbox: /vagrant
==> test1: Running provisioner: shell...
    test1: Running: inline script
    test1: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    test1:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    test1:     inet 127.0.0.1/8 scope host lo
    test1:        valid_lft forever preferred_lft forever
    test1:     inet6 ::1/128 scope host
    test1:        valid_lft forever preferred_lft forever
    test1: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    test1:     link/ether 00:0c:29:ba:21:c0 brd ff:ff:ff:ff:ff:ff
    test1:     inet 172.16.183.216/24 brd 172.16.183.255 scope global dynamic eth0
    test1:        valid_lft 1793sec preferred_lft 1793sec
    test1:     inet6 fe80::20c:29ff:feba:21c0/64 scope link
    test1:        valid_lft forever preferred_lft forever
    test1: 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    test1:     link/ether 00:0c:29:ba:21:ca brd ff:ff:ff:ff:ff:ff
    test1:     inet 192.168.33.99/24 brd 192.168.33.255 scope global eth1
    test1:        valid_lft forever preferred_lft forever
    test1:     inet6 fe80::20c:29ff:feba:21ca/64 scope link
    test1:        valid_lft forever preferred_lft forever
==> anothertest: Cloning VMware VM: 'bento/ubuntu-18.04'. This can take some time...
==> anothertest: Checking if box 'bento/ubuntu-18.04' version '201906.18.0' is up to date...
==> anothertest: Verifying vmnet devices are healthy...
==> anothertest: Preparing network adapters...
==> anothertest: Fixed port collision for 22 => 2222. Now on port 2200.
==> anothertest: Starting the VMware VM...
==> anothertest: Waiting for the VM to receive an address...
==> anothertest: Forwarding ports...
    anothertest: -- 22 => 2200
==> anothertest: Waiting for machine to boot. This may take a few minutes...
    anothertest: SSH address: 127.0.0.1:2200
    anothertest: SSH username: vagrant
    anothertest: SSH auth method: private key
    anothertest:
    anothertest: Vagrant insecure key detected. Vagrant will automatically replace
    anothertest: this with a newly generated keypair for better security.
    anothertest:
    anothertest: Inserting generated public key within guest...
    anothertest: Removing insecure key from the guest if it's present...
    anothertest: Key inserted! Disconnecting and reconnecting using new SSH key...
==> anothertest: Machine booted and ready!
==> anothertest: Configuring network adapters within the VM...
==> anothertest: Waiting for HGFS to become available...
==> anothertest: Enabling and configuring shared folders...
    anothertest: -- /home/brian/code/vagrant-sandbox: /vagrant
==> anothertest: Running provisioner: shell...
    anothertest: Running: inline script
    anothertest: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    anothertest:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    anothertest:     inet 127.0.0.1/8 scope host lo
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 ::1/128 scope host
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:07:fa:28 brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 172.16.183.217/24 brd 172.16.183.255 scope global dynamic eth0
    anothertest:        valid_lft 1794sec preferred_lft 1794sec
    anothertest:     inet6 fe80::20c:29ff:fe07:fa28/64 scope link
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:07:fa:32 brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 192.168.33.100/24 brd 192.168.33.255 scope global eth1
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 fe80::20c:29ff:fe07:fa32/64 scope link
    anothertest:        valid_lft forever preferred_lft forever

@briancain let me go deeper: when you define a hash of per-VM hashes, each VM with custom parameters, and you choose to start only one of them (one that does not match the first element of the hash), the behaviour we see after upgrading from Vagrant 2.2.4 to 2.2.5 is that Vagrant (_wrongly_) uses the first VM hash to identify the parameters, instead of using the hash values for the key selected by the command vagrant up <vm-name>. The gist I pasted in the issue is the debug log of the command vagrant up development. If you take a close look inside that huge file, it should print only the hash values that match the label development, but it first prints the following info:

 ip is 192.168.33.10
 hostname is 192.168.33.10
 name is test

(ref: https://gist.github.com/lzecca78/c2019a65990da676285c6ba3a85e96c2#file-vagrant_debug_output-L156-L158)

only after some rows (and after waiting quite some time) does it print the right information: https://gist.github.com/lzecca78/c2019a65990da676285c6ba3a85e96c2#file-vagrant_debug_output-L301-L304

but with Vagrant 2.2.4 and previous releases this problem didn't exist at all.
The consequences are not only the stdout changing twice, but the following:

  1. before starting the provisioning, Vagrant tries twice to make the SSH connection (first to the wrong first hash's address, then to the right one)
  2. when I use configuration[:ip] (which should hold the development IP address) in the trigger that uses this parameter to run the NFS command at the end of provisioning, it hangs because it is using the value taken from the wrong first hash: https://gist.github.com/lzecca78/c2019a65990da676285c6ba3a85e96c2#file-vagrant_debug_output-L1529-L1533 and this is why I end up hitting Ctrl-C: it hangs forever, because it is using an IP belonging to a VM (in my case, the test VM) that was never created at all.

I can assure you that the same identical Vagrantfile never gave us this kind of behaviour with any of the previous releases of Vagrant.

It is crystal clear that some behaviour changed in the latest release: I haven't changed this Vagrantfile since release 2.1.1, and the only change I made was to upgrade Vagrant to 2.2.5. For this reason I opened this issue, because this problem never happened with the previous releases.
I will give you all the support you need to gather more information, if needed, to dig deeper into this problem, because I am 100% sure that something changed and the behaviour changed with it.

Hi @lzecca78 - Sorry, I think you might be confusing yourself with the puts statements. Those will get evaluated no matter what, even if you invoke a single guest. When Vagrant parses a Vagrantfile, it evaluates every line, similar to a Ruby script. A Vagrantfile is essentially just a special DSL Ruby program that Vagrant uses. So a Ruby construct like puts works just like it does in a Ruby program, whereas the Vagrant DSL constructs like config options store state internally, since they aren't native Ruby methods but something Vagrant defines.

Additionally, Vagrant doesn't do anything special when it comes to these Ruby constructs. When you iterate over a hash, that's Ruby itself, not Vagrant. This is the main reason why I think there might be something wrong with your Vagrantfile. Now, we did recently upgrade Ruby to a new version, so perhaps whatever you were doing worked in an older version of Ruby but behaves differently with the newer one. That would be surprising, though.

In this case for example, I took my same minimal Vagrantfile and printed puts conf[:ip]. Although I only invoked the second guest defined in the hash, both IPs in the hash were printed. Also, if you notice, my VM still gets the proper IP when it gets brought up:

brian@localghost:vagrant-sandbox % be vagrant up anothertest                                                                                     
192.168.33.99
192.168.33.100
Bringing machine 'anothertest' up with 'vmware_desktop' provider...
==> anothertest: Checking if box 'bento/ubuntu-18.04' version '201906.18.0' is up to date...
==> anothertest: Cloning VMware VM: 'bento/ubuntu-18.04'. This can take some time...
==> anothertest: Checking if box 'bento/ubuntu-18.04' version '201906.18.0' is up to date...
==> anothertest: Verifying vmnet devices are healthy...
==> anothertest: Preparing network adapters...
==> anothertest: Starting the VMware VM...
==> anothertest: Waiting for the VM to receive an address...
==> anothertest: Forwarding ports...
    anothertest: -- 22 => 2222
==> anothertest: Waiting for machine to boot. This may take a few minutes...
    anothertest: SSH address: 127.0.0.1:2222
    anothertest: SSH username: vagrant
    anothertest: SSH auth method: private key
    anothertest: Warning: Connection reset. Retrying...
    anothertest:
    anothertest: Vagrant insecure key detected. Vagrant will automatically replace
    anothertest: this with a newly generated keypair for better security.
    anothertest:
    anothertest: Inserting generated public key within guest...
    anothertest: Removing insecure key from the guest if it's present...
    anothertest: Key inserted! Disconnecting and reconnecting using new SSH key...
==> anothertest: Machine booted and ready!
==> anothertest: Configuring network adapters within the VM...
==> anothertest: Waiting for HGFS to become available...
==> anothertest: Enabling and configuring shared folders...
    anothertest: -- /home/brian/code/vagrant-sandbox: /vagrant
==> anothertest: Running provisioner: shell...
    anothertest: Running: inline script
    anothertest: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    anothertest:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    anothertest:     inet 127.0.0.1/8 scope host lo
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 ::1/128 scope host
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:2a:84:95 brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 172.16.183.223/24 brd 172.16.183.255 scope global dynamic eth0
    anothertest:        valid_lft 1794sec preferred_lft 1794sec
    anothertest:     inet6 fe80::20c:29ff:fe2a:8495/64 scope link
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:2a:84:9f brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 192.168.33.100/24 brd 192.168.33.255 scope global eth1
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 fe80::20c:29ff:fe2a:849f/64 scope link
    anothertest:        valid_lft forever preferred_lft forever

Please try my minimal Vagrantfile and see if it works for you. It should be similar to yours, but smaller, agreed? They both have hashes that define key-value pairs, where the key is the guest name and the value is the config for that given guest.

If my example is not like yours, then let's work to get them the same and see if we can figure out what is going on.

Also, I looked at your Vagrantfile again. Are you using the vagrant-triggers plugin, or the native triggers feature in Vagrant core? If it's core, then per my explanation above, this statement will not work like you think it might:

  config.trigger.before :up, :provision do |trigger|
    trigger.name = "upgrade or install librarian-puppet libs"
    trigger.info = "upgrade or install librarian-puppet libs"
    require 'fileutils'
    FileUtils.mkdir_p './environments/production/modules'
    trigger.on_error = :halt
    trigger.run  = {path: "bin/librarian-puppet.sh"}
  end

The FileUtils call will always run every time Vagrant parses your Vagrantfile, and this is true for the rest of your trigger blocks too, like this one:

config.trigger.after :provision,:up  do |trigger|
            trigger.only_on = "development"
            trigger.ignore = :halt
            trigger.name = "mount projects dir"
            trigger.info = "mount projects dir"
             retries = 0
             max_retries = 10
                begin
                  port_open?(configuration[:ip], 22)
                rescue Exception => e
                  if retries <= max_retries
                    retries += 1
                    sleep 2 ** retries
                    retry
                  else
                    raise "Timeout: #{e.message}"
                  end
                end
             if ! os_router.already_mounted?
               trigger.run = {inline: os_router.mount}
             end
         end

Something like that should be inside a ruby block: https://www.vagrantup.com/docs/triggers/usage.html#ruby-option

Thanks again @briancain! I am using the trigger feature present in Vagrant core, yes. So as far as I understand, the Vagrantfile had the expected behaviour almost incidentally. I guess something changed with the new Ruby version, so a problem that was incidentally hidden before has now emerged; it's the only explanation I can give for all this. I will try to modify the Vagrantfile according to the documentation. Do you have any tips to _convert_ my triggers the _right way_? Going back to your previous answer, I have always used a Ruby hash for declaring multi-VM Vagrantfiles, because it was easier than declaring a block with the specs each time. Probably my assumptions about the behaviour were wrong all along (I have used this kind of setup in Vagrant for roughly 5-6 years), and now this _unstable balance_ was broken by the latest Ruby update, I guess.
If you think the best practice for a multi-VM Vagrantfile is the one described here: https://www.vagrantup.com/docs/multi-machine/, I will convert everything I have into that format; although it is more verbose, I will only have to do it once.
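
The block-per-machine layout from those docs would look roughly like this (a sketch with placeholder guest names, adapted from the linked page):

Vagrant.configure("2") do |config|
  # one define block per guest, instead of iterating over a hash
  config.vm.define "web" do |web|
    web.vm.box = "bento/ubuntu-16.04"
  end

  config.vm.define "db" do |db|
    db.vm.box = "bento/ubuntu-16.04"
  end
end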

@lzecca78 - I think ultimately your hash structure should be just fine. I agree, considering you have so many guests that use the same config options but different values, the hash structure is probably the right way to go here, so I would keep that as is. I think what was actually going wrong here is your trigger definitions, which are relatively new to Vagrant. As I was showing above with my example, the hash iteration should work as expected. :pray:

Now onto the trigger issue... I am hoping it isn't too much work. Based on the two code blocks I extracted from your Vagrantfile earlier, you would simply need to do something like:

config.trigger.after :provision, :up do |trigger|
  trigger.only_on = "development"
  trigger.ignore = :halt
  trigger.name = "mount projects dir"
  trigger.info = "mount projects dir"
  trigger.ruby do |env,machine|
    retries = 0
    max_retries = 10
    begin
      port_open?(configuration[:ip], 22)
    rescue Exception => e
      if retries <= max_retries
        retries += 1
        sleep 2 ** retries
        retry
      else
        raise "Timeout: #{e.message}"
      end
    end
    if ! os_router.already_mounted?
      trigger.run = {inline: os_router.mount}
    end
  end
end

config.trigger.before :up, :provision do |trigger|
  trigger.name = "upgrade or install librarian-puppet libs"
  trigger.info = "upgrade or install librarian-puppet libs"
  trigger.ruby do |env,machine|
    require 'fileutils'
    FileUtils.mkdir_p './environments/production/modules'
  end
  trigger.on_error = :halt
  trigger.run  = {path: "bin/librarian-puppet.sh"}
end

I haven't actually tested that for correctness with your Vagrantfile, FYI, but the general idea is: any Ruby code that you wish to run in a trigger needs to live inside this option:

trigger.ruby do |env,machine|
  puts "Your awesome ruby code goes here!"
end

Now that puts statement and any other Ruby code will only be executed when the trigger is defined to run, rather than when the Vagrantfile gets parsed. Here's my example from earlier, but with a ruby trigger:

 {:test1 => {:ip => "192.168.33.99"},
   :anothertest => {:ip => "192.168.33.100"}
  }.each do |name, conf|
    config.vm.define name do |a|
      a.vm.box = "bento/ubuntu-18.04"
      a.vm.network "private_network", ip: conf[:ip]
      a.vm.provision :shell, inline:<<-SHELL
      ip addr
      SHELL
      a.trigger.after :up do |t|
        t.ruby do |env,machine|
          puts "the ip: #{conf[:ip]}"
        end
      end
    end
  end

And the result....

brian@localghost:vagrant-sandbox % be vagrant up anothertest                                             ±[●][master]
Bringing machine 'anothertest' up with 'vmware_desktop' provider...
==> anothertest: Cloning VMware VM: 'bento/ubuntu-18.04'. This can take some time...
==> anothertest: Checking if box 'bento/ubuntu-18.04' version '201906.18.0' is up to date...
==> anothertest: Verifying vmnet devices are healthy...
==> anothertest: Preparing network adapters...
==> anothertest: Starting the VMware VM...
==> anothertest: Waiting for the VM to receive an address...
==> anothertest: Forwarding ports...
    anothertest: -- 22 => 2222
==> anothertest: Waiting for machine to boot. This may take a few minutes...
    anothertest: SSH address: 127.0.0.1:2222
    anothertest: SSH username: vagrant
    anothertest: SSH auth method: private key
    anothertest:
    anothertest: Vagrant insecure key detected. Vagrant will automatically replace
    anothertest: this with a newly generated keypair for better security.
    anothertest:
    anothertest: Inserting generated public key within guest...
    anothertest: Removing insecure key from the guest if it's present...
    anothertest: Key inserted! Disconnecting and reconnecting using new SSH key...
==> anothertest: Machine booted and ready!
==> anothertest: Configuring network adapters within the VM...
==> anothertest: Waiting for HGFS to become available...
==> anothertest: Enabling and configuring shared folders...
    anothertest: -- /home/brian/code/vagrant-sandbox: /vagrant
==> anothertest: Running provisioner: shell...
    anothertest: Running: inline script
    anothertest: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    anothertest:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    anothertest:     inet 127.0.0.1/8 scope host lo
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 ::1/128 scope host
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:5b:32:42 brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 172.16.183.224/24 brd 172.16.183.255 scope global dynamic eth0
    anothertest:        valid_lft 1795sec preferred_lft 1795sec
    anothertest:     inet6 fe80::20c:29ff:fe5b:3242/64 scope link
    anothertest:        valid_lft forever preferred_lft forever
    anothertest: 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    anothertest:     link/ether 00:0c:29:5b:32:4c brd ff:ff:ff:ff:ff:ff
    anothertest:     inet 192.168.33.100/24 brd 192.168.33.255 scope global eth1
    anothertest:        valid_lft forever preferred_lft forever
    anothertest:     inet6 fe80::20c:29ff:fe5b:324c/64 scope link
    anothertest:        valid_lft forever preferred_lft forever
==> anothertest: Running action triggers after up ...
==> anothertest: Running trigger...
the ip: 192.168.33.100

You'll see at the end there that the Ruby code runs with the correct IP address from the hash.

@briancain I know we are going pretty off-topic, but I am trying to convert all my triggers the way you told me, and I am facing a really weird behaviour.

This is more or less the trigger:

        config.trigger.after :provision, :up do |trigger|
          trigger.only_on = "development"
          trigger.ignore = :halt
          trigger.name = "mount projects dir"
          trigger.ruby do |env,machine|
            if name == 'development'
              puts "name is #{name}"
              project_dir = '/opt/projects/Easywelfare'
              os_router = CommandExecutor.new(personal_files_dir, configuration[:ip], project_dir)

              retries = 0
              max_retries = 10
              begin
                port_open?(configuration[:ip], 22)
              rescue Exception => e
                if retries <= max_retries
                  retries += 1
                  sleep 2 ** retries
                  retry
                else
                  raise "Timeout: #{e.message}"
                end
              end
              if ! os_router.already_mounted?
                puts "mounting nfs"
                system(os_router.mount)
              end
            end
          end
        end

Analyzing it, I expected it to be triggered at the end, only for the VM called development (because of trigger.only_on), but what happens is that the trigger is executed n times, where n is the number of hashes in the Vagrantfile:

==> development: Notice: Applied catalog in 20.54 seconds
==> development: Configuring cache buckets...
==> development: Running action triggers after up ...
==> development: Running trigger: mount projects dir...
ip is 192.168.33.10
name is test
==> development: Running trigger: mount projects dir...
ip is 192.168.33.11
name is test1
==> development: Running trigger: mount projects dir...
ip is 192.168.33.12
name is ws
==> development: Running trigger: mount projects dir...
ip is 192.168.33.14
name is mail
==> development: Running trigger: mount projects dir...
ip is 192.168.33.16
name is monitoring
==> development: Running trigger: mount projects dir...
ip is 192.168.33.17
name is puppetserver
==> development: Running trigger: mount projects dir...
ip is 192.168.33.18
name is development
==> development: Running trigger: mount projects dir...
ip is 192.168.33.19
name is unit
==> development: Running trigger: mount projects dir...
ip is 192.168.33.19
name is development_test
==> development: Running trigger: mount projects dir...
ip is 192.168.33.20
name is development_test1
==> development: Running trigger: mount projects dir...

I don't know why, but this is the current behaviour. Is there something wrong?

Where are you printing the ip is ___? I don't see that in the code you shared. Also, if I remember right, that trigger was inside a guest definition, right? Try defining it as a guest-scoped trigger, rather than a global config trigger. So for any given trigger defined inside the guest, replace config with instance, since I believe that's what the local guest config option is called in your Vagrantfile.
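
In other words, something like this (a sketch, untested against your full Vagrantfile):

config.vm.define name,
  primary: configuration[:primary],
  autostart: configuration[:autostart] do |instance|

  # ... per-guest config as before ...

  # guest-scoped trigger: attached to the instance object, so it is registered
  # once for this guest rather than once for every entry of the hash
  instance.trigger.after :provision, :up do |trigger|
    trigger.only_on = "development"
    trigger.name = "mount projects dir"
    trigger.ruby do |env,machine|
      puts "name is #{name}"
      # ... mount logic from above ...
    end
  end
end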

@briancain thank you a lot! That was the main issue: once I replaced config with instance, everything works as expected! Thanks for the support, even though it wasn't related at all to the opened issue. I guess the issue can now be closed, because it emerged with 2.2.5 but was not caused by that version, only by a misconfiguration of the Vagrantfile. Thanks a lot again!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
