Vagrant: Ansible_local provisioner does not generate a proper inventory for all hosts in a multi-guest configuration

Created on 19 Jul 2017 · 5 Comments · Source: hashicorp/vagrant

I am using Windows 10 and Hyper-V. I want to set up an Elasticsearch cluster on three centos/7 guests plus an Ansible controller guest. The controller configures the other guests through the ansible_local provisioner; this is necessary because Ansible will not run from Windows.

The guests' network is configured by DHCP (as I understand it, there is currently no way for Vagrant to set the IP address of a guest in Hyper-V).

Vagrant version

Vagrant 1.9.7

Host operating system

Windows 10 Enterprise (Creators Update)

Guest operating system

centos/7 and hashicorp/precise64

Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  config.vm.define "centos-dev-1" do |machine|
    machine.vm.provider "hyperv" do |h|
      h.vmname = "centos-dev-1"
    end
  end

  config.vm.define "centos-dev-2" do |machine|
    machine.vm.provider "hyperv" do |h|
      h.vmname = "centos-dev-2"
    end
  end

  config.vm.define "centos-dev-3" do |machine|
    machine.vm.provider "hyperv" do |h|
      h.vmname = "centos-dev-3"
    end
  end

  config.vm.define "controller" do |machine|
    machine.vm.box = "hashicorp/precise64"

    machine.vm.provider "hyperv" do |h|
      h.vmname = "controller"
    end

    # copy the ssh private keys to the controller so ansible can log in to these machines
    machine.vm.provision "file", source: ".vagrant/machines/centos-dev-1/hyperv/private_key", destination: "/home/vagrant/machines/centos-dev-1.private_key"
    machine.vm.provision "file", source: ".vagrant/machines/centos-dev-2/hyperv/private_key", destination: "/home/vagrant/machines/centos-dev-2.private_key"
    machine.vm.provision "file", source: ".vagrant/machines/centos-dev-3/hyperv/private_key", destination: "/home/vagrant/machines/centos-dev-3.private_key"

    # chmod 600 the keyfiles, or else ssh will report an error
    # (a sketch of this script is shown after the Vagrantfile)
    machine.vm.provision "shell", path: "update_hostkeys.sh"

    # Run Ansible from the controller VM to setup controller dependencies
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "controller.yml"
      ansible.verbose = true
      ansible.limit = "all"
    end

    # Run Ansible from the controller VM to setup the elasticsearch hosts
    machine.vm.provision "ansible_local" do |ansible|
      ansible.groups = {
        "elasticsearch" => ["centos-dev-1", "centos-dev-2", "centos-dev-3"]
      }
      ansible.playbook = "elasticsearch.yml"
      ansible.verbose = true
      ansible.limit = "all"
    end
  end

config.vm.provider "hyperv" do |h|
  h.vm_integration_services = {
      guest_service_interface: true,
      heartbeat: true,
      key_value_pair_exchange: true,
      shutdown: true,
      time_synchronization: true,
      vss: true
  }
end

end
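The update_hostkeys.sh script referenced in the Vagrantfile is not included in the report. Going by the comment above it, a minimal sketch of what it presumably does (an assumption, not the reporter's actual script) would be:

#!/bin/sh
# Restrict permissions on the copied private keys; ssh refuses to use
# key files that are readable by other users.
chmod 600 /home/vagrant/machines/*.private_key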

Expected behavior

The controller should log in to the CentOS guests with Ansible and configure Elasticsearch.

Actual behavior

Ansible is unable to connect to the other guests because the generated inventory file does not contain the connection details for them.

The generated inventory file (in /tmp/vagrant-ansible/inventory/vagrant_ansible_local_inventory on the controller guest):

# Generated by Vagrant

centos-dev-1
centos-dev-2
centos-dev-3
controller ansible_connection=local

[elasticsearch]
centos-dev-1
centos-dev-2
centos-dev-3

Steps to reproduce

Running vagrant up configures the controller with the ansible_local provisioner first. This works, because the inventory file does contain an entry for the local host.

The next step, configuring the elasticsearch guests (centos-dev-1, 2 and 3), fails because the CentOS guests cannot be reached.
Ansible output:

==> controller: Running provisioner: ansible_local...
    controller: Running ansible-playbook...
cd /vagrant && PYTHONUNBUFFERED=1 ANSIBLE_NOCOLOR=true ansible-playbook --limit="all" --inventory-file=/tmp/vagrant-ansible/inventory -v elasticsearch.yml
Using /vagrant/ansible.cfg as config file

PLAY [elasticsearch] ***********************************************************

TASK [Gathering Facts] *********************************************************
fatal: [centos-dev-3]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname centos-dev-3: Temporary failure in name resolution\r\n", "unreachable": true}
fatal: [centos-dev-2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname centos-dev-2: Temporary failure in name resolution\r\n", "unreachable": true}
fatal: [centos-dev-1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname centos-dev-1: Temporary failure in name resolution\r\n", "unreachable": true}
        to retry, use: --limit @/vagrant/elasticsearch.retry

PLAY RECAP *********************************************************************
centos-dev-1               : ok=0    changed=0    unreachable=1    failed=0
centos-dev-2               : ok=0    changed=0    unreachable=1    failed=0
centos-dev-3               : ok=0    changed=0    unreachable=1    failed=0

Possible resolution

The ansible provisioner (which runs Ansible on the host) does generate an inventory file with the IP address of each guest. The ansible_local provisioner could generate the same output, the only exception being the controller guest, for which ansible_connection should be local.
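For illustration, an inventory generated along those lines might look like the following; the addresses are placeholders rather than values from a real run, and the key paths assume the file provisioners from the Vagrantfile above:

# Generated by Vagrant

centos-dev-1 ansible_ssh_host=192.168.1.101 ansible_ssh_private_key_file='/home/vagrant/machines/centos-dev-1.private_key'
centos-dev-2 ansible_ssh_host=192.168.1.102 ansible_ssh_private_key_file='/home/vagrant/machines/centos-dev-2.private_key'
centos-dev-3 ansible_ssh_host=192.168.1.103 ansible_ssh_private_key_file='/home/vagrant/machines/centos-dev-3.private_key'
controller ansible_connection=local

[elasticsearch]
centos-dev-1
centos-dev-2
centos-dev-3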

Possible workaround

It is possible with ansible.host_vars to add variables to the inventory items. Something like this would do the trick:

ansible.host_vars = {
        "centos-dev-1" => { "ansible_ssh_host" => "<get ip of centos-dev-1>", "ansible_ssh_port" => "22", "ansible_ssh_user" => "'vagrant'", "ansible_ssh_private_key_file" => "'/home/vagrant/machines/centos-dev-1.private_key'" },
        "centos-dev-2" => { "ansible_ssh_host" => "<get ip of centos-dev-2>", "ansible_ssh_port" => "22", "ansible_ssh_user" => "'vagrant'", "ansible_ssh_private_key_file" => "'/home/vagrant/machines/centos-dev-2.private_key'" },
        "centos-dev-3" => { "ansible_ssh_host" => "<get ip of centos-dev-3>", "ansible_ssh_port" => "22", "ansible_ssh_user" => "'vagrant'", "ansible_ssh_private_key_file" => "'/home/vagrant/machines/centos-dev-3.private_key'" }
      }

Replace the "<get ip of ...>" placeholders with the actual IP addresses of the guests.
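The placeholder IPs still have to come from somewhere. One possibility, assuming the CentOS guests are already running when the controller is provisioned, is a small helper in the Vagrantfile that shells out to vagrant ssh-config and extracts the HostName line. guest_ip below is a hypothetical helper, not part of Vagrant, and invoking vagrant from inside a Vagrantfile is slow, so treat it as a sketch only:

# Hypothetical helper: look up a running guest's address via `vagrant ssh-config`.
# Returns nil when the guest is not up yet, so callers should guard against that.
def guest_ip(name)
  output = `vagrant ssh-config #{name}`
  match = output.match(/HostName (\S+)/)
  match && match[1]
end

The host_vars entries above could then use, for example, "ansible_ssh_host" => guest_ip("centos-dev-1").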

Labels: documentation, provisioners/ansible_local, task-small

All 5 comments

This is easily resolved by using the vagrant-hostmanager plugin, which will update the local hosts file on each box with the correct entries. I use this myself with the ansible_local provisioner and it works well.
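For reference, a minimal sketch of enabling the plugin in this Vagrantfile could look like the following (the plugin has to be installed first with vagrant plugin install vagrant-hostmanager; treat it as an untested sketch, and note that with Hyper-V and DHCP a custom ip_resolver may also be needed):

Vagrant.configure("2") do |config|
  # let the plugin keep /etc/hosts on every guest up to date, so the
  # machine names (centos-dev-1, ...) resolve from the controller
  config.hostmanager.enabled = true
  config.hostmanager.manage_guest = true
  config.hostmanager.manage_host = false
  # ... machine definitions as in the Vagrantfile above ...
end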

@robkaandorp Thank you for the detailed feature request, which is very well described.

As @caveman-dick mentioned, such problems can be resolved with well-known Vagrant plugins like landrush (which provides an internal DNS service for your Vagrant guests) and/or vagrant-hostmanager.
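For comparison, a minimal sketch of enabling landrush might look like this (assuming vagrant plugin install landrush has been run; the TLD value is only an example):

Vagrant.configure("2") do |config|
  # landrush runs a small DNS server on the host and registers each guest,
  # so guests can resolve one another by hostname
  config.landrush.enabled = true
  config.landrush.tld = "vagrant.test"
  # each guest then needs a hostname under that TLD, e.g.
  # machine.vm.hostname = "centos-dev-1.vagrant.test"
end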

As IP-address management and visibility across Vagrant guests strongly depend on the provider(s) used, it is quite challenging to make the ansible_local provisioner capable of dealing with all possible situations. Hence the strategy of sticking to the guest hostnames and delegating name resolution to other tools.

However, this makes me realise that the current documentation only speaks about the "static inventory" alternative. I'll keep this issue open until the documentation is updated to mention the DNS/hosts plugin alternatives, but I don't plan to implement any patches in the provisioner. I hope this is okay for you.

@gildegoma thanks for introducing me to Landrush; hostmanager works well, but it is quite slow to update all the hosts files when there are a lot of machines in a cluster. In theory Landrush should be a lot quicker!

How did you handle the SSH keys in the autogenerated inventory for ansible_local? hostmanager solved the name resolution problem and the VMs can ping each other, but ansible still fails ssh logins.

I have the same problem here as @PatrickLang.
