Vagrant: Too many authentication failures for vagrant

Created on 7 Nov 2017 · 5 comments · Source: hashicorp/vagrant

Vagrant version

2.0.0

Host operating system

Arch Linux

Guest operating system

CentOS 6.9

Vagrantfile

CPUS = 2
MEMORY = 4096

boxes = {
    'test1' => '10.19.0.10',
    'test2' => '10.19.0.12',
    'test3' => '10.19.0.14',
    'test4' => '10.19.0.16',
    'test5' => '10.19.0.18',
    'test6' => '10.19.0.20',
    'test7' => '10.19.0.22',
}

Vagrant.configure("2") do |config|

    config.vm.box = "centos/6"
    config.vm.synced_folder '.', '/vagrant', disabled: true

    VAGRANT_ROOT = File.dirname(__FILE__)
    ANSIBLE_RAW_SSH_ARGS = []

    count = boxes.size
    boxes.each do |host, ip|

        # One IdentityFile option is appended per machine, so every SSH
        # connection opened by the ansible-playbook run below offers all
        # seven keys, in this order.
        ANSIBLE_RAW_SSH_ARGS << "-o IdentityFile=#{VAGRANT_ROOT}/.vagrant/machines/#{host}/virtualbox/private_key"

        config.vm.define "#{host}" do |node|

            node.vm.hostname = "#{host}"
            node.vm.network :private_network, ip: "#{ip}"

            node.vm.provider :virtualbox do |vb|
                vb.name = node.vm.hostname
            end

            # Ansible provisioner: install_new_host.yml
            count -= 1
            if count == 0
                node.vm.provision :ansible do |ansible|
                    ansible.limit = [ "cluster", "localhost" ]
                    ansible.playbook = "play.yml"
                    ansible.inventory_path = "hosts"
                    ansible.become = true
                    ansible.become_user = "root"
                    ansible.raw_ssh_args = ANSIBLE_RAW_SSH_ARGS
                    ansible.verbose = "vvvv"
                end
            end
        end
    end

    config.vm.provider :virtualbox do |vb|

        vb.gui = false

        # General
        vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
        vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]

        # System
        vb.memory = MEMORY
        vb.customize ["modifyvm", :id, "--boot1", "disk"]
        vb.customize ["modifyvm", :id, "--boot2", "dvd"]
        vb.customize ["modifyvm", :id, "--boot3", "none"]
        vb.customize ["modifyvm", :id, "--boot4", "none"]
        vb.customize ["modifyvm", :id, "--chipset", "ich9"]
        vb.customize ["modifyvm", :id, "--mouse", "ps2"]
        vb.customize ["modifyvm", :id, "--apic", "on"]
        vb.customize ["modifyvm", :id, "--rtcuseutc", "on"]
        vb.cpus = CPUS
        vb.customize ["modifyvm", :id, "--pae", "on"]
        vb.customize ["modifyvm", :id, "--paravirtprovider", "default"]
        vb.customize ["modifyvm", :id, "--hwvirtex", "on"]
        vb.customize ["modifyvm", :id, "--vtxvpid", "on"]
        vb.customize ["modifyvm", :id, "--vtxux", "on"]

        # Display
        vb.customize ["modifyvm", :id, "--vram", "16"]
        vb.customize ["modifyvm", :id, "--accelerate3d", "on"]

        # Storage
        vb.customize ["storagectl", :id, "--name", "IDE", "--controller", "ICH6", "--hostiocache", "on"]
        vb.customize ["storageattach", :id, "--storagectl", "IDE", "--port", "0", "--device", "0", "--type", "hdd", "--nonrotational", "on"]

        # Audio
        vb.customize ["modifyvm", :id, "--audio", "none"]

        # Network
        vb.customize ["modifyvm", :id, "--nictype1", "82540EM"]

        # USB
        vb.customize ["modifyvm", :id, "--usb", "on", "--usbehci", "on"]
    end

    # Shell provisioner: Enable remote root access
    config.vm.provision :shell, inline: <<-SHELL
        sed -i -e 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
        service sshd reload
        echo "--> Server reporting for duty."
    SHELL
end
# vim: ft=ruby:ai:ts=4:sw=4:sts=4

Debug output

https://gist.github.com/pnedkov/bdb5021a50ae5bc93d56bba411efaf47

Expected behavior

Vagrant should be able to login to all VMs including test6.

Actual behavior

The login to test6 always fails!

fatal: [test6]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.19.0.20' (RSA) to the list of known hosts.\r\nReceived disconnect from 10.19.0.20 port 22:2: Too many authentication failures for vagrant\r\nDisconnected from 10.19.0.20 port 22\r\n", "unreachable": true}

Steps to reproduce

  1. Place play.yml, hosts, and Vagrantfile in the same directory
    play.yml:
---
- name: Test play
  hosts: cluster
  gather_facts: true
  become: true
  tasks:
    - name: Test task
      debug:
        msg: "Host: {{ ansible_hostname }} ({{ ansible_distribution }} {{ ansible_distribution_release }} {{ ansible_distribution_version }})"

hosts:

[cluster]
test1  ansible_host=10.19.0.10
test2  ansible_host=10.19.0.12
test3  ansible_host=10.19.0.14
test4  ansible_host=10.19.0.16
test5  ansible_host=10.19.0.18
test6  ansible_host=10.19.0.20
test7  ansible_host=10.19.0.22
  2. vagrant up

Additional information

~/.ssh/config:

Host *  
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

~/.ssh/known_hosts is empty.

The test6 machine always fails after vagrant up! On the other hand, running the playbook by hand with
ansible-playbook play.yml -i hosts -k -u vagrant
works like a charm. Any idea what is wrong?

provisioners/ansible question

All 5 comments

I'm /cc'ing @gildegoma on this issue since he knows more about the Ansible stuff than I do...

Today, with a fresh set of eyes, I found two solutions, but I would like to hear what the community has to say.

  1. Increase MaxAuthTries in /etc/ssh/sshd_config on all VMs to a value equal to or greater than the number of VMs (boxes.length):
    config.vm.provision :shell, inline: <<-SHELL
        sed -i -e 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
        sed -i -e 's/^#MaxAuthTries.*/MaxAuthTries #{boxes.length}/g' /etc/ssh/sshd_config
        service sshd reload
        echo "--> Server reporting for duty."
    SHELL

Maybe a previous version of the centos/6 box had a larger default value for MaxAuthTries than the latest image, which would explain why I did not have this problem before. The arithmetic fits: ssh offers the IdentityFile keys in the order they were added, so when connecting to test6 the keys for test1 through test5 are all rejected first; counting the initial "none" request that sshd apparently also tallies, test6's own key would be the seventh attempt, one past OpenSSH's default MaxAuthTries of 6.

  2. Use ~/.vagrant.d/insecure_private_key for all VMs:
config.ssh.insert_key = false
ansible.raw_arguments = ["--private-key=~/.vagrant.d/insecure_private_key"]
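
For context, here is a minimal sketch of how solution 2 slots into the Vagrantfile above (same boxes hash; the provider customizations and the count-based gating of the Ansible run are omitted for brevity, so treat this as an outline rather than a drop-in file):

Vagrant.configure("2") do |config|

    config.vm.box = "centos/6"
    # Keep the single shared (insecure) key pair instead of generating
    # one key per machine on first boot
    config.ssh.insert_key = false

    boxes.each do |host, ip|
        config.vm.define "#{host}" do |node|
            node.vm.hostname = "#{host}"
            node.vm.network :private_network, ip: "#{ip}"

            node.vm.provision :ansible do |ansible|
                ansible.playbook = "play.yml"
                ansible.inventory_path = "hosts"
                ansible.limit = [ "cluster", "localhost" ]
                # One shared key replaces the whole ANSIBLE_RAW_SSH_ARGS list,
                # so ssh only ever offers a single identity per connection
                ansible.raw_arguments = ["--private-key=~/.vagrant.d/insecure_private_key"]
            end
        end
    end
end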

It does not look like a Vagrant issue after all.
If there is a better approach, please let me know. Obviously security is not my main concern, since these are test VMs behind NAT with a very short lifespan.

@pnedkov Your two solutions are fine. When using Ansible parallelism, I would recommend trying to reduce the number of SSH key pairs. I don't know how your "solution 1" performs, but I would personally try to share the same key pair across the managed VMs.

As you said, if security is not a concern, "solution 2" (config.ssh.insert_key = false) is the easiest way to go. Otherwise, you'll need to distribute or generate the key in a previous step.
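
If the shared insecure key is not acceptable, here is a hypothetical sketch of that distribution step (cluster_key is an assumed name; generate it once on the host with ssh-keygen -t rsa -f cluster_key -N ''):

Vagrant.configure("2") do |config|
    # Offer the custom key first and keep the insecure key as a fallback,
    # so the very first boot (before the public key is authorized) still works
    config.ssh.private_key_path = ["cluster_key", "~/.vagrant.d/insecure_private_key"]

    # Synced folders are disabled in this setup, so push the public key with
    # the file provisioner, then authorize it with a shell provisioner
    config.vm.provision :file, source: "cluster_key.pub", destination: "/tmp/cluster_key.pub"
    config.vm.provision :shell, inline: <<-SHELL
        install -d -m 700 -o vagrant -g vagrant /home/vagrant/.ssh
        cat /tmp/cluster_key.pub >> /home/vagrant/.ssh/authorized_keys
        chmod 600 /home/vagrant/.ssh/authorized_keys
        chown vagrant:vagrant /home/vagrant/.ssh/authorized_keys
    SHELL
end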

Note also that since Vagrant 1.7.3 (#5765), the generated inventory contains the ansible_ssh_private_key_file parameter for each VM. This allows Ansible parallel provisioning with config.ssh.insert_key = false. Do you really need your static inventory in your case?
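
For reference, that generated inventory is written under .vagrant/provisioners/ansible/inventory/ and looks roughly like this (exact variable names and forwarded ports depend on the Vagrant and Ansible versions; paths shortened here):

# Generated by Vagrant
test1 ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='.../machines/test1/virtualbox/private_key'
test2 ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='.../machines/test2/virtualbox/private_key'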

I propose to close this thread here, because questions like this are better discussed on the project mailing list (or on StackOverflow).

Thanks @gildegoma, I appreciate your prompt reply.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
