Vagrant: Failed vagrant action prevents further actions by locking the machine

Created on 28 May 2014 · 8 comments · Source: hashicorp/vagrant

Host: Ubuntu 12.04 x64
Guest: CentOS 6.5 x64
Vagrant Version 1.6.2

Similar to #3741

The vagrant up command fails intermittently at the "Booting VM" stage with:

The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.

Attempting to then run vagrant reload results in:

INFO batch_action: Enabling parallelization by default.
 INFO batch_action: Disabling parallelization because provider doesn't support it: virtualbox
 INFO batch_action: Batch action will parallelize: false
 INFO batch_action: Starting action: #<Vagrant::Machine:0x00000003aed758> up {:destroy_on_error=>true, :parallel=>true, :provision_ignore_sentinel=>false, :provision_types=>nil}
 INFO machine: Calling action: up on provider VirtualBox (37ee2d6c-3883-40d7-a1aa-62d3319d7c7f)
 INFO environment: Acquired process lock: dotlock
 INFO environment: Released process lock: dotlock
 WARN environment: Process-lock in use: machine-action-b6e253cfd50dfb7274e5b853a2eac27c
 INFO environment: Running hook: environment_unload
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 2 hooks defined.
 INFO runner: Running action: #<Vagrant::Action::Builder:0x00000002a4d600>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<Vagrant::Errors::MachineActionLockedError: An action 'up' was attempted on the machine 'default',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.

If you believe this message is in error, please check the process
listing for any "ruby" or "vagrant" processes and kill them. Then
try again.>

ERROR vagrant: An action 'up' was attempted on the machine 'default',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.

If you believe this message is in error, please check the process
listing for any "ruby" or "vagrant" processes and kill them. Then
try again.
ERROR vagrant: /opt/vagrant/embedded/gems/gems/vagrant-1.6.2/lib/vagrant/machine.rb:176:in `rescue in action'
/opt/vagrant/embedded/gems/gems/vagrant-1.6.2/lib/vagrant/machine.rb:149:in `action'
/opt/vagrant/embedded/gems/gems/vagrant-1.6.2/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'

Stopping the ruby process will then allow the vagrant reload call to proceed.
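The recovery described above can be sketched as a short shell session (a sketch for a POSIX host; the PID in any real listing will of course differ):

```shell
# A stale `vagrant up` run is an ordinary Ruby process, so it shows up in ps.
# The bracketed character class stops grep from matching its own command line.
PATTERN='[v]agrant|[r]uby'
ps -ef | grep -E "$PATTERN" || true   # empty output means no stale process

# Once the offending PID is identified from the listing, kill it and retry:
#   kill -9 <PID>
#   vagrant reload
```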

Thanks


All 8 comments

Why is the other Ruby process still running? This is working as intended if there is another Ruby process still running. We've seen this behavior with plugins that misbehave (generally by forking). Are you running with no plugins?

I just tried and couldn't repro this on Mac or Windows, so I'm going to close, but if you give me more info and maybe a repro script then I'll for sure fix this.
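One quick way to test the no-plugins hypothesis is to inspect the plugin set and temporarily remove a suspect with Vagrant's plugin subcommands (a sketch; guarded so it is a no-op on hosts without Vagrant, and vagrant-notify here is only an example of a plugin to test):

```shell
# List installed plugins; a plugin that forks can leave a child Ruby
# process holding the machine lock after the parent exits.
HAVE_VAGRANT=$(command -v vagrant || true)
if [ -n "$HAVE_VAGRANT" ]; then
  vagrant plugin list
  # Temporarily remove a suspect plugin, then retry the failing action:
  # vagrant plugin uninstall vagrant-notify
  # vagrant up
fi
```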

Thanks for looking at this.

Installed vagrant plugins:

vagrant-login (1.0.1, system)
vagrant-notify (0.4.0)
vagrant-share (1.0.1, system)
vagrant-vbguest (0.10.0)

The Vagrantfile uses Puppet for provisioning and comes from www.puphpet.com:

require 'yaml'

dir = File.dirname(File.expand_path(__FILE__))

configValues = YAML.load_file("#{dir}/puphpet/config.yaml")
data = configValues['vagrantfile-local']

Vagrant.configure("2") do |config|
  config.vm.box = "#{data['vm']['box']}"
  config.vm.box_url = "#{data['vm']['box_url']}"

  if data['vm']['hostname'].to_s != ''
    config.vm.hostname = "#{data['vm']['hostname']}"
  end

  if data['vm']['network']['private_network'].to_s != ''
    config.vm.network "private_network", ip: "#{data['vm']['network']['private_network']}"
  end

  data['vm']['network']['forwarded_port'].each do |i, port|
    if port['guest'] != '' && port['host'] != ''
      config.vm.network :forwarded_port, guest: port['guest'].to_i, host: port['host'].to_i
    end
  end

  data['vm']['synced_folder'].each do |i, folder|
    if folder['source'] != '' && folder['target'] != ''
      nfs = (folder['nfs'] == "true") ? "nfs" : nil
      if nfs == "nfs"
        config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}", type: nfs
      else
        config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}", type: nfs,
          group: 'www-data', user: 'www-data', mount_options: ["dmode=775", "fmode=764"]
      end
    end
  end

  config.vm.usable_port_range = (10200..10500)

  if data['vm']['chosen_provider'].empty? || data['vm']['chosen_provider'] == "virtualbox"
    ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'

    config.vm.provider :virtualbox do |virtualbox|
      data['vm']['provider']['virtualbox']['modifyvm'].each do |key, value|
        if key == "memory"
          next
        end

        if key == "natdnshostresolver1"
          value = value ? "on" : "off"
        end

        virtualbox.customize ["modifyvm", :id, "--#{key}", "#{value}"]
      end

      virtualbox.customize ["modifyvm", :id, "--memory", "#{data['vm']['memory']}"]
    end
  end

  if data['vm']['chosen_provider'] == "vmware_fusion" || data['vm']['chosen_provider'] == "vmware_workstation"
    ENV['VAGRANT_DEFAULT_PROVIDER'] = (data['vm']['chosen_provider'] == "vmware_fusion") ? "vmware_fusion" : "vmware_workstation"

    config.vm.provider "vmware_fusion" do |v|
      data['vm']['provider']['vmware'].each do |key, value|
        if key == "memsize"
          next
        end

        v.vmx["#{key}"] = "#{value}"
      end

      v.vmx["memsize"] = "#{data['vm']['memory']}"
    end
  end

  ssh_username = !data['ssh']['username'].nil? ? data['ssh']['username'] : "vagrant"

  config.vm.provision "shell" do |s|
    s.path = "puphpet/shell/initial-setup.sh"
    s.args = "/vagrant/puphpet"
  end
  config.vm.provision "shell" do |kg|
    kg.path = "puphpet/shell/ssh-keygen.sh"
    kg.args = "#{ssh_username}"
  end
  config.vm.provision :shell, :path => "puphpet/shell/update-puppet.sh"

  config.vm.provision :puppet do |puppet|
    puppet.facter = {
      "ssh_username"     => "#{ssh_username}",
      "provisioner_type" => ENV['VAGRANT_DEFAULT_PROVIDER'],
      "vm_target_key"    => 'vagrantfile-local',
    }
    puppet.manifests_path = "#{data['vm']['provision']['puppet']['manifests_path']}"
    puppet.manifest_file = "#{data['vm']['provision']['puppet']['manifest_file']}"
    puppet.module_path = "#{data['vm']['provision']['puppet']['module_path']}"

    if !data['vm']['provision']['puppet']['options'].empty?
      puppet.options = data['vm']['provision']['puppet']['options']
    end
  end

  config.vm.provision :shell, :path => "puphpet/shell/execute-files.sh"
  config.vm.provision :shell, :path => "puphpet/shell/important-notices.sh"

  if File.file?("#{dir}/puphpet/files/dot/ssh/id_rsa")
    config.ssh.private_key_path = [
      "#{dir}/puphpet/files/dot/ssh/id_rsa",
      "#{dir}/puphpet/files/dot/ssh/insecure_private_key"
    ]
  end

  if !data['ssh']['host'].nil?
    config.ssh.host = "#{data['ssh']['host']}"
  end
  if !data['ssh']['port'].nil?
    config.ssh.port = "#{data['ssh']['port']}"
  end
  if !data['ssh']['username'].nil?
    config.ssh.username = "#{data['ssh']['username']}"
  end
  if !data['ssh']['guest_port'].nil?
    config.ssh.guest_port = data['ssh']['guest_port']
  end
  if !data['ssh']['shell'].nil?
    config.ssh.shell = "#{data['ssh']['shell']}"
  end
  if !data['ssh']['keep_alive'].nil?
    config.ssh.keep_alive = data['ssh']['keep_alive']
  end
  if !data['ssh']['forward_agent'].nil?
    config.ssh.forward_agent = data['ssh']['forward_agent']
  end
  if !data['ssh']['forward_x11'].nil?
    config.ssh.forward_x11 = data['ssh']['forward_x11']
  end
  if !data['vagrant']['host'].nil?
    config.vagrant.host = data['vagrant']['host'].gsub(":", "").intern
  end

end

I can consistently reproduce this on my system: each time the Vagrant box fails to maintain the SSH connection during boot, it exits with the error:

The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.

@appsol this seems to be a known issue with vagrant-notify. More information can be found on GH-3725, and updates will be posted to https://github.com/fgrehm/vagrant-notify/issues/17

@fgrehm thanks for the info. It looks like this is a *nix issue, which would explain why @mitchellh can't reproduce it on his systems.
I'm watching your repo so if you get a fix and post it there I'd be really grateful.

I had a similar issue with the AWS plugin while running "vagrant up --provider=aws":

$ vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
An action 'up' was attempted on the machine 'default',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.

If you believe this message is in error, please check the process
listing for any "ruby" or "vagrant" processes and kill them. Then
try again.

The fix was to open Task Manager and kill the ruby process.

Hi,
I ran into a similar issue when trying to run "vagrant up --provider=lxc".
I fixed it simply by checking whether any ruby/vagrant process was already running and killing it.
FYI, in my case the "vagrant up" process I had started before the current run was still running.
$ ps -ef | grep ruby
$ ps -ef | grep vagrant

$ kill -9

Then rerun "vagrant up --provider=lxc".

Hope this helps someone with a similar issue in the future.

killall vagrant-notify-server
vagrant plugin uninstall vagrant-notify
vagrant reload

helped me get vagrant up working again.
Vagrant 1.8.1
Host: Ubuntu 16.04
Guest: Ubuntu 14.04

The vagrant global-status --prune command worked for me to remove invalid old entries left behind after a crash.
In case of the AWS plugin, see: #428, #457
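That cleanup can be sketched as follows (assumes a Vagrant version whose global-status supports --prune; guarded so it is a no-op without Vagrant on PATH):

```shell
# `global-status` reads Vagrant's machine index; `--prune` drops entries
# whose underlying VMs no longer exist (e.g. after a crash).
HAVE_VAGRANT=$(command -v vagrant || true)
if [ -n "$HAVE_VAGRANT" ]; then
  vagrant global-status          # list every machine this user's Vagrant knows about
  vagrant global-status --prune  # remove stale/invalid entries
fi
```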
