Vagrant: random "Forcing shutdown of VM" through provisioning process

Created on 5 Oct 2011 · 90 comments · Source: hashicorp/vagrant

If I run "vagrant up" with a puppet provisioner configured, the provisioning process sometimes gets aborted with the following message. I think this happens more often when I run puppet in the foreground, which is done with puppet.options = ["--test"] in the Vagrantfile.

[avm01] Forcing shutdown of VM...
[avm01] Destroying VM and associated drives...
[avm01] Destroying unused networking interface...
/usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:22:in `select': closed stream (IOError)
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:22:in `io_select'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/transport/packet_stream.rb:73:in `available_for_read?'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/transport/packet_stream.rb:85:in `next_packet'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/transport/session.rb:169:in `block in poll_message'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/transport/session.rb:164:in `loop'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/transport/session.rb:164:in `poll_message'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:451:in `dispatch_incoming_packets'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:213:in `preprocess'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:197:in `process'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:161:in `block in loop'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:161:in `loop'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/session.rb:161:in `loop'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/net-ssh-2.1.4/lib/net/ssh/connection/channel.rb:269:in `wait'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/ssh/session.rb:55:in `sudo!'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/provisioners/puppet_server.rb:47:in `block in run_puppetd_client'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/ssh.rb:119:in `execute'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/provisioners/puppet_server.rb:46:in `run_puppetd_client'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/provisioners/puppet_server.rb:24:in `provision!'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/provision.rb:22:in `block in call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/provision.rb:20:in `each'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/provision.rb:20:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/forward_ports.rb:95:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/clear_forwarded_ports.rb:21:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/clean_machine_folder.rb:17:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/check_guest_additions.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/match_mac_address.rb:21:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/import.rb:26:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/vm/check_box.rb:23:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/warden.rb:30:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action/builder.rb:120:in `call'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action.rb:134:in `block (2 levels) in run'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/util/busy.rb:19:in `busy'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action.rb:134:in `block in run'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/environment.rb:364:in `block in lock'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/environment.rb:354:in `open'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/environment.rb:354:in `lock'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/action.rb:133:in `run'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/vm.rb:140:in `up'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/command/up.rb:13:in `block in execute'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/command/up.rb:8:in `each'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/command/up.rb:8:in `execute'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:124:in `block in invoke_all'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:124:in `each'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:124:in `map'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:124:in `invoke_all'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/group.rb:226:in `dispatch'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:109:in `invoke'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/lib/vagrant/cli.rb:45:in `block in register'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor.rb:263:in `dispatch'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/thor-0.14.6/lib/thor/base.rb:389:in `start'
    from /usr/local/Cellar/ruby/1.9.2-p290/lib/ruby/gems/1.9.1/gems/vagrant-0.8.7/bin/vagrant:21:in `<top (required)>'
    from /usr/local/Cellar/ruby/1.9.2-p290/bin/vagrant:19:in `load'
    from /usr/local/Cellar/ruby/1.9.2-p290/bin/vagrant:19:in `<main>'

I think this doesn't fail if I create the VM with "vagrant up" while no provisioner is configured in the Vagrantfile and only run "vagrant provision" afterwards.

I run Ruby 1.9.2 with Vagrant 0.8.7 on Mac OS 10.7.1 in iTerm 2, in case that is relevant for the "closed stream (IOError)" in the first line of the Ruby error. But I have also heard from a colleague who runs Ubuntu that he has the same issue. He uses puppet files from his local machine while I use a remote puppet server. From that point of view, I find it hard to believe this is anything but a general problem, rather than something specific to Mac OS, iTerm, or my use of a remote puppet server.

This is my Vagrantfile btw.
    Vagrant::Config.run do |config|
      config.vm.define :avm01 do |config|
        config.vm.box = "squeeze-pxe-vbox4.1.4-v3"
        config.vm.network "33.33.33.10"
        config.vm.customize do |vm|
          vm.memory_size = 1024
        end
      end
      config.vm.provision :puppet_server do |puppet|
        puppet.puppet_server = "puppet.fq.dn"
        puppet.puppet_node = "avm01.vagrant.internal"
        puppet.options = ["--test"]
      end
    end

bug

All 90 comments

I run into the same error, but with me it always happens during a long running process (in my case running pip install requirements on a guest).

[default] [Wed, 19 Oct 2011 05:56:39 -0700] INFO: execute[install requirements] sh(. /home/vagrant/dev-env/bin/activate && pip install -r /vagrant/requirements.txt)
 : stdout
[default] [Wed, 19 Oct 2011 05:58:41 -0700] ERROR: execute[install requirements] (main::dev line 8) has had an error
 : stdout
[default] Forcing shutdown of VM...
[default] Destroying VM and associated drives...
/usr/lib/ruby/gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:33:in `select': closed stream (IOError)

I can re-run it without making any changes and it'll run just fine. Appears to be completely random. I'm wondering if it has something to do with available memory or network problems.

FWIW, I ran into the same issue on long-running actions. In my case it is running chef. I too can rerun and it will continue.

[default] [Wed, 19 Oct 2011 17:02:41 +0200] INFO: Storing updated cookbooks/apache2/recipes/mod_proxy.rb in the cache.
: stdout
[default] [Wed, 19 Oct 2011 17:03:01 +0200] ERROR: Running exception handlers
: stdout
[default] Forcing shutdown of VM...
[default] Destroying VM and associated drives...
/Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:33:in `select': closed stream (IOError)
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:33:in `io_select'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:32:in `synchronize'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:32:in `io_select'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/transport/packet_stream.rb:73:in `available_for_read?'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/transport/packet_stream.rb:85:in `next_packet'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/transport/session.rb:169:in `poll_message'
from /Library/Ruby/Gems/1.8/gems/net-ssh-2.1.4/lib/net/ssh/transport/session.rb:164:in `loop'

I also have modified my ssh parameters to be as follows with no benefit:

  config.ssh.max_tries = 100
  config.ssh.timeout = 600

I'm seeing this with Chef as well:

...
[default] [Thu, 27 Oct 2011 09:26:03 +0000] INFO: execute[upgrade-pear-with-pear] ran successfully
: stdout
[default] Forcing shutdown of VM...
[default] Destroying VM and associated drives...
[default] Destroying unused networking interface...
/usr/local/rvm/gems/ruby-1.9.2-p290/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:22:in `select': closed stream (IOError)
...

Everything seems to be going well, then it stops and tears down the VM.

I've seen this as well. Not sure yet how to work around this but marking as a bug.

Looks like I can do client side keep-alive on connections that take a long time: http://net-ssh.github.com/net-ssh/classes/Net/SSH/Connection/Session.html#M000091

Unfortunately I don't think this will work in this case because we're waiting on a command to finish... Hm.

Also looks like there is this keep-alive setting, but I'm not sure if I can set it for net-ssh: http://www.fettesps.com/enable-ssh-keep-alive/
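
To make the idea concrete, here is a generic sketch of a client-side keep-alive loop: a background thread pings periodically while the foreground blocks on the long-running command. This is not Vagrant's or net-ssh's actual code; `keepalive` stands in for whatever no-op the transport supports (with net-ssh, presumably something sent on the session).

```ruby
# Generic sketch of a client-side keep-alive: ping in a background
# thread while the foreground blocks on a long-running command.
# `keepalive` is any callable; the real no-op to send over SSH is an
# assumption and would depend on the transport library.
def with_keepalive(interval, keepalive)
  stop = false
  pinger = Thread.new do
    loop do
      sleep interval
      break if stop
      keepalive.call
    end
  end
  result = yield   # the long-running, blocking work
  stop = true
  pinger.join
  result
end
```

The snag mentioned above still applies: since the main thread is blocked waiting on the command to finish, the pings have to come from a separate thread (or from the transport's own event loop), as sketched here.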

Same here:

[default] Forcing shutdown of VM...
[default] Destroying VM and associated drives...
/Users/dikbrouwer/.rvm/gems/ruby-1.9.2-p290/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:22:in `select': closed stream (IOError)
    from /Users/dikbrouwer/.rvm/gems/ruby-1.9.2-p290/gems/net-ssh-2.1.4/lib/net/ssh/ruby_compat.rb:22:in `io_select'
    from /Users/dikbrouwer/.rvm/gems/ruby-1.9.2-p290/gems/net-ssh-2.1.4/lib/net/ssh/transport/packet_stream.rb:73:in `available_for_read?'

Any suggestions for a temporary workaround? I'm unable to provision the VM at all as it happens every time for me (Ubuntu 11.10 if that matters, I thought I wasn't seeing it before).

same here. This is a blocker...

Can we disable the forced shutdown and destroy during the provision step? Where would I look in the code to do this?

Interesting. Can anyone else that can consistently cause this problem reproduce @chalfant's results?

I can, it happens on every run (with ruby 1.9.2; where did Gregor's comment re: ruby 1.8.7+ go?)

@mitchellh Was that with reference to the comment about this issue being related to ruby 1.8.7 (which now appears to be removed)?

I'm seeing this with ruby version "1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin10.8.0]" so not a Ruby 1.8.7 problem as far as I can tell.

Ah, it must have been deleted. :)

Yeah I figured this would affect every version. Sigh. I'm still not sure how to fix this since it appears to be a Net::SSH bug but Net::SSH is in maintenance mode and only accepting pull requests, and I'm not sure where to begin in there. :-\ I'll continue thinking.

yes, I was too slow in deleting. I noticed I had made some changes that would cause my command to generate some output from time to time. In that case, the error does not happen, though output is only shown at the end.

So I think I may have a way to fix this. Can anyone post a Vagrantfile/puppet recipe that can consistently show this bug? I'm having trouble reproducing it.

I did not try this out, but something like

    execute "sleep" do
      command "sleep 3600"
    end

should do it.

@mitchellh We can reproduce often (although not consistently) but it's on a chef provision that connects to our chef server. It usually fails when trying to install mysql via apt-get on Ubuntu 10.04.

For me, compiling node.js often fails (although it's part of a much
larger recipe). A large pip_install requirements.txt will also do it.


This reproduces it every single time for me on Ruby 1.8.7 and vagrant 0.8.10 (I also tried with vagrant 0.9.2 and I think it happens there too).

    git clone -n git://github.com/mozilla/zamboni.git
    cd zamboni
    git checkout 0bbef8533575875e7240c142957e8d09a797ee26
    git submodule update --init --recursive
    vagrant up

I am desperately trying to find a workaround for this :( As others have commented, the exception is triggered during pip install -r requirements/compiled.txt which often takes a long time.

We've worked around this by not relying on vagrant up to perform provisioning.

After vagrant up has created the box and right as it starts to provision our box with chef-client, I use ctrl+c to bail out of vagrant up. Then I run vagrant ssh to get into the box and run chef-client manually from there. This circumvents the troublesome timeout in our case since chef is not relying on vagrant to establish connections anymore.

Ugh. This is pretty hard to work around. I packaged up my provisioned machine which cut down on long running commands but we need our users to periodically run database migrations via puppet. These can run long.

Any ideas on how to fix the SSH timeout?

At the very least is there a way to rescue the IOError so that the command can fail without causing the VM to shut down?

Here are some more clues. I think it's more than just an SSH timeout. It appears that the VM is actually losing all connectivity to the network.

To work around the problem I installed screen and set up /home/vagrant/.profile so that it starts a screen session on login and runs a script to download everything it needs. I see the same kind of error, where it says

Connection to 127.0.0.1 closed by remote host.

When I ssh back in to resume the screen session, I see an exception in my Python script (that was running) showing that it lost connectivity. This has happened several times now.

If this were merely some kind of SSH timeout then in theory the screen session would have continued running in the background. Also, I went nuts and reconfigured /etc/ssh/sshd_config so that it could not possibly time out. That did not have any effect.

I can reproduce this consistently. How can I help debug the source of the problem? I'm on IRC, freenode and irc.mozilla.org as kumar

Possibly related:
https://www.virtualbox.org/ticket/6559

Though I'll note I'm using VirtualBox 4.1.4 and am still having this issue.

I'm w/ @kumar303 in that it doesn't seem to be just an ssh issue.

I checked ulimit -a and noted that it had 1024 files-per-proc, so I thought that might be being hit - no, it doesn't appear so, nor OOM. Quite frustrating :-/

Is there any update on this issue? I'm seeing the same thing running a chef_client provision. I lose connection right around the time that my cookbooks have finished downloading.

After a bunch of research, my top candidate for the cause of this bug is this:

https://www.virtualbox.org/ticket/4870

It's an old ticket, but matches what we seem to be experiencing to a T.

Supposedly fixed in 3.0.8 and I'm running 4.1.4. They closed the ticket because nobody responded to the most recent request for info, but it seems unlikely it was actually fixed. Hmmph.

At last, I found other people with the same issue!

I too hit this with a large pip install. My suspicion is that the quantity of outbound connections chokes VB's NAT connection tracking. The only other way I've reproduced it is by _really_ hammering DNS lookups. If the OP's provisioner is pulling in lots of packages from external sources then it could produce the same effect.

I gave up trying to register for an "Oracle account" and just put findings in a Gist. I've got some host and guest network captures kicking around too.

https://gist.github.com/1716454

Ah, this is super helpful! I think in the future the best thing would be to allow SSH over non-NAT interfaces.

@mitchellh If I understand correctly, the NAT interface is always set up for vagrant to do the ssh into, and any calls to config.vm.network make additional interfaces. And since the NAT interface is eth0, it's being used for the provisioning.

Would it be reasonable as a short-term workaround to have the NAT interface be the last interface rather than the first one?

You can try working around it with a different adapter type. I've had some success with:

    config.vm.customize(["modifyvm", :id, "--nictype1", "Am79C973"])

Is everyone who is having trouble here using ubuntu? I am.

Me too

Actually, heh, a better question is: Does anyone hit the symptoms of this bug in a _non-ubuntu_ VM? In talking with others I found someone with a CentOS VM who has never seen this.

Yeah, the actual bug report I was coming up with was made, whilst provisioning a Debian VM.

CentOS guests on Ubuntu hosts, here.

FWIW, dcarley's solution mentioned above worked for me on an Ubuntu Lucid guest box.

Why? No idea. :)

@mattd @dcarley Where exactly are you putting the line?

I'm assuming it must be within
Vagrant::Config.run do |config| / end

but no matter where I put it in my Vagrantfile, vagrant up hangs trying to talk to the VM:

[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
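
For anyone else wondering about placement: here is a minimal sketch of where that customize line sits in a 0.x/1.0-era Vagrantfile. The box name is an illustrative placeholder, not from this thread.

```ruby
Vagrant::Config.run do |config|
  config.vm.box = "lucid32"  # placeholder box name
  # The workaround discussed above: emulate a different NIC for the NAT adapter
  config.vm.customize(["modifyvm", :id, "--nictype1", "Am79C973"])
end
```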

Looks like a lot of DNS requests alone are enough to reproduce it.

I've created a Vagrant environment to reproduce it. There's also an example of how modifyvm is being used:

https://github.com/dcarley/vbox_natbug

@dcarley Great information. Thanks for putting this together!

Any updates? I updated my vagrant version to 1.0.3, but the error "/var/lib/gems/1.8/gems/net-ssh-2.2.2/lib/net/ssh/ruby_compat.rb:33:in `select': closed stream (IOError)" still appears during the pip install step.

I'm seeing the same issue as @EslamElHusseiny. I get to the point where I install my pip requirements via pip install -r and it dies. In my case:

    /Users/brad/.rvm/gems/ruby-1.9.3-p125/gems/net-ssh-2.2.2/lib/net/ssh/ruby_compat.rb:22:in `select': closed stream (IOError)

I'm running vagrant 1.0.3 and VirtualBox 4.1.14

Update: I have the nictype fix (config.vm.customize(["modifyvm", :id, "--nictype1", "Am79C973"])), and if I go in through the host-only networking IP, it takes longer to fail, but still fails eventually.

Update 2: Not sure of the relevance, but it always seems to fail while installing python-memcached via pip. Seems related to https://github.com/pypa/pip/issues/205

I can reproduce this bug over and over. It also happens to me when installing python-memcached and the workaround does not help mitigate the effects of the issue.

@bgreenlee im seeing the exact same problem and always as it tries to install python-memcache. did you find a fix?

@mikeywaites My workaround was to download python-memcached manually and run pip install directly on the tar file.

I've seen this happening on a CentOS 6 box, running a sizeable puppet run from a shell provisioner. Prior to the stacktrace being thrown, the output from my "vagrant up" run pauses, and a shell opened using "vagrant ssh" also gets stuck and is then usually thrown out with a "connection closed by remote host".

My VM has a host-only network interface configured, and a straight ssh connection to that address continues to work without trouble throughout.

I'm happy to provide you with a copy of my setup if that would help.

This isn't happening repeatably enough for me to be able to have a good run at debugging, but on the occasions when it happens, I'm gathering what data I can. This morning, my "vagrant ssh" initiated sessions all got stuck whilst doing a "vagrant provision". I was able to get in via ssh to my host-only network interface and attach strace to the sshd process. I saw this when attempting to make a new "vagrant ssh" initiated connection:

[pid  5586] getsockopt(3, SOL_IP, IP_OPTIONS, "", [0]) = 0
[pid  5586] getpeername(3, {sa_family=AF_INET, sin_port=htons(54185), sin_addr=inet_addr("10.0.2.2")}, [16]) = 0
[pid  5586] getpeername(3, {sa_family=AF_INET, sin_port=htons(54185), sin_addr=inet_addr("10.0.2.2")}, [16]) = 0
[pid  5586] getsockname(3, {sa_family=AF_INET, sin_port=htons(22), sin_addr=inet_addr("10.0.2.15")}, [16]) = 0
[pid  5586] open("/etc/hosts.allow", O_RDONLY) = 4
[pid  5586] fstat(4, {st_mode=S_IFREG|0644, st_size=370, ...}) = 0
[pid  5586] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fdfe8965000
[pid  5586] read(4, "#\n# hosts.allow\tThis file contai"..., 4096) = 370
[pid  5586] read(4, "", 4096)           = 0
[pid  5586] close(4)                    = 0
[pid  5586] munmap(0x7fdfe8965000, 4096) = 0
[pid  5586] open("/etc/hosts.deny", O_RDONLY) = 4
[pid  5586] fstat(4, {st_mode=S_IFREG|0644, st_size=460, ...}) = 0
[pid  5586] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fdfe8965000
[pid  5586] read(4, "#\n# hosts.deny\tThis file contain"..., 4096) = 460
[pid  5586] read(4, "", 4096)           = 0
[pid  5586] close(4)                    = 0
[pid  5586] munmap(0x7fdfe8965000, 4096) = 0
[pid  5586] rt_sigaction(SIGALRM, NULL, {SIG_DFL, [], 0}, 8) = 0
[pid  5586] rt_sigaction(SIGALRM, {0x7fdfe8977300, [], SA_RESTORER|SA_INTERRUPT, 0x7fdfe5cea900}, NULL, 8) = 0
[pid  5586] alarm(120)                  = 0
[pid  5586] write(3, "SSH-2.0-OpenSSH_5.3\r\n", 21) = 21
[pid  5586] read(3,

The ssh connection hits sshd, and then stalls, with sshd waiting on communication from the client. This happens whether I use "vagrant ssh" or use ssh directly from the commandline, and only on the NATed ssh port.

I did have

ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r

in my ssh client config, but removing this didn't make a difference.

I have my suspicions that that is a VirtualBox bug, but I'm not sure how I'd rule that out.

I'm certain that it is a VB bug. Assuming that all of these issues are indeed the very same thing. I could reproduce the hang of a NAT interface without any Vagrant in the picture.

@jtopper Does your Puppet run hammer the network interface a fair amount?

@dcarley It's relatively heavy, yeah, lots of RPM downloads and such.

My workaround for this is to double ctrl-c vagrant as soon as it get into the provisioning step of the up command, and then run the provisioning step manually, that way any failures don't trigger the VM deletion.

@mitchellh: Perhaps the problem could be neutralised slightly by reworking the exception handling on the up command. I can see why it makes sense to destroy the VM if the box import etc goes awry, however I don't think an error during the provisioning step of up should necessarily cause the destruction of the VM.

The "`select': closed stream (IOError)" error happens to me whenever I run "vagrant provision" on a host/guest Ubuntu.

I have a very simple recipe that hangs when it tries to run rbenv_gem "rails". I'm unable to provision my vagrant machine. The workaround mentioned (modifyvm nictype1) by @dcarley didn't work for me.

To mitigate this problem, for now I'm simply logging onto the VM and executing Chef-Solo by hand. Totally defeats the purpose of Vagrant, doesn't it? :( Hope one day this gets fixed somehow...

I agree with @leth 's comment above "I don't think an error during the provisioning step of up should necessarily cause the destruction of the VM"

A better workaround: vagrant up --no-provision && vagrant provision.

@leth when I try that I get the following error:

"Shared folders that Chef requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a vagrant reload so that the proper shared
folders will prepared and mounted on the VM."

this comes at the provision stage

@anentropic What version of Vagrant? There was a change in #803 that made this possible.

@dcarley I'm on:

$ vagrant --version
Vagrant version 1.0.1

ok, my version is too old to have that fix...

cool, works under v1.0.3 :)

although it doesn't alleviate the problem... installing a big pip requirements can still crash the provision step as before, and it often crashes the same way on repeated tries of vagrant provision until you get a run that succeeds.
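
Since several people describe rerunning vagrant provision until one attempt succeeds, that pattern can be expressed as a generic Ruby retry helper. This is a hypothetical sketch of the workaround pattern, not anything in Vagrant itself; the rescued IOError mirrors the exception in the traces above.

```ruby
# Generic retry helper for a flaky step: rerun the block until it
# succeeds or the attempt budget is exhausted, then re-raise.
# `max_attempts` and the rescued exception class are illustrative.
def retry_flaky(max_attempts)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue IOError
    retry if attempts < max_attempts
    raise
  end
end
```

For example, something like retry_flaky(5) { system("vagrant provision") or raise IOError } would keep retrying the provision step up to five times (again, a sketch of the pattern, not a recommendation to wrap Vagrant this way).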

@leth 's solution helped but, like @anentropic, I still have issues when I have a vagrant provision that runs more than a couple of minutes.

I had the same issue while running a pretty complex multi-step provision setup. As noted by a few folks above, the suggestion in https://github.com/mitchellh/vagrant/issues/516#issuecomment-3998630 worked for me. While investigating the issue, I also found http://www.virtualbox.org/manual/ch09.html#changenat. Nothing described there looks like an immediate solution for the problem (I would be looking for something like a "NAT port mapping table capacity"), but nonetheless, tweaking network buffer sizes and/or DNS modes may be a good idea for the desperate. I didn't get a chance to try them, as again, changing the NIC type to Am79C973 helped me.

Bitten as well by this (VBox 4.1.24), with long python pip installs; the trick at https://github.com/mitchellh/vagrant/issues/516#issuecomment-3998630 seems to fix things .... so far.

For me this happens while compiling php.
This is after the VM already has been provisioned.
The network interface swap didn't fix it.
(VBox 4.2.10)

Same problem.

Terminal output: http://pastie.org/7302608
VagrantFile: http://pastie.org/7302638

Operating System: OS X 10.8.3
Vagrant: 1.1.5
VirtualBox: 4.2.10 r84104

Edit: It seems this problem happens on vagrant reload specifically

In Vagrant 1.2.0 I send a keep-alive packet every 5 seconds so this should work around this issue. YAY!

Same problem here on my MacBook Air (WLAN), even on Vagrant 1.2.1. I'm using OS X 10.8.3 and VirtualBox 4.2.12. After some digging, I've found out that it works over a shared Bluetooth network connection via my iPhone, and on my Mac Mini, which is connected directly to Ethernet. I've tried some MTU size tweaking, but no success.

Like @koellcode this is still a problem for me:

VirtualBox 4.2.12
Vagrant 1.2.1
Debian 6 box

It occurs just trying to run 'apt-get update' once inside the VM (I first encountered the problem when provisioning via Puppet, but found even without the Puppet provisioning, an apt-get update is all it takes to force this shutdown).

I also tried the config.vm.provision NIC modification from @dcarley but it didn't help.

Also makes no difference over wifi or wired ethernet.

FWIW, a new router (or maybe only a reboot of the router) fixed it, which leads me to think it was some sort of network or MTU issue

like @mig5 i've switched my router too. now it works...

Ugh. Being bitten as well. Current vagrant and Virtualbox, OS X 10.8.3.

I even saw the box die randomly when logged in with SSH interactively. :(

Had the problem yesterday and had to fiddle with Vagrant and VirtualBox for half of the night.
Ubuntu Precise 12.04.
Vagrant was at 1.2.2 the whole time.
Been using the VirtualBox packaged with Ubuntu (4.1.12), but Oracle's VirtualBox didn't work either.
It finally worked when switching off the VT extensions in the VM config. At least once, that is; I currently don't dare to run the process one more time, as I fear being unable to work again if it happens again...
Not sure if that is really a Vagrant or a VirtualBox issue, either.

I had this problem also: Host Debian Wheezy, guest homegrown CentOs 6.4 Vagrant box, both 64 Bit, Vagrant 1.2.3, Virtualbox 4.2.16.

My Vagrant run would execute, via a shell provisioner, the command yum list installed pretty much first thing. This command hammers the network a bit while looking for the fastest mirror (fastestmirror), and it would also crash about half the time.

Here, "crash" means "Vagrant destroys the virtual machine". The typical last words from such a run:

[default] Destroying VM and associated drives...
/opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/ruby_compat.rb:30:in `select': closed stream (IOError)
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/ruby_compat.rb:30:in `io_select'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/transport/packet_stream.rb:73:in `available_for_read?'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/transport/packet_stream.rb:85:in `next_packet'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/transport/session.rb:172:in `block in poll_message'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/transport/session.rb:167:in `loop'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/transport/session.rb:167:in `poll_message'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:454:in `dispatch_incoming_packets'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:216:in `preprocess'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:200:in `process'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:164:in `block in loop'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:164:in `loop'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/session.rb:164:in `loop'
    from /opt/vagrant/embedded/gems/gems/net-ssh-2.6.8/lib/net/ssh/connection/channel.rb:269:in `wait'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/communicators/ssh/communicator.rb:318:in `shell_execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/communicators/ssh/communicator.rb:61:in `block in execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/communicators/ssh/communicator.rb:139:in `connect'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/communicators/ssh/communicator.rb:60:in `execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/communicators/ssh/communicator.rb:80:in `sudo'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/provisioners/shell/provisioner.rb:31:in `block (2 levels) in provision'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/provisioners/shell/provisioner.rb:14:in `tap'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/provisioners/shell/provisioner.rb:14:in `block in provision'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/provisioners/shell/provisioner.rb:78:in `with_script_file'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/provisioners/shell/provisioner.rb:12:in `provision'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/provision.rb:65:in `run_provisioner'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/provision.rb:53:in `block in call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/provision.rb:49:in `each'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/provision.rb:49:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:13:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/set_name.rb:35:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/clean_machine_folder.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:57:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builder.rb:116:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/machine.rb:147:in `action'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'

I would appreciate being able to make Vagrant _not_ destroy the machine, so I can sift through the remaining rubble.

This is a Heisenbug: When I set VAGRANT_LOG=debug, the problem disappears.

I have also been able to reproduce the crash once or twice by removing that shell provisioner from my Vagrantfile and instead using my host's native ssh command to connect to the guest, running `sudo -s` there, and then executing `yum list installed` manually. In that case, "crash" means the machine is stopped, according to the VirtualBox GUI.

For me, the problem frequency seemed to decrease when I enlarged VirtualBox's NAT socket buffers from their default of 64 kB and also increased the memory the guest gets:

config.vm.provider "virtualbox" do |vbox|
  # Per the VBoxManage docs, --natsettings1 takes [mtu],[socksnd],[sockrcv],
  # [tcpsnd],[tcprcv]; 0 keeps the default, buffer sizes are in kilobytes.
  vbox.customize ["modifyvm", :id, "--memory", "2048", "--natsettings1", "0,400,400,0,0"]
end

But, through several attempts at fiddling with these values, I could not make the problem go away completely. Also, the provisioning process would frequently fail at a later stage.

How do I re-open this issue?

It's not fixed yet, so reopening the ticket would be great...

Please re-open this bug. I've seen your comments in https://github.com/mitchellh/vagrant/issues/2010, but honestly this is still an ongoing issue. We see this about 30% of the time during our Puppet provisioning process. For what it's worth, I think in our case this happens when Puppet does a quick reload of the networking stack after making some changes. I'm not sure if that helps diagnose the issue.

Since this is clearly an outstanding issue, I suggest leaving the bug open until it can be resolved. It sends an odd message to close the bug without any resolution.

I think I have a reproducible scenario. This Vagrantfile is meant to stand up an instance of Docker and then load this Dockerfile, but it consistently (100% of the time on my machine after multiple runs) breaks with the error found in this ticket. Tested on Vagrant 1.2.7. Halp?

@TroyGoode I was able to reproduce using your setup. Looking into it. THANK YOU!

:+1: glad my noobishness could be of some help!

This is fixed in master.

great news. thanks @mitchellh. any idea when master'll be cut into a new release?

Not at the moment, but "soon"

Just to be sure - did the fix for #516 make it into Vagrant 1.3.2/1.3.3? I couldn't find anything in the Changelog.

I experience this problem with Vagrant 1.3.3, VirtualBox 4.2.18 r88780 on Debian Wheezy.

Same problem here, same versions as @aknrdureegaesr, on OS X 10.8.4.

Maybe this has something to do with https://github.com/net-ssh/net-ssh/issues/102.

@make: Could be. Two "but"s, though:

But 1: The Vagrant + VirtualBox setup that showed the problem didn't do anything multithreaded, at least nothing I'm aware of.

But 2: My previous observation:

> I have also been able to reproduce the crash once or twice by removing that shell provisioner from my Vagrantfile and instead using my host's native ssh command to connect to the guest, running `sudo -s` there, and then executing `yum list installed` manually.

I may be wrong, but my finger tentatively points to the VirtualBox side of things. VirtualBox may have a race condition in its network code.

Personally, I have since moved over to KVM, controlled by home-grown Rake. (If Vagrant works for you, that's not a path you want to follow. But, on the other hand, it _is_ navigable.)

We have been experiencing a very similar issue. We upgraded from Vagrant 1.1.0 to 1.2.2 (running VirtualBox 4.2.16r86992), and ever since the upgrade, whenever we try to start up a VM that has a second (eth1) host-only NIC adapter, Vagrant throws an error while provisioning this second NIC:
"[default] Configuring and enabling network interfaces...
/opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:346:in `rmdir': No such file or directory - /tmp/vagrant20131017-30789-11m1r7h.lock (Errno::ENOENT)"
This never happened before the upgrade. We even downgraded back to Vagrant 1.1.0 but hit the same issue, so we are now back on Vagrant 1.2.2, still with the same problem. Any help on how to resolve this [[Vagrant fails during the provisioning of the second host-only virtual NIC]] would be appreciated. Thanks.

Here is the stack trace.
[email protected]:/opt/sociocast/operations/vagrant/vms/Centos64BoxyHostonly_test] vagrant reload
[default] Attempting graceful shutdown of VM...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
[default] VM booted and ready for use!
[default] Configuring and enabling network interfaces...
/opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:346:in `rmdir': No such file or directory - /tmp/vagrant20131017-30789-11m1r7h.lock (Errno::ENOENT)
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:346:in `rmdir'
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:338:in `ensure in locking'
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:338:in `locking'
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:144:in `block in initialize'
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tmpdir.rb:133:in `create'
    from /opt/vagrant/embedded/lib/ruby/1.9.1/tempfile.rb:134:in `initialize'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/guests/redhat/cap/configure_networks.rb:36:in `new'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/guests/redhat/cap/configure_networks.rb:36:in `block in configure_networks'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/guests/redhat/cap/configure_networks.rb:21:in `each'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/guests/redhat/cap/configure_networks.rb:21:in `configure_networks'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/guest.rb:130:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/guest.rb:130:in `capability'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/network.rb:115:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/clear_network_interfaces.rb:26:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/share_folders.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/clear_shared_folders.rb:12:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/prepare_nfs_settings.rb:11:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/nfs.rb:28:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/prune_nfs_exports.rb:15:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/handle_forwarded_port_collisions.rb:118:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/prepare_forwarded_port_collision_params.rb:30:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/env_set.rb:19:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/provision.rb:45:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:13:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/set_name.rb:35:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/clean_machine_folder.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:57:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builder.rb:116:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/machine.rb:147:in `action'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/commands/reload/command.rb:29:in `block in execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/plugin/v2/command.rb:182:in `block in with_target_vms'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/plugin/v2/command.rb:180:in `each'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/plugin/v2/command.rb:180:in `with_target_vms'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/commands/reload/command.rb:28:in `execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/cli.rb:46:in `execute'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/environment.rb:467:in `cli'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.2/bin/vagrant:84:in `<top (required)>'
    from /opt/vagrant/bin/../embedded/gems/bin/vagrant:23:in `load'
    from /opt/vagrant/bin/../embedded/gems/bin/vagrant:23:in `<main>'
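The setup that triggers this can be sketched in a Vagrantfile; the box name and IP below are assumptions, but the key ingredient per the report above is the second, host-only NIC:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.4"  # assumed box name
  # Adapter 1 is Vagrant's NAT interface; this adds a host-only eth1,
  # which is the NIC whose configuration step fails in the trace above.
  config.vm.network :private_network, ip: "192.168.50.10"
end
```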

@arnaudlawson Does this happen every time? This is coming out the Ruby standard library...

Yes, it is happening every time. It doesn't seem to be a Ruby issue. This problem only started after we upgraded from Vagrant 1.1.0 to 1.2.2. We even tried to downgrade back to Vagrant 1.1.0 but still encountered the same issue. So now we are back on Vagrant 1.2.2, running on CentOS release 5.7 (Final) with VirtualBox 4.2.16r86992. Every time we do `vagrant up` the problem comes up, and we have several boxes that use the second virtual NIC (eth1) to talk to each other. Any help would be appreciated. Thanks.

@mitchellh

Any thoughts?

@mitchellh

I found the problem: the /tmp directory was full on this server. "A directory can have at most 31998 subdirectories" when using an ext3 filesystem. http://en.wikipedia.org/wiki/Ext3#cite_note-0

Vagrant uses the /tmp directory to build boxes, so once I cleaned up /tmp I was able to rebuild the box.

I'm unfortunately seeing this error again after updating from 1.8.1 to 1.8.4.
Host: Win7 Enterprise
Vagrant: 1.8.4
Chef DK: chefdk-0.13.21-1-x86
Virtualbox: Version 5.0.20 r106931
vagrant plugin list:
vagrant-berkshelf (4.1.0)
vagrant-omnibus (1.4.1)
vagrant-share (1.1.5, system)

Vagrantfile: http://pastie.org/10877872
vagrant up output: http://pastie.org/10877969
