OK,
Host: Windows 7 Ultimate x64
Guest: Ubuntu 12.04
Twenty minutes ago, I had 4 different VMs managed by Vagrant listed in VirtualBox. I rebooted one of them via SSH and ran $ vagrant halt [in Git-Bash]. Then in VirtualBox, I powered off all of the other VMs and exited VirtualBox manually [File -> Exit].
I then ran $ vagrant up on the VM I had just halted. It immediately proceeded to set up and provision a brand-new VM, so I cancelled it.
I opened up VirtualBox and all of the VMs were gone, except the new one being provisioned by Vagrant. $ VBoxManage list vms shows nothing but the new, erroneously provisioned one. $ vagrant status shows:
Current machine states:
default poweroff (virtualbox)
The VM is powered off. To restart the VM, simply run `vagrant up`
In the other vagrant projects, it says:
Current machine states:
default not created (virtualbox)
I found every single one of the virtual machines in C:\Users\username\VirtualBox VMs, so it's not like they've been deleted.
I have a highly-configured guest. I can't just run $ vagrant up and create a new one. I need to get the old one up.
I figured out my own workaround:

1. Exit all `vagrant ssh` sessions.
2. Run `VBoxManage list vms` and find your machine's UUID between the `{}`s.
3. Write that UUID into the id file: echo -n __UUID__ > .vagrant/machines/default/virtualbox/id

e.g.,

echo -n b5e14d9a-f416-4f2f-a989-aa4698b3613b > .vagrant/machines/default/virtualbox/id
Now "vagrant up" should work as normal, but it may fail with:
default: Warning: Connection timeout. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
If so, run the following:

$ cp ~/.vagrant.d/insecure_private_key .vagrant/machines/default/virtualbox/private_key
$ vagrant halt
$ wget https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
$ cat vagrant.pub >> .ssh/authorized_keys
Vagrant may try to provision this machine once more. It will fail at
==> default: Running provisioner: chef_solo...
==> default: Detected Chef (latest) is already installed
Shared folders that Chef requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a `vagrant reload` so that the proper shared
folders will be prepared and mounted on the VM.
If so, run:

$ vagrant reload
Your system should now be back to normal.
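For what it's worth, the UUID-lookup half of the workaround can be scripted. This is only a sketch: the machine name `myproject_default_1390246043` in the example line is a made-up placeholder, and `VBoxManage` must be run from the shell that still sees your VMs.

```shell
# extract_uuid: pull the UUID out of one line of `VBoxManage list vms` output.
# Lines look like (machine name is a made-up placeholder):
#   "myproject_default_1390246043" {b5e14d9a-f416-4f2f-a989-aa4698b3613b}
extract_uuid() {
    # Keep only the text between the braces.
    echo "$1" | sed 's/.*{\(.*\)}.*/\1/'
}

line='"myproject_default_1390246043" {b5e14d9a-f416-4f2f-a989-aa4698b3613b}'
extract_uuid "$line"    # prints b5e14d9a-f416-4f2f-a989-aa4698b3613b

# Real use would be something like this (note: no trailing newline in the id file):
#   VBoxManage list vms | grep myproject | { read -r l; printf '%s' "$(extract_uuid "$l")"; } \
#       > .vagrant/machines/default/virtualbox/id
```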
The step at "Now `vagrant up` should work as normal, but it may fail with:" fails for me. Vagrant just goes ahead and brings up a new machine.
Losing perfectly good machines is one of the most annoying things about Vagrant. It usually tends to happen when my Vagrantfile has some changes in it.
I have had the same issue. I will give the above workaround a go, but this definitely needs fixing: it is a major failure and can cause problems not only for one specific machine but for all of them running in parallel in VirtualBox if you use more than one at the same time. Also, recovering them by putting in the UUID is better than nothing, but it isn't something you want to deal with every day.
As I guessed, this is Windows. Rest assured, your VMs are not lost; they are just in a folder that VirtualBox isn't looking at right now. The issue on Windows is that only one "VirtualBox server" can be running at any given time; on Windows it is tied to COM. Anyway, all of this is to say: you probably ran Vagrant in one shell, then in a different shell with a different home directory (Cygwin, UAC, etc.). That causes VirtualBox to "lose" the VMs.
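One way to check for this situation, sketched below: print the per-user config directory that each of your shells resolves. `VBOX_USER_HOME` is the override variable VirtualBox honours; the fallback path is my assumption of VirtualBox's default (`$HOME/.VirtualBox`, i.e. `%USERPROFILE%\.VirtualBox` on Windows).

```shell
# vbox_config_dir: where this shell's VirtualBox will look for VirtualBox.xml,
# the file that lists the registered VMs. VBOX_USER_HOME wins if set;
# otherwise assume the default of $HOME/.VirtualBox.
vbox_config_dir() {
    if [ -n "${VBOX_USER_HOME:-}" ]; then
        echo "$VBOX_USER_HOME"
    else
        echo "$HOME/.VirtualBox"
    fi
}

echo "This shell's VirtualBox config dir: $(vbox_config_dir)"
# Run the same line from Git-Bash, cmd.exe, and any UAC-elevated console;
# if two shells print different paths, each one is talking to a different
# "VirtualBox server" with its own list of VMs. You can cross-check with:
#   VBoxManage list systemproperties | grep "Default machine folder"
```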
This is REALLY a bug with VirtualBox. We've tried to work around it in many ways, but it's a really tricky one to detect. To avoid it going forward, never run Vagrant from multiple shells. We'll keep plugging along and try to think of a way to fix this.
So report this upstream? Thanks.
@hopeseekr I'm sure this has been reported to upstream for something like 6 years. We've seen this issue for years. There is nothing Vagrant can do about it except find a way to detect it and warn the user, but we haven't found a way to reliably do that yet. I'm going to try again for 1.8.0 to do this.
Maybe a command option like "Resurrect VMs"?
@hopeseekr But how would Vagrant know? Here is how it works currently:

1. Vagrant stores the machine's UUID in `.vagrant/machines/default/virtualbox/id`.
2. On each command, Vagrant asks VirtualBox whether a VM with that UUID exists, and VirtualBox says it doesn't.
3. Vagrant concludes the machine is not created.

So the issue is: how does Vagrant detect step 2 as "VirtualBox is running with a different data dir" vs. "VirtualBox is actually correct"? And therein is the super annoying bug.
I have an idea in 1.8.0 to also store some extra settings in the data directory that we can compare. At the very least, we can then warn, if not error out completely (ideal).
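As a rough illustration of that kind of check (not Vagrant's actual code), a wrapper could compare the UUID Vagrant already stores against what the current shell's VirtualBox reports, and warn before anything destructive happens. The helper function and warning text below are hypothetical:

```shell
# check_vm_known: warn if the UUID Vagrant stored is missing from this shell's
# VirtualBox registry. $1 = stored UUID, $2 = output of `VBoxManage list vms`.
# (Hypothetical helper -- Vagrant itself does not ship this.)
check_vm_known() {
    case "$2" in
        *"$1"*) echo "ok" ;;
        *) echo "warning: VM $1 not in this VirtualBox registry -- you may be in a shell with a different home directory" ;;
    esac
}

# Real use (run from the project directory):
#   check_vm_known "$(cat .vagrant/machines/default/virtualbox/id)" "$(VBoxManage list vms)"
```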
But if you start the VirtualBox GUI again as the correct user, the VMs will still be there, just unattached to Vagrant. So at the very least, the silver lining is that you don't lose any data.
Thank you very much for elucidating the problem, Mitchell. I will ponder this subconsciously for quite a bit. It took me hours just to figure out what happened to all my VMs, and it really hurt at a critical project juncture, so I lost a lot of sleep over this particular issue already.
Yeah I understand. I hope I can at least get you a warning for 1.8.0 so that this doesn't happen without notice again!
Please reference this bug report with either the warning or (better yet!) some solution, if you are ever able to work around their stupidity. Please also then direct me to a donations page that goes to you directly (dwolla preferred, paypal accepted). I will then donate to you $25 for a warning or $100 for a solution ($100 total for both warning plus solution).
Haha!
I was able to skip a bunch of the steps with the following:

1. Exit all `vagrant ssh` sessions.
2. Run `VBoxManage list vms` and take the UUID between the `{}`s.
3. echo -n __UUID__ > .vagrant/machines/default/virtualbox/id
   e.g., echo -n b5e14d9a-f416-4f2f-a989-aa4698b3613b > .vagrant/machines/default/virtualbox/id
4. cp ~/.vagrant.d/insecure_private_key .vagrant/machines/default/virtualbox/private_key
5. curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> ~/.ssh/authorized_keys

This happened to me just now with Vagrant 2.0.0, in the same way as described above.
config.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--cpuexecutioncap", "50"]
v.cpus = 2
# Allow symlinks - you need to run "vagrant up" in admin-elevated console
v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
end
PS C:\vm\ubuntu1604> vagrant status
Current machine states:
default not created (virtualbox)
The environment has not yet been created. Run `vagrant up` to
create the environment. If a machine is not created, only the
default provider will be shown. So if a provider is not listed,
then the machine is not created for that environment.
@Torniojaws - please open a new issue with your Vagrantfile and any other relevant information that the template asks for if you are experiencing this issue. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.