$ vagrant -v
Vagrant 1.9.1
$ vagrant plugin list
vagrant-share (1.1.6, system)
$ vboxmanage -v
5.1.10r112026
$ vboxmanage list extpacks
Extension Packs: 1
Pack no. 0: Oracle VM VirtualBox Extension Pack
Version: 5.1.10
Revision: 112026
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
$ uname -a
Linux ubuntu 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
https://atlas.hashicorp.com/centos/boxes/7/versions/1610.01
# -*- mode: ruby -*-
# vi: set ft=ruby :

BOX = ENV.fetch('BOX', 'centos/7')
VERSION = ENV.fetch('VERSION', '1610.01')
CONTROLLER = ENV.fetch('CONTROLLER', 'IDE Controller')

Vagrant.configure(2) do |config|
  config.vm.define "ideControllerProblem" do |machine|
    machine.vm.box = BOX
    machine.vm.box_url = machine.vm.box
    machine.vm.box_version = VERSION
    machine.vm.provider "virtualbox" do |v|
      v.memory = 256
      v.cpus = 1
      disk = 'sdb.vdi'
      if !File.exist?(disk)
        v.customize ['createhd', '--filename', disk, '--size', 64, '--variant', 'Fixed']
        v.customize ['modifyhd', disk, '--type', 'writethrough']
      end
      v.customize ['storageattach', :id, '--storagectl', CONTROLLER, '--port', 0, '--device', 1, '--type', 'hdd', '--medium', disk]
    end
  end
  config.vm.define "ideControllerProblem" do |machine|
    machine.vm.provision :shell, :inline => "hostname ideControllerProblem", run: "always"
  end
end
The Vagrantfile is also available at https://gist.githubusercontent.com/marcindulak/1b0ee3eda0bc94617023e85a62e1cac6/raw/ec8706ad136ddac8dc86adb90c30c0e76752a414/Vagrantfile
Actually I'm not sure, but I expect vagrant destroy -f to bring me back to the initial state.
vagrant up followed by vagrant destroy -f and another vagrant up results in two different errors from the two vagrant up runs:
rm -rf /tmp/t00
mkdir /tmp/t00
cd /tmp/t00
wget https://gist.githubusercontent.com/marcindulak/1b0ee3eda0bc94617023e85a62e1cac6/raw/ec8706ad136ddac8dc86adb90c30c0e76752a414/Vagrantfile
vagrant destroy -f; rm -f sdb.vdi; killall VBoxHeadless 2> /dev/null; rm -rf ~/VirtualBox\ VMs/t00_ideControllerProblem*
vagrant up
vagrant destroy -f; rm -f sdb.vdi
ls -d ~/VirtualBox\ VMs/t00_ideControllerProblem* 2>&1
#sleep 10
ls sdb.vdi
vagrant up
ls -d ~/VirtualBox\ VMs/t00_ideControllerProblem* 2>&1
vagrant destroy -f; rm -f sdb.vdi; killall VBoxHeadless 2> /dev/null; rm -rf ~/VirtualBox\ VMs/t00_ideControllerProblem*
As reported at https://github.com/mitchellh/vagrant/issues/8105, the first vagrant up results in:
A customization command failed:
["storageattach", :id, "--storagectl", "IDE Controller", "--port", 0, "--device", 1, "--type", "hdd", "--medium", "sdb.vdi"]
The following error was experienced:
#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["storageattach", "c59c8613-2580-4471-bb04-562737b1e516", "--storagectl", "IDE Controller", "--port", "0", "--device", "1", "--type", "hdd", "--medium", "sdb.vdi"]
Stderr: VBoxManage: error: Could not find a controller named 'IDE Controller'
After vagrant destroy -f; rm -f sdb.vdi there is no sdb.vdi in the current directory and no VM under ~/VirtualBox VMs/t00_ideControllerProblem*; nevertheless, the second vagrant up fails with:
A customization command failed:
["createhd", "--filename", "sdb.vdi", "--size", 64, "--variant", "Fixed"]
The following error was experienced:
#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["createhd", "--filename", "sdb.vdi", "--size", "64", "--variant", "Fixed"]
Stderr: 0%...
Progress state: VBOX_E_FILE_ERROR
VBoxManage: error: Failed to create medium
VBoxManage: error: Could not create the medium storage unit '/tmp/t00/sdb.vdi'.
VBoxManage: error: VDI: cannot create image '/tmp/t00/sdb.vdi' (VERR_ALREADY_EXISTS)
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component MediumWrap, interface IMedium
VBoxManage: error: Context: "RTEXITCODE handleCreateMedium(HandlerArg*)" at line 450 of file VBoxManageDisk.cpp
Note that this behavior is not reliably reproducible; it happens more often when all the vagrant commands are run from a script. You may need to run the script several times for the failure to occur. Inserting a sleep 10 before the second vagrant up makes the problem appear less frequently.
The VDI: cannot create image '*.vdi' (VERR_ALREADY_EXISTS) problem is common when attaching storage, and if the error becomes persistent (yes, that can happen) it is "solved" by changing the path to the storage file.
The problem also seems to disappear when the whole vagrant project (the Vagrantfile) is moved to another directory (with mv).
http://stackoverflow.com/questions/36861101/vagrant-up-failing-when-calling-createhd-with-error-verr-already-exists-on-new-v
https://github.com/aidanns/vagrant-reload/issues/6
Another strange mention of this issue: https://github.com/mitchellh/vagrant/issues/7743
The first error is due to the storageattach command failing to find the IDE Controller. The controller names differ depending on the underlying VirtualBox machine (some boxes call it simply "IDE", others "IDE Controller", while others use "SATA" or "SATA Controller"). A way of enumerating and/or standardizing the attachment would be very nice.
The second error is most likely because the sdb.vdi was never deleted from VirtualBox. This sometimes happens when vagrant fails and is a real annoyance, imho. vagrant destroy should be able to pick it up and destroy it on its own (I'm not sure which party performs this, but the VirtualBox machine is destroyed by it, and at the same time the vdi disappears, so...). I think that if storageattach fails, the vdi just stays there and is not deleted.
Edit: the name of the storage controller can be found with:
vboxmanage showvminfo boxname|grep "Storage Controller Name"
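Building on that, here is a rough, hypothetical Ruby helper for a Vagrantfile (the function name and the CONTROLLER fallback are mine, not part of Vagrant): once the VM has been imported into VirtualBox it reads the first storage controller name from showvminfo --machinereadable; before the VM exists it falls back to an environment override, so the very first vagrant up still needs a correct guess.

# Hypothetical helper: ask VirtualBox for the first storage controller name of
# an already-imported VM, falling back to an ENV override before the VM exists.
# vm_name is whatever `vboxmanage list vms` shows for this project.
def storage_controller_for(vm_name, default = 'IDE Controller')
  info = `vboxmanage showvminfo #{vm_name} --machinereadable 2>/dev/null`
  info[/^storagecontrollername0="(.+)"$/, 1] || ENV.fetch('CONTROLLER', default)
end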
I have the same problem, except I can reproduce it 100% of the time. Every time I "vagrant up" this one, it says the node1_disk1.vdi file already exists. It does not exist beforehand, but by the time the run dies both vdi files have been created.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # Spin up the VMs and make sure they are updated.
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.box = "ubuntu/xenial64"
      node.vm.hostname = "node#{i}"
      node.vm.network "private_network", ip: "192.168.50.1#{i}"

      # Each VM needs two block devices: 1GB of local storage each
      # for the purpose of this exercise, attached to the VM.
      disk1 = "./node#{i}_disk1.vdi"
      disk2 = "./node#{i}_disk2.vdi"

      node.vm.provider "virtualbox" do |vb|
        # If the disks don't exist, create them
        unless FileTest.exist?(disk1)
          vb.customize ['createhd', '--filename', disk1, '--variant', 'Fixed', '--size', 1 * 1024]
        end
        unless FileTest.exist?(disk2)
          vb.customize ['createhd', '--filename', disk2, '--variant', 'Fixed', '--size', 1 * 1024]
        end
        # Attach the drives to the SCSI controller
        vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', disk1]
        vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', disk2]
      end

      # Start doing some real work.
      # If a block device won't mount, it needs a filesystem first.
      node.vm.provision "shell", inline: <<-SHELL
        sudo mkdir -p /mnt/persistent1
        sudo mkdir -p /mnt/persistent2
        if ! (sudo mount /dev/sdc /mnt/persistent1); then sudo mkfs.ext4 /dev/sdc; sudo mount /dev/sdc /mnt/persistent1; fi
        if ! (sudo mount /dev/sdd /mnt/persistent2); then sudo mkfs.ext4 /dev/sdd; sudo mount /dev/sdd /mnt/persistent2; fi
        # Update and upgrade
        sudo apt-get update
        sudo apt-get upgrade -y
        # Install Docker stable
        sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        sudo apt-get update
        sudo apt-get install -y docker-ce
      SHELL
    end
  end
end
I saw the same behavior tonight. The workaround that worked for me was to rename the file.
And leave a wayward .vdi of who knows what size just hanging there? Not much of a workaround if you ask me...
When this happens I just "vagrant destroy", delete the whole directory, assign a new name to the disk, and retry; the second time it succeeds. Not a very pleasant experience indeed, @flybd5.
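For what it's worth, one way to automate that cleanup is a destroy trigger. This is only a sketch, assuming Vagrant 2.1+ (built-in triggers) and that the leftover file is the sdb.vdi from the Vagrantfile above:

config.trigger.after :destroy do |trigger|
  trigger.name = "Deregister leftover disk"
  trigger.on_error = :continue  # nothing to clean up is not an error
  # closemedium drops the stale entry from VirtualBox's media registry;
  # --delete also removes the backing file when it still exists.
  trigger.run = { inline: "vboxmanage closemedium disk sdb.vdi --delete" }
end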
+1, same on OSX with vagrant 2.0.1 and VirtualBox 5.2.4. Happens after using the reload plugin when starting the VM for the second time. Workaround is moving the working dir...
This bug is still pretty annoying. And it has been present since December 2016? Wow...
This seems to be a bug in VirtualBox. It is still present in the latest Windows version at this time, 5.2.8. The error can be reproduced just by running the command vboxmanage.exe createhd --filename foo.vdi --size 10240. If you then delete that .vdi file and run the same command again, you get VERR_ALREADY_EXISTS.
To work around this, you can run vboxmanage.exe list hdds, which will give you a list of virtual hard disks along with their UUIDs; then select the problem disk's UUID and run...
vboxmanage.exe closemedium
After doing this you can create the disk again using the createhd option.
Just to clarify the previous comment with an example -
`PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\ceph> vboxmanage list hdds
UUID: 73296b3f-99e2-4384-929d-f68d9b2d1633
Parent UUID: base
State: locked write
Type: normal (base)
Location: C:\Users\ksvietme\VirtualBox VMs\CentosAdminSystem\CentosAdminSystem Clone-disk1.vdi
Storage format: VDI
Capacity: 35720 MBytes
Encryption: disabled

UUID: 90b633b2-b672-445d-9a1b-36549c370785
Parent UUID: base
State: inaccessible
Type: normal (base)
Location: C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\centos\Disk-0.vdi
Storage format: VDI
Capacity: 512 MBytes
Encryption: disabled`
PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\ceph> vboxmanage closemedium disk 90b633b2-b672-445d-9a1b-36549c370785 --delete
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\ceph>
And the "createhdd" command is wrong - correct syntax with newer Virtualbox - on Windows is:
`PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\MultiServer> vboxmanage.exe createmedium --filename foo.vdi --size 10240
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 1d5b7e7b-0cc1-4ca5-b879-4d70bf6fb184`
I don't believe this is a VirtualBox issue.
Why?
I was able to successfully run the "createmedium" subcommand in the project directory.
I deleted the .vagrant directory - no change.
I have other projects using the exact same syntax for disk creation that work fine.
Deleting all of the stale disks did not help (see the previous comment), so this is not a VBox state issue - unless VBox is storing state somewhere else we can't see with "list hdds".
Moving the Vagrantfile to a new folder resolved the issue.
Vagrant is storing this state somewhere. I thought it might be in .vagrant, but deleting and recreating it didn't help. I dug through the vagrant.d directories and didn't see anything storing disk information.
Vagrant - "current_version":"2.1.2","current_release":1530046733
VBox - 5.2.12r122591
Perhaps this is a separate issue, but I'm wondering if "createhd" isn't deprecated now. There is no longer a "createhd" subcommand in the latest VirtualBox, and "createhd" and "createmedium" are interchangeable in the Vagrantfile. I am using createhd in my customizations because that is what all the examples out there use.
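For reference, here is a minimal sketch of the provider block using the newer spelling, reusing the disk1 variable and vb block names from the Vagrantfile earlier in this thread; on recent VirtualBox releases createhd is still accepted as an alias of createmedium disk:

node.vm.provider "virtualbox" do |vb|
  unless File.exist?(disk1)
    # `createmedium disk` is the newer name for `createhd`; the arguments are the same.
    vb.customize ['createmedium', 'disk', '--filename', disk1, '--variant', 'Fixed', '--size', 1024]
  end
end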
Just to clarify, I was specifically commenting on this being an issue with VirtualBox 5.2.8, which I think was the latest version at the time I posted. It's possible things have changed on the VirtualBox side with newer versions, but I haven't checked this in a while.
Vagrant on Windows can end up leaving a bit of cruft behind. I find that removing the .vagrant folder fixes most situations, as does making sure you use the latest VBox, plugins, and extension pack.
Vagrant 2.2.3
$>vboxmanage -v
6.0.4r128413
Host: macOS mojave, version 10.14.2
Same as above - the bug still exists, and it exhausts me.
There are probably still references left in ~/.config/VirtualBox/VirtualBox.xml.
Instead of using vagrant destroy, delete the machine manually in VirtualBox.
Then check the registered disks with
vboxmanage list hdds
and remove any leftover items with
vboxmanage closemedium <UUID> --delete
Normally, this solves the problem.
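To make those steps repeatable, here is a small hypothetical Ruby script (the file name and the matching logic are mine, assuming vboxmanage is on the PATH) that deregisters every disk VirtualBox reports as inaccessible, which is the state leftover media usually end up in:

#!/usr/bin/env ruby
# cleanup_stale_hdds.rb (hypothetical): deregister and delete every hard disk
# that `vboxmanage list hdds` reports as inaccessible.
`vboxmanage list hdds`.split(/\n\s*\n/).each do |entry|
  uuid  = entry[/^UUID:\s+(\S+)/, 1]
  state = entry[/^State:\s+(.+)$/, 1]
  next unless uuid && state.to_s =~ /inaccessible/i
  puts "closing medium #{uuid}"
  system('vboxmanage', 'closemedium', 'disk', uuid, '--delete')
end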