We made a copy of our existing VHDs and needed to spin them up in a new resource group. Azure returns an error that osProfile is not allowed, but Terraform requires it. When attaching an existing OS disk, Terraform should probably drop the requirement for os_profile.
Terraform v0.8.2
Your version of Terraform is out of date! The latest version
is 0.8.3. You can update by downloading from www.terraform.io
(Yes it is out of date, but there isn't a Windows build yet. I don't see any references in the CHANGELOG which fix this bug)
resource "azurerm_network_interface" "network" {
name = "computer-nic"
location = "East US 2"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "computer-ip"
subnet_id = "${azurerm_subnet.nonprodnet.id}"
private_ip_address_allocation = "dynamic"
}
}
resource "azurerm_virtual_machine" "computer" {
name = "computer"
location = "East US 2"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_interface_ids = ["${azurerm_network_interface.interface.id}"]
vm_size = "Standard_DS5_v2"
storage_os_disk {
name = "osdisk"
os_type = "linux"
vhd_uri = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}vhds/osdisk.vhd"
caching = "ReadWrite"
create_option = "attach"
}
storage_data_disk {
name = "disk01"
vhd_uri = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}vhds/disk01.vhd"
create_option = "attach"
disk_size_gb = "512"
lun = "1"
}
storage_data_disk {
name = "disk02"
vhd_uri = "${azurerm_storage_account.storargeaccount.primary_blob_endpoint}vhds/disk02.vhd"
create_option = "attach"
disk_size_gb = "512"
lun = "2"
}
os_profile {
computer_name = "computer"
admin_username = "username"
admin_password = "password"
}
os_profile_linux_config {
disable_password_authentication = false
ssh_keys = {
path="/home/username/.ssh/authorized_keys"
key_data="ssh-rsa clipped"
}
}
}
We should be able to attach an existing OS disk. Apparently when you do that, Azure complains about osProfile being passed, but os_profile is required by Terraform.
Bombs out with a REST error.
Steps to reproduce:
terraform apply

We are going to try pulling out the os_type on the storage_os_disk to see if this fixes the problem.
We did try copying the existing disk with FromImage and it bombed out.
Dropping os_type gives:
azurerm_virtual_machine.computer: compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidParameter" Message="Required parameter 'osDisk.osType' is missing (null)."
FYI: we recompiled with properties.osProfile removed from the func resourceArmVirtualMachine() just to validate that it would function, and it does. We probably need to turn this into a feature request: if create_option = "attach" is set on storage_os_disk, then don't allow os_profile to be set and don't pass it in with the request.
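To make that concrete, here is a minimal sketch of what the attach case could look like if os_profile became optional, as proposed above. This is not the provider's current behavior; resource and storage account names are placeholders carried over from the config above, and os_type is kept on storage_os_disk since the earlier error shows Azure rejects the request without osDisk.osType.

# Hypothetical sketch: attach an existing OS disk with no os_profile block.
# Assumes the proposed change where os_profile is optional whenever
# create_option = "attach"; all names below are placeholders.
resource "azurerm_virtual_machine" "attached" {
  name                  = "computer"
  location              = "East US 2"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.network.id}"]
  vm_size               = "Standard_DS5_v2"

  storage_os_disk {
    name          = "osdisk"
    os_type       = "linux"    # still required by Azure even when attaching
    vhd_uri       = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}vhds/osdisk.vhd"
    caching       = "ReadWrite"
    create_option = "attach"
  }

  # No os_profile / os_profile_linux_config: the credentials and SSH keys
  # already live inside the attached VHD.
}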
I just independently stumbled upon the same issue. Pretty much killed my whole morning trying to figure out how to make a module that supported a VM that could be re-attached to a disk. This completely blocked me.
From my experience, this is an Azure Resource Manager template limitation:
createOption: Attach and osProfile are mutually exclusive,
for example:
https://github.com/Azure/azure-quickstart-templates/blob/87ec9dedf8b78eda09fcaa9c0a1e2a99c935c5b4/201-vm-os-disk-and-data-disk-existing-vnet/azuredeploy.json#L134-L141
Anyone know how to handle the schema? Should I just make os_profile optional, but error out if createOption is not Attach?
Any news here??
I've been generally avoiding re-attaching VMs to disks with Terraform because of this, and the fact that we don't have a "Create or Attach" option to be used in modules. Having templates be so different between creating and attaching makes it virtually impossible to use Azure VMs in a module for production purposes. Too many settings require a VM destroy.
It seems to me that the Azure options and the Terraform template limitations are at odds with each other. I suspect Terraform needs to do some more intelligent things with Azure VM settings in order to meet its design goals for this resource type. It may need to peek into the storage account and see if the blob exists to change settings accordingly? Perhaps add a Terraform-level CreateOrAttach option for disks?
It does add complexity, but right now it's a pretty awful user experience.
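To illustrate the suggestion above, here is a purely hypothetical sketch of what a create-or-attach style option might look like in configuration. No such create_option value exists in the azurerm provider; this only shows the kind of interface being asked for, with placeholder names throughout.

# Hypothetical only: "create_or_attach" is NOT a real create_option value.
# The idea is that the provider would create the OS disk when the blob does
# not exist yet, and attach the existing blob when it does.
resource "azurerm_virtual_machine" "computer" {
  name                  = "computer"
  location              = "East US 2"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.network.id}"]
  vm_size               = "Standard_DS5_v2"

  storage_os_disk {
    name          = "osdisk"
    os_type       = "linux"
    vhd_uri       = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}vhds/osdisk.vhd"
    create_option = "create_or_attach"    # hypothetical value, not supported today
  }

  # os_profile would only be sent on the "create" path, and omitted when the
  # provider detects an existing blob and falls back to attaching it.
}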
I ran into this exact same issue. I am trying to use create_option = "Attach" and get the "os_profile is not allowed" error. Any progress on this?
Should we just make os_profile optional and let the API call surface any schema errors when it is actually required? We can make it clear in the documentation when os_profile will be needed. Does anyone have another solution?
@nbering We didn't necessarily need attach for production purposes, but we needed to do some testing, and it came in handy to be able to attach. I wouldn't want to limit our ability to use Terraform in this manner.
Running into the same issue. @pearcec I would say drop the os_profile requirement from terraform and let the Azure API give the error. That seems the least complicated solution to me.
@erikvdbergh Agreed, I am taking my patch and submitting it shortly. I just want to update the documentation and run the azurerm tests.
Closed via #14176
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.