0.10.4
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[0]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[1]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[0]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[1]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_2[0]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_2[1]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_3[0]
+ module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_3[1]
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[0]: Creating...
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[1]: Creating...
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[1]: Creation complete after 1m1s (ID: /subscriptions/02a2bcea-0861-437c-9a66-...02/dataDisks/usw90cavesd1602_datadisk0)
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0[0]: Creation complete after 1m36s (ID: /subscriptions/02a2bcea-0861-437c-9a66-...01/dataDisks/usw90cavesd1601_datadisk0)
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[0]: Creating...
module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[1]: Creating...
* module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[1]: 1 error(s) occurred:
* azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1.1: Error updating Virtual Machine "usw90cavesd1602" (Resource Group "usw90cavelastic16_rg") with Disk "usw90cavesd1602_datadisk1": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 6309.'."
* module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[0]: 1 error(s) occurred:
* azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1.0: Error updating Virtual Machine "usw90cavesd1601" (Resource Group "usw90cavelastic16_rg") with Disk "usw90cavesd1601_datadisk1": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 6309.'."
I am trying to create 2 VMs with 4 data disks each and attach the disks to the VMs.
All 4 disks should be attached successfully to each VM.
Both VMs get the 1st disk attached successfully, but the remaining disks fail with the following error:
* module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[1]: 1 error(s) occurred:
* azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1.1: Error updating Virtual Machine "usw90cavesd1602" (Resource Group "usw90cavelastic16_rg") with Disk "usw90cavesd1602_datadisk1": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 6309.'."
* module.elastic16.azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1[0]: 1 error(s) occurred:
terraform apply
Hi @sukhmeet
Thanks for opening this issue :)
So that we can take a look into this - is it possible for you to provide us with the Terraform configuration that you’re seeing this error with?
Thanks!
Hi Tom,
I destroyed the resources. I am reproducing it now and I will send you the details ASAP.
Do you want to see the code in the meantime?
Hi Tom,
Below is the relevant part of the code.
resource "azurerm_virtual_machine" "esd" {
availability_set_id = "${azurerm_availability_set.esd.id}"
count = "${var.count_esd}"
location = "${var.location}"
name = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}"
network_interface_ids = ["${element(azurerm_network_interface.esd.*.id, count.index)}"]
resource_group_name = "${azurerm_resource_group.elastic.name}"
vm_size = "${var.vm_size_esd}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
lifecycle {
prevent_destroy = "false"
ignore_changes = ["storage_image_reference", "os_profile"]
}
storage_image_reference {
id = "${data.azurerm_resource_group.images.id}/providers/Microsoft.Compute/images/${var.envid}${var.vm_image_name}"
}
storage_os_disk {
name = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_osdisk0"
managed_disk_type = "${var.os_managed_disk_type}"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}.${var.domain_name}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_linux_config {
disable_password_authentication = "false"
}
tags {
envid = "${var.envid}"
created_by = "${var.created_by}"
environment = "${var.environment}"
product = "${var.product}"
role = "${replace(var.role,"elastic","esd")}"
}
}
/* Add data disks to Elastic data nodes */

# data disk index 0
resource "azurerm_managed_disk" "datadisk0" {
  count                = "${var.datadisk_count_esd > 0 ? var.count_esd : 0}"
  name                 = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_datadisk0"
  location             = "${var.location}"
  resource_group_name  = "${azurerm_resource_group.elastic.name}"
  storage_account_type = "${var.data_managed_disk_type}"
  create_option        = "Empty"
  disk_size_gb         = "${var.data_disk_size_gb}"
}

resource "azurerm_virtual_machine_data_disk_attachment" "esddisk_attach_0" {
  count              = "${var.datadisk_count_esd > 0 ? var.count_esd : 0}"
  managed_disk_id    = "${element(azurerm_managed_disk.datadisk0.*.id, count.index)}"
  virtual_machine_id = "${element(azurerm_virtual_machine.esd.*.id, count.index)}"
  lun                = "0"
  caching            = "ReadWrite"
  depends_on         = ["azurerm_virtual_machine.esd", "azurerm_managed_disk.datadisk0"]
}
# data disk index 1
resource "azurerm_managed_disk" "datadisk1" {
  count                = "${var.datadisk_count_esd > 1 ? var.count_esd : 0}"
  name                 = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_datadisk1"
  location             = "${var.location}"
  resource_group_name  = "${azurerm_resource_group.elastic.name}"
  storage_account_type = "${var.data_managed_disk_type}"
  create_option        = "Empty"
  disk_size_gb         = "${var.data_disk_size_gb}"
}

resource "azurerm_virtual_machine_data_disk_attachment" "esddisk_attach_1" {
  count              = "${var.datadisk_count_esd > 1 ? var.count_esd : 0}"
  managed_disk_id    = "${azurerm_managed_disk.datadisk1.*.id[count.index]}"
  virtual_machine_id = "${azurerm_virtual_machine.esd.*.id[count.index]}"
  lun                = "1"
  caching            = "ReadWrite"
  depends_on         = ["azurerm_virtual_machine.esd", "azurerm_managed_disk.datadisk1", "azurerm_virtual_machine_data_disk_attachment.esddisk_attach_0"]
}
# data disk index 2
resource "azurerm_managed_disk" "datadisk2" {
  count                = "${var.datadisk_count_esd > 2 ? var.count_esd : 0}"
  name                 = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_datadisk2"
  location             = "${var.location}"
  resource_group_name  = "${azurerm_resource_group.elastic.name}"
  storage_account_type = "${var.data_managed_disk_type}"
  create_option        = "Empty"
  disk_size_gb         = "${var.data_disk_size_gb}"
}

resource "azurerm_virtual_machine_data_disk_attachment" "esddisk_attach_2" {
  count              = "${var.datadisk_count_esd > 2 ? var.count_esd : 0}"
  managed_disk_id    = "${element(azurerm_managed_disk.datadisk2.*.id, count.index)}"
  virtual_machine_id = "${element(azurerm_virtual_machine.esd.*.id, count.index)}"
  lun                = "2"
  caching            = "ReadWrite"
  depends_on         = ["azurerm_virtual_machine.esd", "azurerm_managed_disk.datadisk2", "azurerm_virtual_machine_data_disk_attachment.esddisk_attach_1"]
}
# data disk index 3
resource "azurerm_managed_disk" "datadisk3" {
  count                = "${var.datadisk_count_esd > 3 ? var.count_esd : 0}"
  name                 = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_datadisk3"
  location             = "${var.location}"
  resource_group_name  = "${azurerm_resource_group.elastic.name}"
  storage_account_type = "${var.data_managed_disk_type}"
  create_option        = "Empty"
  disk_size_gb         = "${var.data_disk_size_gb}"
}

resource "azurerm_virtual_machine_data_disk_attachment" "esddisk_attach_3" {
  count              = "${var.datadisk_count_esd > 3 ? var.count_esd : 0}"
  managed_disk_id    = "${element(azurerm_managed_disk.datadisk3.*.id, count.index)}"
  virtual_machine_id = "${element(azurerm_virtual_machine.esd.*.id, count.index)}"
  lun                = "3"
  caching            = "ReadWrite"
  depends_on         = ["azurerm_virtual_machine.esd", "azurerm_managed_disk.datadisk3", "azurerm_virtual_machine_data_disk_attachment.esddisk_attach_2"]
}
# data disk index 4
resource "azurerm_managed_disk" "datadisk4" {
  count                = "${var.datadisk_count_esd > 4 ? var.count_esd : 0}"
  name                 = "${var.envid}${var.appid}v${replace(var.role,"elastic","esd")}${format("%02d",count.index + 1)}_datadisk4"
  location             = "${var.location}"
  resource_group_name  = "${azurerm_resource_group.elastic.name}"
  storage_account_type = "${var.data_managed_disk_type}"
  create_option        = "Empty"
  disk_size_gb         = "${var.data_disk_size_gb}"
}

resource "azurerm_virtual_machine_data_disk_attachment" "esddisk_attach_4" {
  count              = "${var.datadisk_count_esd > 4 ? var.count_esd : 0}"
  managed_disk_id    = "${element(azurerm_managed_disk.datadisk4.*.id, count.index)}"
  virtual_machine_id = "${element(azurerm_virtual_machine.esd.*.id, count.index)}"
  lun                = "4"
  caching            = "ReadWrite"
  depends_on         = ["azurerm_virtual_machine.esd", "azurerm_managed_disk.datadisk4", "azurerm_virtual_machine_data_disk_attachment.esddisk_attach_3"]
}
@tombuildsstuff any updates on this? Is there any more information needed?
I am able to use the disk attachment, but upon deletion in Terraform I am getting the same error. I can get around it by using az group delete, but it would be a lot cleaner if it could be done in Terraform.
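For reference, the Azure CLI workaround mentioned above would look roughly like this (the resource group name is a placeholder, and note it deletes everything in the group):

az group delete --name <resource-group-name> --yes --no-wait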
hey @sukhmeet @pabowers
I've taken a quick look into this but have been unable to repro the issue directly. That said, digging into it I noticed this comment from @marstr in this issue. Based on that information I've pushed a branch, vm-data-disk-attachment - would you be able to test it (in a non-production environment) and see if this is still an issue for you? The Disk Attachment tests pass on this branch FWIW, so this seems fine.
Thanks!
I've opened #1855, which includes the fix from this branch, since the tests pass and that's the only inconsistency I can see between the Data Disk Attachment resource and the other VM interactions.
FWIW I was just hit by the same issue when adding a data disk through the azurerm_virtual_machine_data_disk_attachment resource type to existing VMs whose OS disks are created from VHD images (I tried, but gave up on, getting managed OS disks created from a Packer VHD to work).
storage_os_disk {
  name          = "vsts-disk"
  caching       = "ReadWrite"
  create_option = "FromImage"
  image_uri     = "${var.source_vhd_uri}"
  vhd_uri       = "${local.vhd_uri_base}-${count.index}.vhd"
  os_type       = "Windows"
}
* azurerm_virtual_machine_data_disk_attachment.data_disk.1: Error updating Virtual Machine "vsts-vm-1" (Resource Group "VSTS-Resources") with Disk "LargeDataDisk": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 5058.'."
Full config snippets:
resource "azurerm_virtual_machine" "vsts_vm" {
count = "${var.num_instances}"
name = "vsts-vm-${count.index}"
location = "${local.vm_loc}"
resource_group_name = "${local.vm_rg}"
network_interface_ids = ["${element(azurerm_network_interface.for_vm.*.id,count.index)}"]
vm_size = "${var.vm_size}"
delete_os_disk_on_termination = true
storage_os_disk {
name = "vsts-disk"
caching = "ReadWrite"
create_option = "FromImage"
image_uri = "${var.source_vhd_uri}"
vhd_uri = "${local.vhd_uri_base}-${count.index}.vhd"
os_type = "Windows"
}
os_profile {
computer_name = "${var.vm_computer_name_base}-${count.index}"
admin_username = "${var.admin_user}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = true
}
identity = {
type = "SystemAssigned"
}
tags = "${var.tags}"
lifecycle {
# It seems that terraform can't read out the old value of image_uri so it always sets it back to what it actually was
ignore_changes = [
"storage_os_disk.0.image_uri",
]
}
}
data "azurerm_managed_disk" "data_disk" {
name = "LargeDataDisk" # cf the scripts in the datadisk directory
resource_group_name = "${local.vm_rg}"
}
resource "azurerm_virtual_machine_data_disk_attachment" "data_disk" {
count = "${var.num_instances}"
managed_disk_id = "${data.azurerm_managed_disk.data_disk.id}"
virtual_machine_id = "${element(azurerm_virtual_machine.vsts_vm.*.id,count.index)}"
lun = "10"
caching = "ReadWrite"
}
I'm also encountering this issue. I'm attaching 16 disks. The problem affects attachments at random, and then I'm unable to terraform destroy the resource due to that error.
Would it matter if I changed the parallelism? When will v1.14 be available? I am using v1.13 but still seeing this problem.
@tombuildsstuff how do I specify my provider to use the vm-data-disk-attachment branch?
thanks
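(For anyone else wondering: Terraform 0.11-era providers have to be built locally and dropped into the user plugin directory. A rough sketch, assuming a Go toolchain, a Linux/macOS machine, and the repository checked out under your GOPATH; none of these paths are specific to this issue:)

cd $GOPATH/src/github.com/terraform-providers/terraform-provider-azurerm
git checkout vm-data-disk-attachment
go build -o terraform-provider-azurerm
mkdir -p ~/.terraform.d/plugins
cp terraform-provider-azurerm ~/.terraform.d/plugins/
# then re-run terraform init in the configuration directory so the local build is picked up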
hey @sukhmeet @pabowers @perbergland @stonefury
Just to let you know that this has been released in v1.14.0 of the AzureRM Provider which is now available: https://github.com/terraform-providers/terraform-provider-azurerm/blob/v1.14.0/CHANGELOG.md
Thanks!
Awesome @tombuildsstuff. Trying it right now. Really, thank you for pushing this out.
@tombuildsstuff I seem to still have the issue with attaching disks, and I just had my destroy operation fail with the same error. But re-running the plan cleaned up the ones that bombed, which is OK; at least a re-run cleaned it up.
So here is my take on the error seen when attaching data disks. I have perhaps a special case: I am spinning up VMs with SQL Server on them, and I am installing the SqlIaasExtension. It seems I must make this extension depend on my entire VM; only once the VM is created and all disks are attached should the extension be applied. My suspicion is that if the extension is applied while the data disks are still being attached, it creates this error condition. In my case I am attaching 15+ disks, so it takes quite a while and gives plenty of time for this to happen.
}

resource "azurerm_virtual_machine_extension" "AZW1WSQLSAS_sql_extension" {
  depends_on = ["module.AZW1WSQLSAS"]  # needed, it seems, to avoid the attach failure
  name       = "SqlIaasExtension"
  location   = "${local.workspace["location"]}"
  ....
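To flesh out that fragment, a hypothetical full version of the resource might look roughly like the following; the module outputs and the SQL IaaS publisher/type/version values are illustrative placeholders, not something taken from this issue:

resource "azurerm_virtual_machine_extension" "AZW1WSQLSAS_sql_extension" {
  # Hypothetical sketch: apply the SQL IaaS extension only after the module
  # that creates the VM and all of its data disk attachments has finished.
  depends_on = ["module.AZW1WSQLSAS"]

  name                 = "SqlIaasExtension"
  location             = "${local.workspace["location"]}"
  resource_group_name  = "${module.AZW1WSQLSAS.resource_group_name}" # placeholder output
  virtual_machine_name = "${module.AZW1WSQLSAS.vm_name}"             # placeholder output
  publisher            = "Microsoft.SqlServer.Management"            # illustrative value
  type                 = "SqlIaaSAgent"                              # illustrative value
  type_handler_version = "1.2"                                       # illustrative value
}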
I can put my code up on github soon so the problem can be reproduced.
I would really like to know how this works for everyone else, just to make sure I'm not making some stupid mistake. terraform --version does show me 1.14.0. :)
@tombuildsstuff
https://github.com/stonefury/terraform-azure-vm
Run the example.tf; it will fail. See what you think.
thanks
I'm having a similar problem to @stonefury: I am destroying a VM with a data disk attachment, I am now using 1.14.0 as well, and I still get the error on the first destroy.
I am also running an azurerm_virtual_machine_extension resource that pushes a PowerShell script via a template_file resource:
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"
The first pass at destroy fails; the second one works.
@jamespatetz - so I'm getting a stable deployment now, because I am not applying the extension until all the disks are attached. I am calling a module which creates my attachments, so I use depends_on = [] to ensure the extension is applied AFTER all disks are attached.
This is a subtle problem. Now, as for destroy, I'm not sure. I got a failure on destroy yesterday but not today. Maybe my dependency option also fixes that during destroy. Destroy is less of a concern for me though, I can deal with that if I just need to re-run destroy. My issue was a very hard-stuck problem with the VM where re-runs of apply failed every time.
So try my suggestion and let me know how it goes. See my github link above with the depends example if you haven't used this option before.
cheers
@tombuildsstuff I am facing an issue with destroying the disk.
Since this issue is closed, I opened another one: #1942.
This is happening on versions 1.12, 1.14 and 1.15 of the Terraform Azure provider. I skipped 1.13 to go to the latest.
@stonefury thanks for the repro - I've re-opened this for the moment
hey @stonefury
Thanks for the repro - I've opened #1950 which includes a fix for this (and an acceptance test now that we've been able to isolate it) - thanks for the help here :)
Thanks!
I compiled the provider using the commit in #1950, but I still get the error on first destroy
module.avm_sql.azurerm_virtual_machine_data_disk_attachment.avm-datadisk (destroy): 1 error(s) occurred:
azurerm_virtual_machine_data_disk_attachment.avm-datadisk: Error removing Disk "avm-55661-sql-data-disk1-dev-0-ae2d0c" from Virtual Machine "avm-55661-sql-dev-0-ae2d0c" (Resource Group "at288corp-55661-dev-591bc2"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 1567.'."
module.avm_web.azurerm_virtual_machine_data_disk_attachment.avm-datadisk (destroy): 1 error(s) occurred:
azurerm_virtual_machine_data_disk_attachment.avm-datadisk: Error removing Disk "avm-55661-web-data-disk1-dev-0-657ff8" from Virtual Machine "avm-55661-web-dev-0-657ff8" (Resource Group "at288corp-55661-dev-591bc2"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 1574.'."
Hi @tombuildsstuff & @JunyiYi
We are now getting this error as well.
azurerm_virtual_machine_data_disk_attachment.data_disk_attach.0: Error updating Virtual Machine "xyz" (Resource Group "xyz-Data") with Disk "xyz-datadisk-01": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 2924.'."
We had no luck removing the CustomScriptExtension and applying the attach disk resource. We are currently on v1.15 for the provider.
Is there any work around?
@jamespatetz @rohrerb, thanks for reporting it. It would be helpful if you could provide your HCL scripts here.
Hi @tombuildsstuff & @JunyiYi
We are now getting this error as well.
azurerm_virtual_machine_data_disk_attachment.data_disk_attach.0: Error updating Virtual Machine "xyz" (Resource Group "xyz-Data") with Disk "xyz-datadisk-01": compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestContent" Message="The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 2924.'."
We had no luck removing the CustomScriptExtension and applying the attach disk resource. We are currently on v1.15 for the provider.
Is there any work around?
You removed the CustomScriptExtension from the VM using the web portal, right? It has to be removed from the VM by hand, and then taken out of the Terraform plan, before applying the disk attachments. At least that's been my consistent experience.
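(For what it's worth, the extension can also be removed from the Azure CLI instead of the portal; the resource group and VM names are placeholders:)

az vm extension delete --resource-group <resource-group> --vm-name <vm-name> --name CustomScriptExtension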
@rohrerb @stonefury can confirm if upgrading to v1.16 of the Provider fixes this for you?
Thanks!
@tombuildsstuff using 1.16 is working for me; I tested several times and did not get the error again.
Thanks!
I deployed a VM with 1 attached disk, manually added an extension (I just picked the Datadog agent at random), then updated the Terraform config to add a second disk.
The behavior is that it hangs on "Still destroying..."/"Still creating..."; it's been running for 15 minutes.
@tombuildsstuff using 1.16 is working for me; I tested several times and did not get the error again.
Thanks!
Could you describe your test, @jamespatetz?
Hi @tombuildsstuff, we were able to attach disks with 1.16. Thank you for helping get this resolved.
I guess this ticket was about multiple disk attachments; I never had that problem. My issue is strictly related to adding disks after an extension has been added to a VM. Has anyone else tested this scenario?
Result: just hangs forever.
@rohrerb @jamespatetz great, thanks for confirming that 👍
@stonefury thanks for confirming - since this is a separate issue (which is related to, but different to this issue); would you mind opening a new issue specifically for that?
Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!