Terraform-provider-azurerm: azurerm - Terraform wants to recreate VM when attaching existing managed data disk

Created on 13 Jun 2017 · 9 Comments · Source: terraform-providers/terraform-provider-azurerm

_This issue was originally opened by @marratj as hashicorp/terraform#14268. It was migrated here as part of the provider split. The original body of the issue is below._


Terraform Version

0.9.4

Affected Resource(s)

azurerm_virtual_machine
azurerm_managed_disk

Terraform Configuration Files

resource "azurerm_managed_disk" "db1appdisk2" {
  name                 = "db1appdisk2"
  location             = "${azurerm_resource_group.testdemo.location}"
  resource_group_name  = "${azurerm_resource_group.testdemo.name}"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = "127"
}

resource "azurerm_virtual_machine" "db1" {
  name                  = "db1"
  location              = "${azurerm_resource_group.testdemo.location}"
  resource_group_name   = "${azurerm_resource_group.testdemo.name}"
  network_interface_ids = ["${azurerm_network_interface.db1nic1.id}"]
  vm_size               = "Standard_DS1_v2"

...

  storage_data_disk {
    name            = "${azurerm_managed_disk.db1appdisk2.name}"
    managed_disk_id = "${azurerm_managed_disk.db1appdisk2.id}"
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = "${azurerm_managed_disk.db1appdisk2.disk_size_gb}"
  }
}

Debug Output

Panic Output

Expected Behavior

The existing managed disk gets attached to the VM without the VM being recreated.

Actual Behavior

The VM gets destroyed and recreated from scratch.

When adding a managed disk with the "Empty" create option, however, it works as expected: the VM just gets reconfigured, not recreated.
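For comparison, here is a minimal sketch of the two storage_data_disk variants being contrasted. The "Empty" block is illustrative only and not part of the original configuration; its name, lun, and size are assumed.

  # Declaring a new, empty managed disk inline only reconfigures the VM in place:
  storage_data_disk {
    name              = "db1emptydisk"   # hypothetical disk name
    managed_disk_type = "Premium_LRS"
    create_option     = "Empty"
    lun               = 2
    disk_size_gb      = 127
  }

  # Attaching the pre-existing azurerm_managed_disk is what currently
  # forces the VM to be destroyed and recreated:
  storage_data_disk {
    name            = "${azurerm_managed_disk.db1appdisk2.name}"
    managed_disk_id = "${azurerm_managed_disk.db1appdisk2.id}"
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = "${azurerm_managed_disk.db1appdisk2.disk_size_gb}"
  }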

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

1. terraform plan
2. terraform apply

Important Factoids

References

bug service/disks

All 9 comments

Is there any update on that? In my opinion the simplest solution would be to have ForceNew: false instead of true when changing the managed_disk_id of storage_data_disk.

Or would this currently not be working due to Azure restrictions?

This fix is really important! I hope it will be done ASAP!

👋 hey @marratj

Thanks for opening this issue :)

I've taken a look into this issue and unfortunately it appears to be a limitation on Azure's end: it's not possible to update the ordering of disks through the API, and attempting to change either the managed_disk_id field or the lun field (to re-order the disks) causes the API to return an error. I also took a look into stopping the VM, modifying the disks, and then starting it again, but this doesn't appear to be a viable route either.

As such, I've raised an issue about this on the Azure SDK for Go repository to find out how this should be achieved, since I believe it should be possible (the portal allows it), and I will update when I've heard back.

Thanks!

@tombuildsstuff I noticed that the issue you opened at the Azure SDK repo was closed almost a month ago and that you removed your assignment from this issue 10 days ago. Does that mean this is now fixed? I could not find a related PR, but maybe I'm just overlooking something here...

And if it's not yet fixed, is there something in the works already?

@svanharmelen as I read it, it's not an upstream bug; rather, the operations have to be ordered in a certain manner, which @tombuildsstuff (or whoever picks up the bug) could then use to make this work correctly.

Though we would sure like this too, as it's hitting our site at present as well!

@tombuildsstuff This is not an upstream bug. I tested with the latest Go SDK to see whether the VM gets reprovisioned when attaching a new disk, and none of the other tools (ARM templates, the Azure CLI, ARM PowerShell, or the SDK) show this behaviour. It is only down to how Terraform handles the new data disk. As @marratj said, using ForceNew: false instead of true when changing the managed_disk_id of storage_data_disk may fix the issue, though it may not be the cleanest solution. This is really important, it is affecting us badly, and your help is highly appreciated.

👋 hey @marratj @3guboff @svanharmelen @imcdnzl

This has since been fixed, as the ForceNew behaviour has been resolved. In addition, I believe another solution to this will be available in the new Data Disk Attachment resource, which was requested in #795 and is being worked on in #1207.

Thanks!
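For anyone finding this thread later, here is a minimal sketch of how the data disk attachment resource mentioned above can be used to attach an existing managed disk without touching the VM resource. This assumes the azurerm_virtual_machine_data_disk_attachment resource introduced by the linked work; check the provider documentation for the exact schema.

resource "azurerm_virtual_machine_data_disk_attachment" "db1appdisk2" {
  managed_disk_id    = "${azurerm_managed_disk.db1appdisk2.id}"
  virtual_machine_id = "${azurerm_virtual_machine.db1.id}"
  lun                = 1
  caching            = "ReadWrite"
}

With this approach the storage_data_disk block is removed from the azurerm_virtual_machine resource, so attaching or detaching the disk no longer modifies the VM definition itself.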

Sounds good! Thanks for the update @tombuildsstuff!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!
