Azure allows VMs to be booted with managed data disks pre-attached/attached-on-boot. This enables use cases where cloud-init and/or other "on-launch" configuration management tooling is able to prepare them for use as part of the initialisation process.
This provider currently only supports this case for individual VMs with the older, deprecated azurerm_virtual_machine resource. The new azurerm_linux_virtual_machine and azurerm_windows_virtual_machine resources instead opt to push users towards the separate azurerm_virtual_machine_data_disk_attachment which only attaches data disks to an existing VM post-boot, which fails to service the use case laid out above.
This is in contrast to the respective *_scale_set providers which (albeit out of necessity) support this behaviour.
Please could a repeatable data_disk block be added to the new VM resources (analogous to the same block in their scale_set counterparts) in order to allow VMs to be started with managed data disks pre-attached.
Thanks! 😁
e.g. for `azurerm_linux_virtual_machine` (and equivalently `azurerm_windows_virtual_machine`):

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # [...]

  os_disk {
    name                 = "example-os"
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
  }

  data_disk {
    name                 = "example-data"
    caching              = "ReadWrite"
    disk_size_gb         = 4096
    lun                  = 0
    storage_account_type = "StandardSSD_LRS"
  }

  # [...]
}
```
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-manage-disks#create-and-attach-disks
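To illustrate the use case, here's a hypothetical sketch of how the requested `data_disk` block could pair with cloud-init to format and mount the disk on first boot. Note the `data_disk` block below is the *proposed* schema, not something the provider supports today; the `/dev/disk/azure/scsi1/lun0` path is the stable symlink Azure's udev rules create in Linux guests for the disk at LUN 0:

```hcl
# Hypothetical: assumes the requested data_disk block exists on the new
# azurerm_linux_virtual_machine resource.
resource "azurerm_linux_virtual_machine" "example" {
  # [...] other required arguments elided

  data_disk {
    name                 = "example-data"
    caching              = "ReadWrite"
    disk_size_gb         = 4096
    lun                  = 0
    storage_account_type = "StandardSSD_LRS"
  }

  # cloud-init runs on first boot, so the disk must already be attached
  # for disk_setup/fs_setup/mounts to see it - hence this feature request.
  custom_data = base64encode(<<-EOF
    #cloud-config
    disk_setup:
      /dev/disk/azure/scsi1/lun0:
        table_type: gpt
        layout: true
        overwrite: false
    fs_setup:
      - device: /dev/disk/azure/scsi1/lun0
        partition: 1
        filesystem: ext4
    mounts:
      - [/dev/disk/azure/scsi1/lun0-part1, /data, ext4, "defaults,nofail"]
    EOF
  )
}
```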
> azurerm_virtual_machine_data_disk_attachment which only attaches data disks to an existing VM post-boot
Oh, that is really unfortunate... I wish I could try this but I'm not even able to create a managed disk due to https://github.com/terraform-providers/terraform-provider-azurerm/issues/6029
If I'm following this thread correctly (we are still using the legacy disk system and were looking to move over), can you not deploy VMs with disks already attached? Is it truly rebooting VMs for each disk (per the thread in #6314 above)? This feels like a HUGE step backwards, especially if the legacy mode we are using is being deprecated.
Also, how do you deploy and configure a data disk that is part of the source reference image if the data disk block is no longer valid?
@lightdrive, I've worked around it by using Ansible: https://github.com/rgl/terraform-ansible-azure-vagrant
This is something I just ran across as well, I'd like to be able to use cloud-init to configure the disks. Any news on a resolution?
This item is next on my list, though no ETA yet, sorry. I'll link it to a milestone once I've had a chance to size and scope it.
It seems that the work done by @jackofallops has been closed with a note that it needs to be implemented in a different way.
Does anyone have a possible work-around for this?
My use-case is the same as others have pointed out: writing my own scripts for this instead of using cloud-init seems like a waste.
Using the workaround mentioned in https://github.com/terraform-providers/terraform-provider-azurerm/issues/6074#issuecomment-626523919 might be possible, but it seems too hacky indeed, and would require some large changes to how resources are created.
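For anyone asking about work-arounds: the attachment-resource approach can be sketched with resources that exist in the provider today. The caveat, as discussed above, is that the disk is attached *post-boot*, so cloud-init will already have run by the time it appears - it does not satisfy the attach-on-boot use case:

```hcl
# Workaround sketch using the currently supported resources. The disk is
# created separately and attached after the VM boots, so first-boot
# tooling (cloud-init etc.) cannot see it.
resource "azurerm_managed_disk" "example" {
  name                 = "example-data"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "StandardSSD_LRS"
  create_option        = "Empty"
  disk_size_gb         = 4096
}

resource "azurerm_virtual_machine_data_disk_attachment" "example" {
  managed_disk_id    = azurerm_managed_disk.example.id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = 0
  caching            = "ReadWrite"
}
```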
Alas, I was really looking forward to an official fix for this. 🙁
In lieu of that, however, here's what I came up with about six months ago when I had no option but to _make_ this work, at minimum for newly booted VMs (note: this has not been tested with changes to, or replacements of, the disks - literally just booting new VMs). I'm also not really a Go person, so this is definitely a hack and nothing even approaching a "good" solution, much less sane contents for a PR. Be warned that whatever state it generates is almost certainly destined to be incompatible with whatever shape the official implementation yields, should it ever land. But on the off chance it proves useful in some capacity, or simply provides the embers to spark someone else's imagination, here's the horrible change I made to allow booting VMs with disks attached such that cloud-init could run correctly: https://github.com/terraform-providers/terraform-provider-azurerm/commit/6e19897658bb5b79418231ca1c004fde83698b40.
Usage:
```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # [...]

  data_disk {
    name                 = "example-data"
    caching              = "ReadWrite"
    disk_size_gb         = 320
    lun                  = 0
    storage_account_type = "StandardSSD_LRS"
  }

  # [...]
}
```
@mal FWIW this is being worked on, however the edge-cases make this more complicated than it appears - in particular we're trying to avoid several limitations from the older VM resources, which is why this isn't being lifted over 1:1 and is taking longer here.
Thanks for the insight @tombuildsstuff, great to know it's still being actively worked on. I put that commit out there in response to the request for possible work-arounds, in case it was useful to someone who finds themself in the position I was in previously, where waiting for something to cover all the cases wasn't an option. Please don't take it as any kind of slight or indictment of the ongoing efforts; I definitely support an official solution covering all the cases. In my case it just wasn't possible to wait for it, but I'll be first in line to move my definitions over to it when it does land. 😁