Terraform v0.11.3
provider "azurerm" {
  version         = ">= 1.1.1"
  subscription_id = "x"
  client_id       = "x"
  client_secret   = "x"
  tenant_id       = "x"
}

resource "azurerm_resource_group" "test_rg" {
  name     = "test-rg"
  location = "Brazil South"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-test"
  location            = "Brazil South"
  address_space       = ["10.1.0.0/16"]
  resource_group_name = "${azurerm_resource_group.test_rg.name}"
}

resource "azurerm_subnet" "subnet" {
  name                 = "subnet-test"
  resource_group_name  = "test-rg"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.1.1.0/24"
}

resource "azurerm_network_interface" "test_ni" {
  name                = "test-ni"
  location            = "Brazil South"
  resource_group_name = "${azurerm_resource_group.test_rg.name}"

  ip_configuration {
    name                          = "test-config"
    private_ip_address_allocation = "dynamic"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
  }
}

resource "azurerm_virtual_machine" "teste_instance" {
  name                             = "test-virtual-machine"
  location                         = "Brazil South"
  resource_group_name              = "${azurerm_resource_group.test_rg.name}"
  network_interface_ids            = ["${azurerm_network_interface.test_ni.id}"]
  vm_size                          = "Standard_DS2_V2"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = false

  os_profile {
    computer_name  = "teste-computer"
    admin_username = "testUser"
    admin_password = "Test1234@Abcd"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_os_disk {
    name          = "test-local-disk"
    image_uri     = "https://xxxxxxxx.blob.core.windows.net/system/Microsoft.Compute/Images/images/test.vhd"
    vhd_uri       = "https://xxxxxxxx.blob.core.windows.net/vhds/test.vhd"
    create_option = "FromImage"
    os_type       = "Linux"
  }
}
When you execute `terraform apply`, Terraform correctly previews the `storage_os_disk.0.image_uri` state:

storage_os_disk.0.image_uri: "https://xxxx.blob.core.windows.net/system/Microsoft.Compute/Images/images/xxxx.vhd"

Then you confirm with yes and all the resources are created properly.
When you then execute `terraform show`, `storage_os_disk.0.image_uri` should be listed with the desired state. Instead, it is empty:

storage_os_disk.0.image_uri =

This means that future executions of `terraform apply` will try to update the state, even though there is nothing to update.
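One quick way to detect this drift in scripts is to look for the symptom directly in the state file. A rough sketch, assuming the flattened attribute names that Terraform 0.11-era state files use; `check_empty_image_uri` is a hypothetical helper, not part of the report:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report whether a state file contains an empty
# storage_os_disk.0.image_uri attribute (the symptom described above).
# Assumes the flattened key layout of Terraform 0.11 state files.
check_empty_image_uri() {
  local state_file=$1
  if grep -q '"storage_os_disk.0.image_uri": *""' "$state_file"; then
    echo "empty image_uri found in $state_file"
  else
    echo "image_uri looks intact in $state_file"
  fi
}
```

Running it against `terraform.tfstate` after the apply described above would flag the empty attribute.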
Create a Terraform file like the example above, then run `terraform apply` followed by `terraform show`.

I am also seeing this behavior.
Observed also in 0.11.5 with azurerm v1.3.0
I'm also experiencing this behaviour, which is annoying and complicates my life big time.
Is there any hack that prevents the VM rebuild?
@jakubigla
You can just mark the property as ignored:
resource "azurerm_virtual_machine" "virtual_machine" {
  ...

  lifecycle {
    ignore_changes = [
      // Terraform tries to change the empty storage_os_disk.0.image_uri on every provisioning.
      // To avoid this, we simply ignore changes to this attribute when diffing.
      "storage_os_disk",
    ]
  }
}
Amazing, thanks for the quick answer. This is exactly the type of hack I expected.
But then I experienced the following limitation:
https://social.msdn.microsoft.com/Forums/en-US/6a205f68-cfe3-4a90-9e76-828fc884c37a/arm-create-vm-from-custom-image-error-due-to-source-and-destination-storage-accounts-for-disk?forum=WAVirtualMachinesforWindows
Haha, now my journey with Packer may be over very soon.
Imagine you have a hundred subscriptions and you make a small change to the golden image.
In our current setup we reprovision all VMs whenever the golden image changes. AFAIK there's no way to do that without destroying the VMs.
We use a script like this for rolling updates:

#!/usr/bin/env bash
set -eu

setup-remote-state.sh prod

VM_COUNT=${1:-20}
echo "Number of VMs in the cluster: $VM_COUNT"

# Bump the number of VMs temporarily so the number of available VMs stays the same.
# This avoids affecting currently running jobs.
BUMPED_VM_COUNT=$((VM_COUNT + 1))
terraform apply -var-file=env-prod.tfvars -var vm_count=$BUMPED_VM_COUNT

LAST_VM_INDEX=$((VM_COUNT - 1))
for i in $(seq 0 ${LAST_VM_INDEX}); do
  echo "azurerm_virtual_machine.virtual_machine.$i"
  terraform taint "azurerm_virtual_machine.virtual_machine.$i"
  terraform apply -var-file=env-prod.tfvars -var vm_count=${VM_COUNT}
done

# Decrease the number of VMs back to the original count.
terraform apply -var-file=env-prod.tfvars -var vm_count=${VM_COUNT}
echo "Done."
Obviously, it can be modified to taint and reprovision VMs in batches if you have a lot of machines.
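The batching variant could be sketched like this, with only the index arithmetic shown and the actual terraform commands left as comments; `plan_batches` and the resource name are illustrative, mirroring the script above:

```shell
#!/usr/bin/env bash
set -eu

# Sketch: taint VMs in batches of batch_size and run one apply per batch,
# instead of one apply per VM as in the script above.
plan_batches() {
  local vm_count=$1 batch_size=$2
  local start=0 end i
  while [ "$start" -lt "$vm_count" ]; do
    end=$((start + batch_size - 1))
    if [ "$end" -ge "$vm_count" ]; then
      end=$((vm_count - 1))
    fi
    for i in $(seq "$start" "$end"); do
      echo "taint azurerm_virtual_machine.virtual_machine.$i"
      # terraform taint "azurerm_virtual_machine.virtual_machine.$i"
    done
    # terraform apply -var-file=env-prod.tfvars -var vm_count=$vm_count
    start=$((end + 1))
  done
}

# Example: 5 VMs reprovisioned in batches of 2 (batches 0-1, 2-3, 4).
plan_batches 5 2
```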
That's fine.
The issue is that to deploy a VM from a custom image, the source image and the destination disk need to live in the same storage account.
In an enterprise setting, where you have 100 subscriptions and one master golden image, you would have to copy the master image to 100 storage accounts before it can be used.
Sorry, I hadn't read the link properly.
Yes, that's a known limitation with custom images: the golden image must be in the same storage account where you keep the VMs.
We currently copy golden image between accounts and try to keep its size to minimum.
An alternative would be to use standard image and install everything via custom script extension.
But that could be slow and very flaky :( Copying the image between accounts is not that bad in that case.
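For reference, the custom-script alternative mentioned above would look roughly like this in the 0.11-era syntax. This is a hedged sketch, not a tested configuration: the resource name `bootstrap` and the command are made up for illustration, and it reuses the resource names from the example at the top of the thread.

```hcl
resource "azurerm_virtual_machine_extension" "bootstrap" {
  name                 = "bootstrap"
  location             = "Brazil South"
  resource_group_name  = "${azurerm_resource_group.test_rg.name}"
  virtual_machine_name = "${azurerm_virtual_machine.teste_instance.name}"
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  settings = <<SETTINGS
{
  "commandToExecute": "bash /tmp/bootstrap.sh"
}
SETTINGS
}
```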
A better approach would be to use Docker containers, but that depends on your use-case, obviously.
Yeah, copying is not the worst option (and 100x better than a standard image with a custom script / Ansible).
And yeah, Docker is a completely different story :)
Also, I'm quite new to Azure. Do you have a single storage account per subscription? Or one per resource group? I'm asking how you logically keep things isolated (if there's a need for that).
Let's say I have a dev who has access only to RG1 and another dev who can only access RG2. How would they access the golden image for their new VMs?
Hi Jakub. In my case I have one golden image per subscription/storage account. We don't need to keep all VMs at the same level, as Dmitry mentioned, so I periodically create a new image (just to shorten deploys).
@jakubigla In our case we structure things like this:
In our case one team owns all the clusters, so there's no requirement to have separate storage accounts per team.
Thanks guys, this was very very helpful.
@dstori is this still an issue for you?
@metacpp To be honest, the project requirements changed (as usual), so we are not using Azure for now. I will return to this subject later.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!