Terraform Version: Terraform v0.9.10
Affected Resource(s): azurerm_virtual_machine
provider "azurerm" {
# use environment variables
}
resource "azurerm_virtual_machine" "example" {
name = "example-delete-me"
location = "West US 2"
resource_group_name = "${azurerm_resource_group.example.name}"
network_interface_ids = ["${azurerm_network_interface.example.id}"]
vm_size = "Basic_A0"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "example-os_disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "example"
admin_username = "example"
admin_password = "Gu8da3Aeshiefee0CHANGED"
custom_data = <<-EOF
#!/bin/bash
echo THIS_CHANGED
EOF
}
os_profile_linux_config {
disable_password_authentication = false
}
delete_os_disk_on_termination = true
}
resource "azurerm_resource_group" "example" {
name = "example-delete-me"
location = "West US 2"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.42.0.0/16"]
location = "West US 2"
resource_group_name = "${azurerm_resource_group.example.name}"
}
resource "azurerm_subnet" "example" {
name = "example-subnet_default"
resource_group_name = "${azurerm_resource_group.example.name}"
virtual_network_name = "${azurerm_virtual_network.example.name}"
address_prefix = "10.42.2.0/24"
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = "West US 2"
resource_group_name = "${azurerm_resource_group.example.name}"
ip_configuration {
name = "example-nic-ip_configuration"
subnet_id = "${azurerm_subnet.example.id}"
private_ip_address_allocation = "dynamic"
}
}
Not including log output here because it doesn't seem critical to this issue and I'm concerned about secrets leakage. (For example, the TF_LOG=TRACE output contains an Authorization: Bearer ... header. Is that safe?)
After the HCL above is applied, if any attribute, including azurerm_virtual_machine.os_profile.custom_data or azurerm_virtual_machine.os_profile.admin_password, is changed, then re-planning should identify the changes made and trigger a destroy/re-create of the VM.
In practice, terraform plan reports that it "did not detect any differences", even though changes were made:
diff --git a/test-case.tf.original b/test-case.new.tf
index 7ed0377..525fea6 100644
--- a/test-case.tf.original
+++ b/test-case.new.tf
@@ -26,10 +26,10 @@ resource "azurerm_virtual_machine" "example" {
   os_profile {
     computer_name  = "example"
     admin_username = "example"
-    admin_password = "Gu8da3Aeshiefee0"
+    admin_password = "Gu8da3Aeshiefee0CHANGED"
     custom_data    = <<-EOF
       #!/bin/bash
-      echo foo
+      echo THIS_CHANGED
       EOF
   }
Changes to some other attributes, such as .os_profile.admin_username, did result in the expected destroy/re-create plan.
Steps to reproduce:
1. terraform apply with the HCL above
2. terraform plan and observe "did not detect any differences"

When I use terraform apply -refresh=false it detects the diff. Then I get an Azure error:

Status=409 Code="PropertyChangeNotAllowed" Message="Changing property 'customData' is not allowed."

Should ForceNew: true be added to the os_profile schema?
I'm getting the same issue in 0.10.0 with the latest azurerm provider
Hey @bsilverthorn

Thanks for opening this issue - apologies for the delayed response on this.

Taking a look into this - the problem is that the custom_data value isn't included in the hash associated with the os_profile Set. I've investigated adding it quickly, and there's an issue where any change made to the os_profile object at all forces a recreation, because the Hostname field believes it's changed.

Instead, I think we should fix this by converting os_profile to use a List, which would solve this and allow changes to the fields to be detected as expected; however, this is a slightly larger task.

In the interim I'm going to merge #211, since making the custom_data field ForceNew is a valid change at this point (even if changes won't be detected with a refresh) - however I'll leave this issue open until the os_profile block has been converted over to a List and changes to custom_data are detected, and will pick this up in the near future.

Thanks!
+1 for fixing this. Hope the SDK will be fixed soon.
+1
+1
Hello Azure Terraform Community,

just today I stumbled onto this issue. Earlier this month I reported to my account manager at Microsoft that scale sets do not recognize changes in the user data as a reason to roll out new instances. That's why I switched to bare VMs for now, only to find out that they don't recognize changes to the user data at all :rofl:
I have exactly the same issue as the reporter of this issue: any changes to custom_data are ignored by Terraform / Azure.
The expected behavior would be (as with other cloud providers) that the VM gets re-provisioned once the user data has changed.
Are there any updates on this issue?
Many thanks
Jakob
Any updates on this issue?
Any updates? This is really annoying.
hi @holderbaum @calvix

This behaviour is for historical reasons: VMs and VM Scale Sets previously didn't consider Custom Data to be updatable, and for a long time this information wasn't returned at all - as such it's ignored by Terraform. We have some plans to make major changes to the azurerm_virtual_machine and azurerm_virtual_machine_scale_set resources in the future to fix this issue (and a bunch more) - but they require some more thought before proceeding (for example, should we split the VM resource into one for Linux and one for Windows - to handle the name validation requirements being substantially different, or the fact that SSH Keys can't be used on Windows, etc).

Whilst unfortunately I'm unable to give a timeframe for this - it's something we're thinking about (and we will post an update when we have more information). Although I realise this isn't an ideal solution, it's possible to work around this in the interim by tainting the Virtual Machine / Scale Set using terraform taint azurerm_virtual_machine.test.

Thanks!
I am getting this issue as well. It affects some of the items in the profile but not all, and unfortunately it affects the more commonly updated items like the password. In my case a change to the ssh_keys key_data is not detected; I tested the password mentioned here and it is also an issue. The short-term fix of forcing a new resource is terrible - I don't want to rebuild a VM when I'm applying a new public key.
I'm running into the same issue. It's fair to assume that ssh_keys and custom_data will be updated frequently, so this is a major bug from my perspective. It should at least be mentioned in the documentation.
any plans to fix this soon?
@zepptron we're planning on fixing this as a part of the new Virtual Machine/Virtual Machine Scale Set resources which will form part of v2.0 (more info can be found here) - which we're working towards at the moment.
hi @bsilverthorn @romlinch @josmo @r7vme @brodriguesneto @calvix @holderbaum @warrenackerman @svetozar02 @zepptron
We're currently working on version 2.0 of the Azure Provider which we previously announced in #2807.
As a part of this we're introducing five new resources which will supersede the existing azurerm_virtual_machine and azurerm_virtual_machine_scale_set resources:
azurerm_linux_virtual_machine
azurerm_linux_virtual_machine_scale_set
azurerm_virtual_machine_scale_set_extension
azurerm_windows_virtual_machine
azurerm_windows_virtual_machine_scale_set
We recently opened #5550 which adds support for the new Virtual Machine resources - and I'm able to confirm that this is fixed in the new Virtual Machine resources - however unfortunately we have no plans to backport this to the existing azurerm_virtual_machine resource.
In order to get feedback on these new resources we'll be launching support for these new resources as an opt-in Beta in an upcoming 1.x release of the Azure Provider and ultimately release these as "GA" in the upcoming 2.0 release. We'll post an update in #2807 when both the opt-in Beta (1.x) & GA (2.0) are available - as such I'd recommend subscribing to that issue for updates.
This issue's been assigned to the milestone "2.0" since this is where this will ship - however (due to the way that closing GitHub Issues from PRs works, to be able to track this back for future users) this issue will be closed once the first of the new resources has been merged.
Thanks!
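For anyone planning the move, here's a minimal, illustrative sketch of how the example at the top of this issue might look on the new azurerm_linux_virtual_machine resource. This isn't an official migration guide: the VM size is a placeholder, and the resource group / network interface references assume the supporting resources from the original configuration above. The point is that custom_data lives directly on the new resource, and a change to it is detected at plan time and forces replacement:

resource "azurerm_linux_virtual_machine" "example" {
  name                            = "example-delete-me"
  resource_group_name             = azurerm_resource_group.example.name
  location                        = azurerm_resource_group.example.location
  size                            = "Standard_B1s"
  admin_username                  = "example"
  admin_password                  = "Gu8da3Aeshiefee0CHANGED"
  disable_password_authentication = false

  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  # Unlike the legacy azurerm_virtual_machine, a change to custom_data here
  # is detected by terraform plan and forces the VM to be replaced.
  custom_data = base64encode("#!/bin/bash\necho THIS_CHANGED\n")
}

Note that custom_data on the new resource expects a Base64-encoded value, hence the base64encode() call.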
This has been released in version 2.0.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:
provider "azurerm" {
version = "~> 2.0.0"
}
# ... other configuration ...
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!