Terraform-provider-azurerm: azurerm_shared_image_version infinite applies

Created on 29 May 2019 · 4 Comments · Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.11.14

Affected Resource(s)

  • azurerm_shared_image_version

Terraform Configuration Files

resource "azurerm_shared_image_version" "version" {
  count               = "${var.create ? 1 : 0}"
  name                = "${var.global_image_version}"
  gallery_name        = "${var.gallery_name}"
  image_name          = "${var.image_name}"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  managed_image_id    = "${var.managed_image_id}"

  target_region {
    name                   = "Australia East"
    regional_replica_count = 1
  }


  target_region {
    name                   = "Canada Central"
    regional_replica_count = 1
  }

  target_region {
    name                   = "East US 2"
    regional_replica_count = 1
  }

  target_region {
    name                   = "UK South"
    regional_replica_count = 1
  }

  target_region {
    name                   = "West US"
    regional_replica_count = 1
  }

  target_region {
    name                   = "West US 2"
    regional_replica_count = 1
  }

  target_region {
    name                   = "West Europe"
    regional_replica_count = 1
  }
}

Expected Behavior

A plan after the resource is applied should not show changes.

Actual Behavior

If you apply an azurerm_shared_image_version with more than a few target_region blocks, every subsequent plan will continue to prompt for changes. Reordering the blocks alphabetically had no effect.

First apply:

+ module.version-windows-com.azurerm_shared_image_version.version
      id:                                              <computed>
      exclude_from_latest:                             "false"
      gallery_name:                                    "global_image_gallery"
      image_name:                                      "Windows"
      location:                                        "uksouth"
      managed_image_id:                                "/subscriptions/417ff278-xxxx-xxxx-xxxx-905364f236a7/resourceGroups/Global/providers/Microsoft.Compute/images/2019-Datacenter-base-2019_05_28_19_30"
      name:                                            "1.0.0"
      resource_group_name:                             "Global"
      tags.%:                                          <computed>
      target_region.#:                                 "7"
      target_region.1157418205.name:                   "australiaeast"
      target_region.1157418205.regional_replica_count: "1"
      target_region.1533331174.name:                   "westus2"
      target_region.1533331174.regional_replica_count: "1"
      target_region.3269128100.name:                   "eastus2"
      target_region.3269128100.regional_replica_count: "1"
      target_region.3572295013.name:                   "canadacentral"
      target_region.3572295013.regional_replica_count: "1"
      target_region.4212816651.name:                   "westus"
      target_region.4212816651.regional_replica_count: "1"
      target_region.467875310.name:                    "westeurope"
      target_region.467875310.regional_replica_count:  "1"
      target_region.60231537.name:                     "uksouth"
      target_region.60231537.regional_replica_count:   "1"

The post-apply plan shows the following, no matter how many times you apply:

~ module.version-windows-com.azurerm_shared_image_version.version
      target_region.1157418205.name:                   "" => "australiaeast"
      target_region.1157418205.regional_replica_count: "" => "1"
      target_region.1533331174.name:                   "" => "westus2"
      target_region.1533331174.regional_replica_count: "" => "1"
      target_region.1725755731.name:                   "westeurope" => ""
      target_region.1725755731.regional_replica_count: "1" => "0"
      target_region.19712036.name:                     "canadacentral" => ""
      target_region.19712036.regional_replica_count:   "1" => "0"
      target_region.3269128100.name:                   "" => "eastus2"
      target_region.3269128100.regional_replica_count: "" => "1"
      target_region.338180377.name:                    "westus2" => ""
      target_region.338180377.regional_replica_count:  "1" => "0"
      target_region.3515613543.name:                   "eastus2" => ""
      target_region.3515613543.regional_replica_count: "1" => "0"
      target_region.3572295013.name:                   "" => "canadacentral"
      target_region.3572295013.regional_replica_count: "" => "1"
      target_region.4098476443.name:                   "westus" => ""
      target_region.4098476443.regional_replica_count: "1" => "0"
      target_region.4212816651.name:                   "" => "westus"
      target_region.4212816651.regional_replica_count: "" => "1"
      target_region.467875310.name:                    "" => "westeurope"
      target_region.467875310.regional_replica_count:  "" => "1"
      target_region.60231537.name:                     "" => "uksouth"
      target_region.60231537.regional_replica_count:   "" => "1"
      target_region.669605826.name:                    "australiaeast" => ""
      target_region.669605826.regional_replica_count:  "1" => "0"
      target_region.954806164.name:                    "uksouth" => ""
      target_region.954806164.regional_replica_count:  "1" => "0"
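
Until the provider's diff logic is fixed, one possible stop-gap (a sketch, not a fix: Terraform will then also ignore real replication changes) is to suppress the churn with ignore_changes, using the Terraform 0.11 string syntax:

resource "azurerm_shared_image_version" "version" {
  # ... arguments as above ...

  # Stop-gap: ignore the perpetually-diffing target_region set so repeated
  # plans stay clean. Replication changes must then be made out-of-band
  # (e.g. via the portal) or by temporarily removing this lifecycle block.
  lifecycle {
    ignore_changes = ["target_region"]
  }
}
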
Labels: bug, service/images, waiting-response


All 4 comments

Hitting a similar issue with this resource. My problem is that the azurerm_shared_image_version has to be re-created whenever a target_region for replication is added or removed. This can be done live via the portal (changing the replica count and replica regions), so why not with Terraform?

Order is preserved in my case and there are no infinite applies once the apply is done; the plan shows no changes until we actually modify the regions.

Example code and version output below:

Versions

Terraform v0.12.20

terraform-provider-azurerm_v2.8.0_x5

Terraform code

  "image_definitions": [
    {
      "name": "XX-Linux-CentOS-7",
      "os_type": "Linux",
      "hyper_v_generation": "V1",
      "publisher": "XXXX",
      "sku": "Standard",
      "zone_resilient": true,
      "size_gb": 30,
      "os_state": "Generalized",
      "caching": "ReadWrite",
      "versions": [
        {
          "name": "2020-5",
          "version_level": "1.0.0",
          "replication": {
            "westeurope": {
              "replica_count": 3
            },
            "northeurope": {
              "replica_count": 3
            }
          },
          "blob_uri": "https://.....vhd"
        }
      ]
    }
  ],
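
The local.images map that the resource below iterates over is not shown. A hypothetical reconstruction (the variable name image_definitions and the exact key renames are assumptions) that is consistent with the lookups below and with the plan address image_version["XX-Linux-CentOS-7--2020-5"] could look like:

locals {
  # Hypothetical flattening: one entry per image version, keyed
  # "<definition name>--<version name>" as seen in the plan output.
  images = {
    for pair in flatten([
      for def in var.image_definitions : [
        for v in def.versions : {
          key                 = "${def.name}--${v.name}"
          name                = def.name
          image_version_level = v.version_level
          image_replication   = v.replication
        }
      ]
    ]) : pair.key => pair
  }
}
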
resource "azurerm_shared_image_version" "image_version" {
  for_each = toset(keys(local.images))

  name                = lookup(local.images[each.key], "image_version_level", null)
  gallery_name        = azurerm_shared_image_gallery.image_gallery.name
  image_name          = azurerm_shared_image.image_definition[lookup(local.images[each.key], "name", null)].name
  managed_image_id    = azurerm_image.image[each.key].id
  exclude_from_latest = false
  tags                = var.tags

  resource_group_name = data.terraform_remote_state.resource_group.outputs.info["name"]
  location            = data.terraform_remote_state.resource_group.outputs.info["location"]

  dynamic "target_region" {
    iterator = target
    for_each = toset(keys(local.images[each.key]["image_replication"]))

    content {
      name                   = target.key
      regional_replica_count = lookup(local.images[each.key]["image_replication"][target.key], "replica_count", 1)
      storage_account_type   = "Standard_LRS"
    }
  }

  timeouts {
    create = "3h"
    update = "3h"
    delete = "3h"
  }
}
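
For reference, with the replication map shown above (and toset iterating keys in lexicographic order), the dynamic target_region block expands to the equivalent of:

target_region {
  name                   = "northeurope"
  regional_replica_count = 3
  storage_account_type   = "Standard_LRS"
}

target_region {
  name                   = "westeurope"
  regional_replica_count = 3
  storage_account_type   = "Standard_LRS"
}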

Changing the replica_count, for example:

          "replication": {
            "westeurope": {
              "replica_count": 3 -> 2
            },

results in this plan:

-/+ destroy and then create replacement

Terraform will perform the following actions:

  # azurerm_shared_image_version.image_version["XX-Linux-CentOS-7--2020-5"] must be replaced
-/+ resource "azurerm_shared_image_version" "image_version" {
        exclude_from_latest = false
        gallery_name        = "XXX"
      ~ id                  = "...XX-Linux-CentOS-7/versions/1.0.0" -> (known after apply)
        image_name          = "XX-Linux-CentOS-7"
        location            = "westeurope"
        managed_image_id    = "XXX"
        name                = "1.0.0"
        resource_group_name = "XXX"
        tags                = {
            "environment"      = "prod"
            "location"         = "westeurope"
            "service"          = "XXX"
            "service_location" = "XXX"
            "team_owner"       = "XXX"
        }

        target_region {
            name                   = "northeurope"
            regional_replica_count = 3
            storage_account_type   = "Standard_LRS"
        }
      + target_region { # forces replacement
          + name                   = "westeurope"
          + regional_replica_count = 2
          + storage_account_type   = "Standard_LRS"
        }
      - target_region { # forces replacement
          - name                   = "westeurope" -> null
          - regional_replica_count = 3 -> null
          - storage_account_type   = "Standard_LRS" -> null
        }

        timeouts {
            create = "3h"
            delete = "3h"
            update = "3h"
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

------------------------------------------------------------------------

Any advice on this? Re-creating an entire image version just to replicate into a new region is an ugly workaround: it can take more than a few hours and can cause service disruption, since new VMs cannot be deployed from that image version while it is being rebuilt.

@rohrerb, thanks for opening this issue. The issue seems to have gone away; I can no longer reproduce it with the config below and the latest azurerm provider. Could you try the config below with the latest azurerm to check whether the issue still exists? Thanks.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "neil-resources-si3"
  location = "eastus2"
}

resource "azurerm_virtual_network" "test" {
  name                = "batch-custom-img-vnet3"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
}

resource "azurerm_subnet" "test" {
  name                 = "internaltestsub3"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "test" {
  name                = "batch-custom-img-ip3"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  allocation_method   = "Dynamic"
  domain_name_label   = "batch-custom-img3"
}

resource "azurerm_network_interface" "test" {
  name                = "batch-custom-img-nic3"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name

  ip_configuration {
    name                          = "testconfigurationsource"
    subnet_id                     = azurerm_subnet.test.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.test.id
  }
}

resource "azurerm_storage_account" "test" {
  name                     = "batchcustomimgstore3"
  resource_group_name      = azurerm_resource_group.test.name
  location                 = azurerm_resource_group.test.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "test" {
  name                  = "vhds3"
  storage_account_name  = azurerm_storage_account.test.name
  container_access_type = "private"
}

resource "azurerm_virtual_machine" "test" {
  name                  = "batch-custom-img-vm3"
  location              = azurerm_resource_group.test.location
  resource_group_name   = azurerm_resource_group.test.name
  network_interface_ids = [azurerm_network_interface.test.id]
  vm_size               = "Standard_D1_v2"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name          = "myosdisk1"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/myosdisk1.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
    disk_size_gb  = "30"
  }

  os_profile {
    computer_name  = "batch-custom-img-vm"
    admin_username = "abuser"
    admin_password = "P@ssW0RD6543"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

resource "azurerm_image" "test" {
  name                = "batch-custom-img3"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = azurerm_virtual_machine.test.storage_os_disk.0.vhd_uri
    size_gb  = 30
    caching  = "None"
  }
}

resource "azurerm_shared_image_gallery" "test" {
  name                = "acctestsigtest3"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
}

resource "azurerm_shared_image" "test" {
  name                = "acctestimgtest3"
  gallery_name        = azurerm_shared_image_gallery.test.name
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  os_type             = "Linux"

  identifier {
    publisher = "AccTesPublisher1"
    offer     = "AccTesOffer1"
    sku       = "AccTesSku1"
  }
}

resource "azurerm_shared_image_version" "test" {
  name                = "0.0.5"
  gallery_name        = azurerm_shared_image_gallery.test.name
  image_name          = azurerm_shared_image.test.name
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  managed_image_id    = azurerm_image.test.id  

  target_region {
    name                   = "eastus2"
    regional_replica_count = 4
    storage_account_type   = "Standard_ZRS"
  }

  target_region {
    name                   = "westus2"
    regional_replica_count = 3
    storage_account_type   = "Standard_ZRS"
  }

  target_region {
    name                   = "westeurope"
    regional_replica_count = 2
    storage_account_type   = "Standard_ZRS"
  }

  target_region {
    name                   = "eastus"
    regional_replica_count = 4
    storage_account_type   = "Standard_ZRS"
  }
}
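
To check, a plan run immediately after the apply should report no changes, for example:

terraform init
terraform apply -auto-approve
terraform plan -detailed-exitcode   # exit code 0: no changes; 2: diff still present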

👋

Since we've not heard back here I'm going to close this issue for the moment - but please let us know if you're still seeing this when using the latest version of Terraform/the Azure Provider and we'll take another look.

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
