Terraform: Unable to destroy machines if data azurerm_shared_image_version has been removed since deployment

Created on 13 Aug 2020 · 5 comments · Source: hashicorp/terraform

The issue I experienced was trying to destroy a machine that was deployed with an image imported via data "azurerm_shared_image_version". The image definition was deleted between deployment and deletion as part of a clean-up exercise. I would not have expected Terraform to care whether the shared image gallery definition was still present at the time of destruction - it had already deployed the machine, so its job was done. But it seems that if you use a data source to import an image from a shared image gallery, that image also needs to be present at the time of deletion, even if the machine was deployed months ago.

Terraform Version

0.12.28

Terraform Configuration Files

data "azurerm_shared_image_version" "aaaa_bbbb_cccc_dddd" {
provider = azurerm.aaaa-sub

name = "latest"
image_name = "aaa-bbb-ccc-ddd"
gallery_name = "aaaaaaaaa"
resource_group_name = "rg-aaa-bbb-ccc-dddd"
}
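
For context, the data source's exported id attribute is what the machine resource consumes, roughly as in the minimal sketch below (the resource names, size, location, and credentials are placeholders, not the original configuration):

resource "azurerm_linux_virtual_machine" "aaaa" {
  name                = "vm-aaaa"
  resource_group_name = "rg-aaaa"
  location            = "westeurope"
  size                = "Standard_D2s_v3"
  admin_username      = "adminuser"

  # Network interface assumed to be defined elsewhere in the configuration.
  network_interface_ids = [azurerm_network_interface.aaaa.id]

  # The VM is built from the shared image version resolved by the data source.
  # Destroying the VM only needs its prior state, yet Terraform still reads
  # the data source when building the destroy plan (see the maintainer's
  # comment below), which fails once the image definition has been deleted.
  source_image_id = data.azurerm_shared_image_version.aaaa_bbbb_cccc_dddd.id

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }
}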

Debug Output

Refreshing Terraform state in-memory prior to plan...
<>
Error: A Version was not found for Shared Image "aaa-bbb-ccc-ddd" / Gallery "aaaaaaaaa" / Resource Group "g-aaa-bbb-ccc-dddd"

Expected Behavior

I would expect Terraform not to care whether a resource imported through a data source still exists when performing a destroy.

Actual Behavior

Terraform would not run a destroy plan because the image definition used to create the machine was no longer present.

Steps to Reproduce

1. Set up a shared image gallery, create a shared image definition, and populate it with image versions (see the sketch after this list).
2. Deploy a virtual machine or virtual machine scale set with the latest image from this gallery via a Terraform Cloud workspace.
3. Verify it deployed correctly.
4. Delete the image definition from the shared image gallery as a clean-up operation.
5. Destroy the infrastructure via the same Terraform Cloud workspace.
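
For step 1, the gallery and image definition can be sketched roughly as below. All names, locations, and identifier values are placeholders; in the reported setup the data source uses a provider alias (azurerm.aaaa-sub), so the gallery may well be managed in a different subscription or outside this configuration entirely.

resource "azurerm_resource_group" "gallery" {
  name     = "rg-example-gallery"
  location = "westeurope"
}

resource "azurerm_shared_image_gallery" "example" {
  name                = "examplegallery"
  resource_group_name = azurerm_resource_group.gallery.name
  location            = azurerm_resource_group.gallery.location
}

# Deleting this image definition in step 4 is what later breaks the
# data source lookup during the destroy plan in step 5.
resource "azurerm_shared_image" "example" {
  name                = "example-image-definition"
  gallery_name        = azurerm_shared_image_gallery.example.name
  resource_group_name = azurerm_resource_group.gallery.name
  location            = azurerm_resource_group.gallery.location
  os_type             = "Linux"

  identifier {
    publisher = "ExamplePublisher"
    offer     = "ExampleOffer"
    sku       = "ExampleSku"
  }
}

Image versions would then be published into this definition (for example from a managed image) before the deployment in step 2.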

Labels: core, enhancement

All 5 comments

This issue has been automatically migrated to terraform-providers/terraform-provider-azurerm#8114 because it looks like an issue with that provider. If you believe this is _not_ an issue with the provider, please reply to terraform-providers/terraform-provider-azurerm#8114.

Re-opening since this is a usage/expectation question for Terraform Core with Data Sources during deletion - rather than something specific to AzureRM

Hi @eodonoghue!

Terraform always reads data resources as part of creating any plan, even a destroy plan. This is by design, because data resource results can potentially be used in contexts that _are_ evaluated during destroy, such as a provider configuration.
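
As a hypothetical illustration of that kind of dependency (the AKS cluster and kubernetes provider below are not part of this issue), a data source result feeding a provider configuration looks like this:

data "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-example"
  resource_group_name = "rg-example"
}

# Because the kubernetes provider's own configuration depends on the data
# source, Terraform must still read the data source when planning a destroy:
# the provider is needed in order to delete the resources it manages.
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}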

With that said, you are right that in principle Terraform could recognize that a particular data resource is only used to populate configurations of other resources and thus detect that it's unnecessary to read it when creating a destroy plan, because destroying a resource uses only the resource's prior state, not its configuration. For that reason, I'm going to relabel this as an enhancement request.

It's not practical for us to do that in the current design of Terraform planning because the data source read is happening during the step when Terraform resynchronizes the state with the remote objects, which is always a consistent series of steps regardless of whether the plan is to destroy.

It may become more practical to implement after some planned refactoring to merge the refresh step and the planning step into a single process, which would then potentially allow Terraform to decide whether a particular data resource is used only as part of the configuration of objects that are planned to be destroyed anyway. However, it still remains to be seen whether the order of operations of even the merged refresh+plan operation will have enough information to conditionally omit data resources, so we'll have to revisit this once the technical design for that merger is complete in order to see whether this optimization is implementable.

Thanks for reporting this!

This is fixed in master by the new data lifecycle changes, and will be included in the 0.14 release.

  • #26270
  • #26285
  • #26321

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
