Terraform: Terraform doesn't dissociate NIC from VM before deletion and deletion fails

Created on 25 Aug 2020  ·  6 Comments  ·  Source: hashicorp/terraform

Reopening here per @tombuildsstuff's comment on https://github.com/terraform-providers/terraform-provider-azurerm/issues/8105

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version


tf 0.13.0
azurerm 2.22.0

Affected Resource(s)

  • azurerm_network_interface

Expected Behavior


Similar to #2566, Terraform should

  • Dissociate the NIC from the VM
  • Delete the NIC

Actual Behavior


Terraform simply tries to delete the NIC while it is still attached to the VM, and the deletion errors out:

Error: Error deleting Network Interface "k6-nic-7" (Resource Group "sam-load-testing-sender"): network.InterfacesClient#Delete: Failure sending request: StatusCode=400 -- Original Error: Code="NicInUse" Message="Network Interface /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/networkInterfaces/nic-7 is used by existing resource /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Compute/virtualMachines/vm1. In order to delete the network interface, it must be dissociated from the resource. To learn more, see aka.ms/deletenic." Details=[]

Steps to Reproduce

  1. terraform apply --var number_of_vms=2 --var nics_per_vm=1
  2. terraform apply --var number_of_vms=1 --var nics_per_vm=1

Terraform configuration

variable "vm-size" {
  type        = string
  description = "Preferred VM Size"
  default     = "Standard_E8_v3"
}

variable "number_of_vms" {
  type        = number
  description = "Number of VMs to create"
}

variable "nics_per_vm" {
  type        = number
  description = "Number of NICs to attach to each created VM"
}

resource "azurerm_resource_group" "rg" {
  name     = "myrg"
  location = "westus"
}

resource "azurerm_virtual_network" "vm_network" {
  name                = "my_network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "vm_subnet" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vm_network.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "pip" {
  count               = (var.number_of_vms * var.nics_per_vm)
  name                = "pip-${count.index}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "sender_ni" {
  count               = (var.number_of_vms * var.nics_per_vm)
  name                = "my-nic-${count.index}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  enable_accelerated_networking = true

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm_subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip[count.index].id
  }
}

resource "azurerm_linux_virtual_machine" "vm" {
  count = var.number_of_vms

  name                = "myvm-${count.index}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  size                = var.vm-size
  admin_username      = "adminuser"

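  # Give each VM its contiguous block of NICs from the flat, count-indexed NIC
  # list: with nics_per_vm = 2, VM 0 gets NICs [0, 1] and VM 1 gets NICs [2, 3].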
  network_interface_ids = slice(azurerm_network_interface.sender_ni[*].id, var.nics_per_vm * count.index, (var.nics_per_vm * count.index) + var.nics_per_vm)

  admin_ssh_key {
    username   = "systemadmin"
    public_key = data.azurerm_key_vault_secret.ssh_public_key.value
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}
Labels: bug, new

All 6 comments

Hi @brandonh-msft, thanks, and sorry you're experiencing this!

I'm curious whether you tried @tombuildsstuff's suggestion and, if so, whether it worked? His explanation about the order in which items are created makes sense, and this does feel like a configuration issue rather than a Terraform core issue, but I'd like to verify that.

When possible, using for_each instead of count can really help with situations like this one.
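For illustration only (a hypothetical sketch, not part of the original configuration): keying a resource by a stable name with for_each means that removing one key only destroys that one instance, instead of re-indexing the rest of the list the way count does. The resource name "pip_example" and the keys below are made up for the example.

resource "azurerm_public_ip" "pip_example" {
  # Hypothetical keys; removing "vm1-nic0" later destroys only that one public IP.
  for_each            = toset(["vm0-nic0", "vm1-nic0"])
  name                = "pip-${each.key}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Dynamic"
}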

> I'm curious whether you tried @tombuildsstuff's suggestion and, if so, whether it worked? His explanation about the order in which items are created makes sense, and this does feel like a configuration issue rather than a Terraform core issue, but I'd like to verify that.

We've changed our definition to use this suggestion but have not yet executed an "up/down" deployment to see if it solved the problem.

> When possible, using for_each instead of count can really help with situations like this one.

I don't disagree; however, in the scenario above, as you can see, we are creating multiple VMs, each with multiple NICs. Because Terraform doesn't allow count and for_each on the same resource, we had to resort to the slice() shenanigannery.
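To illustrate the for_each suggestion against this specific scenario, here is a hedged sketch (the local name vm_nics and the key format are assumptions, not from the thread): the VM × NIC cross product can be built into the keys up front with setproduct, so the NICs use for_each while the VMs keep count, and the slice() arithmetic goes away.

locals {
  # One entry per (VM, NIC) pair, keyed "vm0-nic0", "vm0-nic1", ...
  vm_nics = {
    for pair in setproduct(range(var.number_of_vms), range(var.nics_per_vm)) :
    "vm${pair[0]}-nic${pair[1]}" => { vm_index = pair[0] }
  }
}

# A replacement sketch for the count-based sender_ni resource above.
resource "azurerm_network_interface" "sender_ni" {
  for_each            = local.vm_nics
  name                = "my-nic-${each.key}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm_subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

# Inside azurerm_linux_virtual_machine "vm" (which can keep using count):
#   network_interface_ids = [
#     for key, nic in azurerm_network_interface.sender_ni : nic.id
#     if local.vm_nics[key].vm_index == count.index
#   ]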

"shenanigannery" fills me with joy, thank you for that

Hi @brandonh-msft

The default order of operations for Terraform resource replacement is delete then create, so any dependent resource is only updated after both of those operations have succeeded. When you need to "unregister" a resource from another resource in order to delete it, you need the create_before_destroy lifecycle option so that a new resource is created before the old one is destroyed. This results in an overall ordering of create [the NIC], update [the VM], delete [the NIC], allowing the NIC to be dissociated from the VM before deletion.
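For concreteness, a sketch of what that would look like on the NIC resource from the configuration above (hedged: as noted below, this currently runs into #25631):

resource "azurerm_network_interface" "sender_ni" {
  count               = (var.number_of_vms * var.nics_per_vm)
  name                = "my-nic-${count.index}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  enable_accelerated_networking = true

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm_subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip[count.index].id
  }

  # Create the replacement NIC (and let the VM be updated to reference it)
  # before the old NIC is destroyed, so the old NIC is no longer attached
  # when its deletion is attempted.
  lifecycle {
    create_before_destroy = true
  }
}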

Unfortunately this configuration won't work at the moment due to #25631. We can continue to follow the status on that issue.

Thanks!

Thanks for the explanation, suggestion, and update. I'll follow the other issue.

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
