Terraform v0.11.10
+ provider.azurerm v1.19.0
resource "azurerm_resource_group" "test" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "test" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
}
resource "azurerm_subnet" "test" {
name = "frontend"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.0.2.0/24"
route_table_id = "${azurerm_route_table.test.id}"
}
resource "azurerm_route_table" "test" {
name = "example-routetable"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
route {
name = "example"
address_prefix = "10.100.0.0/14"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = "10.10.1.1"
}
}
resource "azurerm_subnet_route_table_association" "test" {
subnet_id = "${azurerm_subnet.test.id}"
route_table_id = "${azurerm_route_table.test.id}"
}
Expected Behaviour: no warning appears.
Actual Behaviour: a warning appears:
Warning: azurerm_subnet.test: "route_table_id": [DEPRECATED] Use the `azurerm_subnet_route_table_association` resource instead.
Steps to Reproduce:
1. terraform init
2. terraform plan
3. terraform apply
4. terraform plan

I have taken the Terraform config above straight from the documentation. If you remove the deprecated route_table_id from the azurerm_subnet resource, you get the following plan after running step 4 in the reproduction steps above:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ azurerm_subnet.test
      route_table_id: "/subscriptions/[REDACTED]/resourceGroups/example-resources/providers/Microsoft.Network/routeTables/example-routetable" => ""

Plan: 0 to add, 1 to change, 0 to destroy.
hey @steve-hawkins
Thanks for opening this issue :)
Until the 2.0 release of the AzureRM Provider, unfortunately both the route_table_id field and the separate azurerm_subnet_route_table_association resource need to be configured. We originally planned to require only the azurerm_subnet_route_table_association resource, but that introduces a breaking change: it becomes impossible to remove the Route Table from a subnet using the azurerm_subnet resource alone. So, whilst we realise this isn't ideal, we're planning to make the breaking change (removing the route_table_id field from the azurerm_subnet resource) as part of the 2.0 release.
For the moment that means we've documented this behaviour on the resource's documentation page - but I'm not sure if there's anything else we could do prior to 2.0 without introducing a breaking change?
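For reference, a minimal sketch of that dual configuration, reusing the resource names from the example at the top of this issue (the full configuration is shown further down the thread):

resource "azurerm_subnet" "test" {
  name                 = "frontend"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"

  # still required until 2.0, despite the deprecation warning
  route_table_id = "${azurerm_route_table.test.id}"
}

resource "azurerm_subnet_route_table_association" "test" {
  subnet_id      = "${azurerm_subnet.test.id}"
  route_table_id = "${azurerm_route_table.test.id}"
}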
Thanks!
hey @steve-hawkins
As a workaround I use a lifecycle block in azurerm_subnet:
resource "azurerm_subnet" "test" {
name = "frontend"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.0.2.0/24"
lifecycle {
ignore_changes = ["route_table_id"]
}
}
I don't know if this is the best way, but I have not seen any problem with this workaround yet.
Hope it will help you.
Jérémy.
Thanks @tombuildsstuff, I agree with the approach, and the note in the documentation makes sense. I raised this issue mostly because of the misleading deprecation warning for "route_table_id" in the plan output - would it make sense to update the wording of that warning to say that "route_table_id" will be removed in 2.x?
@jmapro thanks for the workaround, that does the job
@tombuildsstuff, if I do it the way you suggested, I get an error:
λ terraform apply
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: azurerm_route_table.route-table-trust, azurerm_subnet.trust, azurerm_lb.int-lb
If I comment out (#) the route_table_id in the subnet, it works once, but the next run removes the route table association. Neither approach works for me.
See my example:
resource "azurerm_resource_group" "rg" {
name = "LoadBalancerRG"
location = "${var.location}"
}
# NETWORK
resource "azurerm_virtual_network" "vnet" {
name = "vnet"
address_space = ["10.0.0.0/16"]
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
}
resource "azurerm_subnet" "untrust" {
name = "untrust"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.1.0/24"
}
resource "azurerm_subnet" "mgmt" {
name = "mgmt"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.2.0/24"
#route_table_id = "${azurerm_route_table.route-table-mgt.id}"
}
resource "azurerm_subnet" "trust" {
name = "trust"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.3.0/24"
#route_table_id = "${azurerm_route_table.route-table-trust.id}"
}
# EXTERNAL LB
resource "azurerm_public_ip" "lb-pubip" {
name = "PublicIPForLB"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
allocation_method = "Static"
}
resource "azurerm_lb" "ext-lb" {
name = "ext-lb"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
frontend_ip_configuration {
name = "PublicIPAddress"
public_ip_address_id = "${azurerm_public_ip.lb-pubip.id}"
}
}
resource "azurerm_lb_backend_address_pool" "ext-lb-pool" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.ext-lb.id}"
name = "BackEndAddressPool"
}
resource "azurerm_lb_probe" "ext-lb-probe" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.ext-lb.id}"
name = "ssh-running-probe"
port = 22
}
#INTERNAL LB
resource "azurerm_lb" "int-lb" {
name = "int-lb"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
frontend_ip_configuration {
name = "PrivateIPAddress"
subnet_id = "${azurerm_subnet.trust.id}"
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_lb_backend_address_pool" "int-lb-pool" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.int-lb.id}"
name = "BackEndAddressPool"
}
resource "azurerm_lb_probe" "int-lb-probe" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.int-lb.id}"
name = "ssh-running-probe"
port = 22
}
# ROUTE TABLES
resource "azurerm_route_table" "route-table-mgt" {
name = "route-table-mgt"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
route {
name = "route-mgmt"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = "${azurerm_lb.int-lb.private_ip_address}"
}
}
resource "azurerm_subnet_route_table_association" "route-table-mgt-assoc" {
subnet_id = "${azurerm_subnet.mgmt.id}"
route_table_id = "${azurerm_route_table.route-table-mgt.id}"
}
resource "azurerm_route_table" "route-table-trust" {
name = "route-table-trust"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
route {
name = "route-trust"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = "${azurerm_lb.int-lb.private_ip_address}"
}
}
resource "azurerm_subnet_route_table_association" "route-table-trust-assoc" {
subnet_id = "${azurerm_subnet.trust.id}"
route_table_id = "${azurerm_route_table.route-table-trust.id}"
}
# VM STUFF
resource "azurerm_network_interface" "fw-nic1" {
name = "${var.fw-prefix}-nic1"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "testconfiguration1"
subnet_id = "${azurerm_subnet.untrust.id}"
private_ip_address_allocation = "Dynamic"
primary = true
}
}
resource "azurerm_network_interface_backend_address_pool_association" "be-add-pool-untrust-nic1" {
network_interface_id = "${azurerm_network_interface.fw-nic1.id}"
ip_configuration_name = "testconfiguration1"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.ext-lb-pool.id}"
}
resource "azurerm_network_interface" "fw-nic2" {
name = "${var.fw-prefix}-nic2"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "testconfiguration1"
subnet_id = "${azurerm_subnet.mgmt.id}"
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_network_interface_backend_address_pool_association" "be-add-pool-trust-nic2" {
network_interface_id = "${azurerm_network_interface.fw-nic2.id}"
ip_configuration_name = "testconfiguration1"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.int-lb-pool.id}"
}
resource "azurerm_network_interface" "fw-nic3" {
name = "${var.fw-prefix}-nic3"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "testconfiguration1"
subnet_id = "${azurerm_subnet.trust.id}"
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_network_interface_backend_address_pool_association" "be-add-pool-trust-nic3" {
network_interface_id = "${azurerm_network_interface.fw-nic3.id}"
ip_configuration_name = "testconfiguration1"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.int-lb-pool.id}"
}
resource "azurerm_availability_set" "availset1" {
name = "acceptanceTestAvailabilitySet1"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
managed = true
}
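For illustration, applying @jmapro's lifecycle workaround from earlier in the thread to the trust subnet above might look like this (a sketch, untested against this exact config; dropping the direct route_table_id reference should also remove the azurerm_route_table.route-table-trust -> azurerm_subnet.trust -> azurerm_lb.int-lb cycle):

resource "azurerm_subnet" "trust" {
  name                 = "trust"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.3.0/24"

  # no route_table_id here: azurerm_subnet_route_table_association
  # manages the link, and the value it writes back is ignored below
  lifecycle {
    ignore_changes = ["route_table_id"]
  }
}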
Thanks a lot @jmapro, it works for me! It would be nice to have an "official" confirmation that this solution doesn't introduce any other problems. The same trick could also be used for the "network_security_group_id" versus "azurerm_subnet_network_security_group_association" problem until AzureRM version 2 is published.
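For the NSG case, the analogous sketch might look like this (untested; azurerm_network_security_group.test is a hypothetical NSG resource, not part of the examples above):

resource "azurerm_subnet" "test" {
  name                 = "frontend"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"

  lifecycle {
    # let the association resource manage this until 2.0
    ignore_changes = ["network_security_group_id"]
  }
}

resource "azurerm_subnet_network_security_group_association" "test" {
  subnet_id                 = "${azurerm_subnet.test.id}"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}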
👋
For the moment this needs to be configured in both places, e.g.
resource "azurerm_resource_group" "test" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "test" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
}
resource "azurerm_route_table" "test" {
name = "example-routetable"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
route {
name = "example"
address_prefix = "10.100.0.0/14"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = "10.10.1.1"
}
}
resource "azurerm_subnet" "test" {
name = "frontend"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.0.2.0/24"
route_table_id = "${azurerm_route_table.test.id}"
}
resource "azurerm_subnet_route_table_association" "test" {
subnet_id = "${azurerm_subnet.test.id}"
route_table_id = "${azurerm_route_table.test.id}"
}
As mentioned above, once 2.0 is out you should be able to drop the route_table_id field from the azurerm_subnet resource and just use the new azurerm_subnet_route_table_association resource to manage this association instead :)
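In other words, the intended post-2.0 shape should look something like this (a sketch; the exact 2.0 schema isn't final yet):

resource "azurerm_subnet" "test" {
  name                 = "frontend"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"

  # no route_table_id - the association resource below manages it
}

resource "azurerm_subnet_route_table_association" "test" {
  subnet_id      = "${azurerm_subnet.test.id}"
  route_table_id = "${azurerm_route_table.test.id}"
}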
Since there doesn't appear to be anything remaining in this task until 2.0 - I'm going to close this issue for the moment.
Thanks!
This has been released in version 1.29.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:
provider "azurerm" {
version = "~> 1.29.0"
}
# ... other configuration ...
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!