Terraform-provider-azurerm: Ordering of Frontdoor Resources not Predictable

Created on 18 Nov 2020 · 4 comments · Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

  • AzureRM provider v2.35.0

Affected Resource(s)

  • azurerm_frontdoor

Terraform Configuration Files

variable "endpoints" {
  type = map(object({
    host_name = string
    routing_patterns_to_match = list(string)
    health_probe_path = string
    backends = map(object({
      priority = number
      weight = number
    }))
  }))
  description = "The set of backend pools"
}
resource "azurerm_frontdoor" "afd" {
  name = var.account_name
  resource_group_name = var.resource_group_name
  enforce_backend_pools_certificate_name_check = false

  frontend_endpoint {
    name = "default"
    host_name = "${var.account_name}.azurefd.net"
    custom_https_provisioning_enabled = false
  }

  dynamic "frontend_endpoint" {
    for_each = var.endpoints
    content {
      name = frontend_endpoint.key
      host_name = frontend_endpoint.value.host_name
    }
  }

  backend_pool_load_balancing {
    name = "load-balancing"
    additional_latency_milliseconds = 100
  }

  dynamic "backend_pool_health_probe" {
    for_each = var.endpoints
    content {
      name = backend_pool_health_probe.key
      path = backend_pool_health_probe.value.health_probe_path
      protocol = "Https"
      probe_method = "GET"
      interval_in_seconds = 10
    }
  }

  dynamic "backend_pool" {
    for_each = var.endpoints
    content {
      name = backend_pool.key
      load_balancing_name = "load-balancing"
      health_probe_name = backend_pool.key
      dynamic "backend" {
        for_each = backend_pool.value.backends
        content {
          host_header = backend.key
          address = backend.key
          http_port = 80
          https_port = 443
          priority = backend.value.priority
          weight = backend.value.weight
        }
      }
    }
  }

  dynamic "routing_rule" {
    for_each = var.endpoints
    content {
      name = "${routing_rule.key}-redirect"
      accepted_protocols = ["Http"]
      patterns_to_match = ["/*"]
      frontend_endpoints = [routing_rule.key]
      redirect_configuration {
        redirect_protocol = "HttpsOnly"
        redirect_type = "Moved"
      }
    }
  }

  dynamic "routing_rule" {
    for_each = var.endpoints
    content {
      name = routing_rule.key
      accepted_protocols = ["Https"]
      patterns_to_match = routing_rule.value.routing_patterns_to_match
      frontend_endpoints = [routing_rule.key]
      forwarding_configuration {
        backend_pool_name = routing_rule.key
        forwarding_protocol = "HttpsOnly"
      }
    }
  }
}
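
For reference, a minimal demo.tfvars matching the endpoints variable schema above might look like the following sketch (the endpoint key, hostnames, backend address, and health probe path are hypothetical; account_name and resource_group_name would be set alongside it):

endpoints = {
  "app" = {
    host_name                 = "app.example.com"
    routing_patterns_to_match = ["/*"]
    health_probe_path         = "/healthz"
    backends = {
      # key is used as both host_header and address in the configuration above
      "app-backend.example.com" = {
        priority = 1
        weight   = 50
      }
    }
  }
}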

Debug Output

https://gist.github.com/matthawley/33ef3f7c3734e88c51a8f44cb3532767

Panic Output


N/A

Expected Behaviour

When running a plan without any configuration changes, the Front Door resource should be reported as having no changes.

Actual Behaviour

The plan detects changes to our frontend_endpoint and backend_pool blocks, because Azure Front Door appears to return these sub-resources in an ordering that differs from the configuration and is inconsistent across AFD resources. This forces us to re-apply the AFD resource on every deployment, which takes upwards of 10 minutes, even though no actual changes are applied.

Ordering these sub-resources lexically, or at least keeping the order consistent with the initial creation, would allow no-op configurations to be skipped.
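
As a stopgap rather than a fix, the perpetual diff could in principle be suppressed by ignoring the affected blocks, at the cost of Terraform no longer reconciling real changes to them. A minimal sketch, assuming ignoring the whole blocks is acceptable:

resource "azurerm_frontdoor" "afd" {
  # ... existing configuration as above ...

  lifecycle {
    # Assumption: ignoring these nested blocks hides the spurious ordering diff,
    # but it also hides legitimate edits to frontend endpoints and backend pools.
    ignore_changes = [
      frontend_endpoint,
      backend_pool,
    ]
  }
}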

Steps to Reproduce

  1. terraform plan --var-file demo.tfvars

Important Factoids


N/A

References

N/A

bug duplicate service/frontdoor

All 4 comments

I see exactly the same thing. It is switching the order of frontend_endpoint, causing a major change that takes a long time.

What's even worse is that the re-ordering of the frontend endpoints is causing our SSL configurations to drop pretty consistently.

Same here, although at times I'm also seeing an error, as it's trying to assign a custom certificate to the mandatory frontend (the xxx.azurefd.net frontend):

waiting to enable Custom Domain HTTPS for Frontend Endpoint: Code="CustomDomainSecureDeliveryKeyVaultCustomDomainHostnameNotIncluded" Message="The certificate doesn't include the hostname to be secured."

👋

Taking a look through here, this appears to have the same root cause as #9153. To ensure we only have one issue tracking the same thing, I'm going to close this issue in favour of that one; would you mind subscribing to #9153 for updates?

Thanks
