Terraform-provider-azurerm: Adding an additional `geo_location` to an `azurerm_cosmosdb_account` should not require replacement

Created on 27 May 2019  ·  7 comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

$ terraform -v
Terraform v0.12.0
+ provider.azurerm v1.28.0
+ provider.random v2.1.2

Affected Resource(s)

  • azurerm_cosmosdb_account

Terraform Configuration Files

resource "azurerm_resource_group" "rg" {
  name     = "rg123"
  location = "WestUS"
}

resource "random_integer" "ri" {
  min = 10000
  max = 99999
}

resource "azurerm_cosmosdb_account" "db" {
  name                = "tfex-cosmos-db-${random_integer.ri.result}"
  kind                = "MongoDB"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  consistency_policy {
    consistency_level       = "BoundedStaleness"
    max_interval_in_seconds = 10
    max_staleness_prefix    = 200
  }
  offer_type          = "Standard"
  enable_automatic_failover = true
  geo_location {
    location          = "WestUS"
    failover_priority = 0
  }
#  geo_location {
#    location          = "EastUS"
#    failover_priority = 1
#  }
}

Expected Behavior

After deploying the code above, uncommenting the second geo_location block and re-running terraform apply should update the existing CosmosDB account in place.

Actual Behavior

Instead, Terraform plans a full replacement of the account:

  # azurerm_cosmosdb_account.db must be replaced
-/+ resource "azurerm_cosmosdb_account" "db" {
      ~ connection_strings                = (sensitive value)
        enable_automatic_failover         = true
        enable_multiple_write_locations   = false
      ~ endpoint                          = "https://tfex-cosmos-db-95504.documents.azure.com:443/" -> (known after apply)
      ~ id                                = "/subscriptions/8bafcca2-660a-4459-a503-b785cf317a3a/resourceGroups/rg123/providers/Microsoft.DocumentDB/databaseAccounts/tfex-cosmos-db-95504" -> (known after apply)
        is_virtual_network_filter_enabled = false
        kind                              = "MongoDB"
        location                          = "westus"
        name                              = "tfex-cosmos-db-95504"
        offer_type                        = "Standard"
      ~ primary_master_key                = (sensitive value)
      ~ primary_readonly_master_key       = (sensitive value)
      ~ read_endpoints                    = [
          - "https://tfex-cosmos-db-95504-westus.documents.azure.com:443/",
        ] -> (known after apply)
        resource_group_name               = "rg123"
      ~ secondary_master_key              = (sensitive value)
      ~ secondary_readonly_master_key     = (sensitive value)
      ~ tags                              = {} -> (known after apply)
      ~ write_endpoints                   = [
          - "https://tfex-cosmos-db-95504-westus.documents.azure.com:443/",
        ] -> (known after apply)

        consistency_policy {
            consistency_level       = "BoundedStaleness"
            max_interval_in_seconds = 10
            max_staleness_prefix    = 200
        }

      - geo_location { # forces replacement
          - failover_priority = 0 -> null
          - id                = "tfex-cosmos-db-95504-westus" -> null
          - location          = "westus" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 0
          + id                = (known after apply)
          + location          = "westus"
        }
      + geo_location { # forces replacement
          + failover_priority = 1
          + id                = (known after apply)
          + location          = "eastus"
        }
    }

Steps to Reproduce

  1. terraform apply
  2. Uncomment the second geo_location
  3. terraform apply again

Notes

  • Additional geo locations can be added to a CosmosDB account without downtime or replacement using the API or console.
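For comparison, here is a sketch of the same change made in place via the Azure CLI, using the resource names from the configuration above (region key/value syntax per the az cosmosdb update reference):

```shell
# Add EastUS as a second region to the existing account; the Azure API
# applies this as an in-place update, with no replacement of the account.
az cosmosdb update \
  --name "tfex-cosmos-db-95504" \
  --resource-group "rg123" \
  --locations regionName=WestUS failoverPriority=0 isZoneRedundant=False \
  --locations regionName=EastUS failoverPriority=1 isZoneRedundant=False
```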
Labels: bug, service/cosmosdb

Most helpful comment

I have a similar issue. In my case I tried using the _dynamic_ block functionality to allow a variable number of geo_location blocks, e.g. per environment, but this appears to require a full replacement of the CosmosDB account on each apply, so it is not currently usable.

All 7 comments

So I have a similar scenario where I haven't added a new geo_location; we already had two. Our plan output looks more like this:

      - geo_location { # forces replacement
          - failover_priority = 0 -> null
          - id                = "aca-pre-neu-csvi-geoloc2-1323" -> null
          - location          = "westeurope" -> null
          - prefix            = "aca-pre-neu-csvi-geoloc2-1323" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 0
          + id                = (known after apply)
          + location          = "westeurope"
          + prefix            = "aca-pre-neu-csvi-geoloc2-1323"
        }
      - geo_location { # forces replacement
          - failover_priority = 1 -> null
          - id                = "aca-pre-neu-csvi-1323-northeurope" -> null
          - location          = "northeurope" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 1
          + id                = (known after apply)
          + location          = "northeurope"
          + prefix            = "aca-pre-neu-csvi-1323-northeurope"
        }

Now, the first geo_location stanza looks identical to us, so I don't understand why it needs to be replaced (-/+) at all.

The second one didn't previously have a prefix, but I believe that adding a prefix which matches the existing id should be a no-op, right? Yet it still triggers a -/+. We use modules across multiple environments, so we are struggling to set a prefix on the first block but not the second without destroying the CosmosDB account in at least one of our environments.

Ultimately my question is: would any of the referenced PRs resolve this?

If not, and if this is indeed caused by adding the prefix to the second geo_location stanza, is there any way to set that prefix without causing a -/+?

I have a similar issue. In my case I tried using the _dynamic_ block functionality to allow a variable number of geo_location blocks, e.g. per environment, but this appears to require a full replacement of the CosmosDB account on each apply, so it is not currently usable.
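For reference, a minimal sketch of the dynamic-block approach described above, assuming a hypothetical var.geo_locations list variable (names and defaults are illustrative, not from the original report):

```hcl
# Hypothetical variable: one object per replica region.
variable "geo_locations" {
  type = list(object({
    location          = string
    failover_priority = number
  }))
  default = [
    { location = "WestUS", failover_priority = 0 },
    { location = "EastUS", failover_priority = 1 },
  ]
}

resource "azurerm_cosmosdb_account" "db" {
  # ... other arguments as in the original configuration ...

  # Generate one geo_location block per entry in the variable.
  dynamic "geo_location" {
    for_each = var.geo_locations
    content {
      location          = geo_location.value.location
      failover_priority = geo_location.value.failover_priority
    }
  }
}
```

With the affected provider versions, changing this list (and, per the comment above, sometimes merely re-evaluating the dynamic blocks) triggered the -/+ replacement shown in the plan output.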

Having the same issue here.

Changing geo settings replaces the entire database account.

Has anyone looked into this? This becomes a highly destructive action.

I am also hitting this issue today. I currently have a single geo_location with priority 0. When I try to add a second region as a new geo_location, without touching the primary location that is already defined, Terraform wants to completely destroy and recreate the primary location while adding the secondary.

This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.14.0"
}
# ... other configuration ...

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
