Terraform-provider-azurerm: Tags for Databricks managed resources are not updating

Created on 16 Oct 2020  ·  3 Comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.13.4
hashicorp/azurerm v2.32.0

Affected Resource(s)

  • azurerm_databricks_workspace

Terraform Configuration Files

  resource "azurerm_resource_group" "test" {
    name     = "test_resource_group"
    location = "northeurope"
  }

  resource "azurerm_databricks_workspace" "test" {
    name                        = "test"
    sku                         = "premium"
    location                    = "northeurope"
    resource_group_name         = azurerm_resource_group.test.name
    managed_resource_group_name = "test_managed_resource_group"
    tags = {
      key1 = "value1"
    }
  }

Debug Output


NA

Panic Output


NA

Expected Behavior


When updating the tags map in the example configuration above, the tags for the managed Databricks resources should be updated.

Actual Behavior


When the Databricks workspace resource is initially created, tags are correctly applied to the managed Databricks resources. However, if the tags map is updated after creation, those tag changes are not applied to the managed Databricks resources, which keep the original tags.

Steps to Reproduce


With the example config above:

  1. terraform apply
  2. Update the tags map with a new key=value pair (see the snippet below)
  3. terraform apply
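
For illustration, the update in step 2 might look like the following; the added key2 = "value2" pair is only an example and not part of the original report:

  resource "azurerm_databricks_workspace" "test" {
    name                        = "test"
    sku                         = "premium"
    location                    = "northeurope"
    resource_group_name         = azurerm_resource_group.test.name
    managed_resource_group_name = "test_managed_resource_group"

    # Updated tags map: the new pair is applied to the workspace itself,
    # but per this report it is not propagated to the managed resources.
    tags = {
      key1 = "value1"
      key2 = "value2" # example addition
    }
  }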

Important Factoids


None

References

NA

question service/databricks

All 3 comments

Unfortunately Terraform won't be able to do anything about this. The only thing the TF provider does is provision a workspace; Databricks is a third-party service that then manages all of those resources on its own by firing off ARM templates against the Azure APIs.

Nodes in clusters will pick up the latest tags as they are recycled, but that recycling largely depends on your cluster configuration. The best option you have is to exclude the Databricks managed resource groups from any policies (see the sketch below).
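
A minimal sketch of that workaround in Terraform, assuming a newer provider version that includes the azurerm_resource_group_policy_exemption resource and an existing tag-enforcement policy assignment; the variable tag_policy_assignment_id is a hypothetical placeholder:

  # Sketch only: exempt the Databricks managed resource group from a
  # tag-enforcement policy assignment so that policy evaluation skips it.
  resource "azurerm_resource_group_policy_exemption" "databricks_managed" {
    name                 = "exempt-databricks-managed-rg"
    resource_group_id    = azurerm_databricks_workspace.test.managed_resource_group_id
    policy_assignment_id = var.tag_policy_assignment_id # hypothetical: ID of your tag policy assignment
    exemption_category   = "Waiver"
  }

Alternatively, the managed resource group can be excluded directly in the policy assignment's scope or exclusions outside of Terraform.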

@nfx Does Databricks have anything to say on the matter? :)

hey @dansanabria

Thanks for opening this issue.

As @favoretti has mentioned, this is unfortunately an issue in the Azure API: these changes should be propagated by the Azure API (and Terraform supports updating them, but the API opts to roll them out only when the resources are cycled).

Since this is a bug in the Azure API, I'd suggest opening an issue on the Azure REST API Specs repository, where somebody from the Databricks service team should be able to take a look and get this fixed.

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
