Terraform-provider-azurerm: Error when running against existing setup with access policies defined

Created on 3 Dec 2020 · 6 Comments · Source: terraform-providers/terraform-provider-azurerm

Terraform v0.14.0
+ provider registry.terraform.io/hashicorp/azurerm v2.38.0

When trying to provision with a key vault access policy defined, I get:

Error: expected "access_policy.0.application_id" to be a valid UUID, got 

_(Nothing comes after "got" there.)_

Not all my policies are for applications, so I don't understand why I'm being forced to provide an application_id.
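
As I understand it, application_id is optional in the provider's access_policy schema, so a policy scoped to just a user or service principal should be valid on its own. A minimal sketch of what I mean (a fragment of an azurerm_key_vault resource, permissions trimmed for brevity):

access_policy {
  # application_id omitted on purpose; the schema marks it optional
  tenant_id = data.azurerm_client_config.current.tenant_id
  object_id = data.azurerm_client_config.current.object_id

  secret_permissions = [
    "get",
    "list",
  ]
}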


This wasn't an issue prior to upgrading Terraform from 0.13.0 to 0.14.0, and the azurerm provider version doesn't seem to affect it. I guess that means it's a problem in the interaction between the provider and the latest version of Terraform?

Label: question

All 6 comments

Hi @atrauzzi,

Thanks for opening this issue.

Unfortunately there's not enough information in the description above for us to diagnose the specific resource in question here - would you mind updating the description to include all of the information from the bug report template so that we can take a look into this?

Thanks!

@tombuildsstuff - I'm pretty sure I included everything that I have information for. What do you need specifically? For security reasons I can't just dump all my tf files in here. You should have enough to go off of from my detailed explanation.

Terraform v0.14.0
+ provider registry.terraform.io/hashicorp/azurerm v2.39.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3
+ provider registry.terraform.io/hashicorp/random v3.0.0

Create azurerm_key_vault:

resource "azurerm_key_vault" "main" {
  name                        = "vault-${random_id.keyvault.hex}"
  location                    = var.resource_group.location
  resource_group_name         = var.resource_group.name
  enabled_for_deployment      = true
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id

  soft_delete_enabled         = true
  soft_delete_retention_days  = 30
  purge_protection_enabled    = false

  sku_name = "standard"

  lifecycle {
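    # access_policy entries may also be managed outside this resource, so ignore drift on them here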
    ignore_changes = [
      access_policy,
    ]
  }

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id
    key_permissions = [
      "get",
      "list",
      "create",
      "delete",
      "update",
      "wrapKey",
      "unwrapKey",
    ]
    secret_permissions = [
      "backup",
      "delete",
      "get",
      "list",
      "purge",
      "recover",
      "restore",
      "set"
    ]
  }

  network_acls {
    default_action = "Allow"
    bypass         = "AzureServices"
  }
}
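
(For completeness: the config above references a client config data source and a random_id that weren't shown; presumably something along these lines, with byte_length being a guess:)

data "azurerm_client_config" "current" {}

resource "random_id" "keyvault" {
  byte_length = 4
}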

Running terraform plan -out plan.out afterwards reports no issues. Running another plan after the initial apply produces the following errors:

Error: expected "access_policy.0.application_id" to be a valid UUID, got 

  on ../modules/environment/keyvault/main.tf line 12, in resource "azurerm_key_vault" "main":
  12: resource "azurerm_key_vault" "main" {



Error: expected "access_policy.1.application_id" to be a valid UUID, got 

  on ../modules/environment/keyvault/main.tf line 12, in resource "azurerm_key_vault" "main":
  12: resource "azurerm_key_vault" "main" {

terraform state show shows application_id set to "":

resource "azurerm_key_vault" "main" {
    access_policy                   = [
        {
            application_id          = ""
            certificate_permissions = null
            key_permissions         = [
                "get",
                "list",
                "create",
                "delete",
                "update",
                "wrapKey",
                "unwrapKey",
            ]
            object_id               = "<snip>"
            secret_permissions      = [
                "backup",
                "delete",
                "get",
                "list",
                "purge",
                "recover",
                "restore",
                "set",
            ]
            storage_permissions     = null
            tenant_id               = "<snip>"
        },
    ]
    enable_rbac_authorization       = false
    enabled_for_deployment          = true
    enabled_for_disk_encryption     = true
    enabled_for_template_deployment = false
    id                              = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.KeyVault/vaults/<snip>"
    location                        = "<snip>"
    name                            = "<snip>"
    purge_protection_enabled        = false
    resource_group_name             = "<snip>"
    sku_name                        = "standard"
    soft_delete_enabled             = true
    soft_delete_retention_days      = 30
    tenant_id                       = "<snip>"
    vault_uri                       = "https://<snip>.vault.azure.net/"

    network_acls {
        bypass         = "AzureServices"
        default_action = "Allow"
    }
}

If this is the initial plan, it works fine; but if the resource has already been created, the plan fails with the above errors.

Same code as above, with no changes.

Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/azurerm v2.39.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3
+ provider registry.terraform.io/hashicorp/random v3.0.0

Your version of Terraform is out of date! The latest version
is 0.14.0. You can update by downloading from https://www.terraform.io/downloads.html

Initial plan:

Plan: 34 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: plan.out

To perform exactly these actions, run the following command to apply:
    terraform apply "plan.out"

After applying: Apply complete! Resources: 34 added, 0 changed, 0 destroyed.

Doing another terraform plan -out plan.out:

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

This is the expected behavior; the actual behavior is what I described in my previous comment, where the error is thrown.

Wow, thank you @malik-muratovic!

@tombuildsstuff - Lucky for anyone encountering this issue, the examples above should get through your boilerplate triage, although I seriously question the process! Even though it seems like conventional practice for open source projects nowadays, it is a disingenuous antipattern to expect people to effectively lay out the solution with their ideas/issues/questions.

I can confirm the same issue with the latest Terraform 0.14. It works fine on 0.13.5.
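
Until a fix lands, one possible stopgap (a sketch, assuming staying on the older CLI is acceptable) is to pin Terraform below 0.14 so this validation path isn't hit on refresh:

terraform {
  required_version = "~> 0.13.5" # known-good per the comments above; allows 0.13.x but not 0.14

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.39.0"
    }
  }
}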
