Terraform-provider-azurerm: Allow Keyvault Network ACL to be set independently

Created on 27 Mar 2019  ·  5 Comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

It would be nice if we could set the Network ACLs config block (especially the IP rules) of an Azure Key Vault independently of the Key Vault resource itself (as with Azure SQL Server). This would prevent some dependency-cycle issues between resources. (Say, a Web App that needs the Key Vault URI in its app settings while the Key Vault needs the outbound IPs of the Web App in its network ACLs.)

New or Affected Resource(s)

  • azurerm_key_vault

Potential Terraform Configuration

Example below of the dependency cycle I often encounter:

resource "azurerm_app_service" "hub_webapp" {
  name                = "hub-webapp-${var.deployment_suffix}"
  location            = "${azurerm_resource_group.main_rg.location}"
  resource_group_name = "${azurerm_resource_group.main_rg.name}"
  app_service_plan_id = "${azurerm_app_service_plan.hub_webapp_plan.id}"

  site_config {
    always_on                = true
    dotnet_framework_version = "v4.0"
  }
  app_settings {
    "azure:KeyVaultURL" = "${azurerm_key_vault.regional-keyvault.vault_uri}"
  }
}

resource "azurerm_key_vault" "regional-keyvault" {
    name                            = "hub-keyvault-${var.deployment_suffix}"
    location                        = "${azurerm_resource_group.main_rg.location}"
    resource_group_name             = "${azurerm_resource_group.main_rg.name}"
    enabled_for_disk_encryption     = false
    enabled_for_template_deployment = true
    tenant_id                       = "${var.tenantid}"

    sku {
        name = "standard"
    }

    network_acls {
        default_action             = "Deny"
        bypass                     = "AzureServices"
        ip_rules                   = ["${split(",", azurerm_app_service.hub_webapp.possible_outbound_ip_addresses)}"]
    }
}
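
One way the request could look in practice (a hypothetical sketch only — no such resource exists in the provider today; the resource and argument names below are illustrative). A standalone ACL resource, analogous to azurerm_key_vault_access_policy, would let the Key Vault be created without ip_rules, breaking the cycle:

```hcl
# HYPOTHETICAL resource -- illustrates the feature being requested.
resource "azurerm_key_vault_network_acl" "regional" {
  key_vault_id   = "${azurerm_key_vault.regional-keyvault.id}"
  default_action = "Deny"
  bypass         = "AzureServices"

  # possible_outbound_ip_addresses is a comma-separated string, hence split()
  ip_rules = ["${split(",", azurerm_app_service.hub_webapp.possible_outbound_ip_addresses)}"]
}
```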

References

enhancement  service/app-service  service/keyvault

All 5 comments

We need this too! For now, to break this dependency cycle, we need to use a terraform null_resource which executes the az CLI to update those Network ACLs.
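
A rough sketch of that workaround, assuming the resource names from the configuration above (the null_resource name is illustrative; depending on your az CLI version you may need one `network-rule add` invocation per IP):

```hcl
# Workaround sketch: push the Web App's outbound IPs into the Key Vault's
# network ACLs after both resources exist, via the Azure CLI.
resource "null_resource" "keyvault_ip_rules" {
  # Re-run whenever the Web App's possible outbound IPs change
  triggers = {
    outbound_ips = "${azurerm_app_service.hub_webapp.possible_outbound_ip_addresses}"
  }

  provisioner "local-exec" {
    # possible_outbound_ip_addresses is comma-separated; convert to
    # space-separated for the CLI argument list.
    command = "az keyvault network-rule add --name ${azurerm_key_vault.regional-keyvault.name} --ip-address ${replace(azurerm_app_service.hub_webapp.possible_outbound_ip_addresses, ",", " ")}"
  }
}
```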

My use case is slightly different:
My terraform is applied in layers, multiple environments are then constructed from different subsets of the layers.
I create my key vault in a base layer, subsequent layers then create a subnet for their resources.
To permit access to the key vault I now have to create all my subnets upfront so that I may access the key vault from each, this imposes a dependency between the layers and makes it difficult to build different configurations from a subset of the layers.
Ideally I would be able to create each subnet as required then add it to the network acls for the key vault and preserve a modular approach.

@Shr3ps that is one solution I had not considered, trying it now, thank you.

Solution in case it helps anyone else in the same situation:

resource "azurerm_subnet" "manage_subnet" {
  count                = length(var.manage_subnets)
  name                 = "${var.environment}-manage-subnet-${count.index}"
  resource_group_name  = data.terraform_remote_state.project.outputs.project_resource_group_name
  virtual_network_name = data.terraform_remote_state.project.outputs.project_vn_name
  address_prefix       = element(var.manage_subnets, count.index)

  service_endpoints = ["Microsoft.KeyVault"]

  provisioner "local-exec" {
    command = "az keyvault network-rule add --name ${data.terraform_remote_state.project.outputs.keyvault_name} --subnet ${azurerm_subnet.manage_subnet[count.index].id}"
  }
}

Any update on this? Presumably a new resource definition similar to that used to apply "azurerm_key_vault_access_policy" rules would work?

A resource like "azurerm_key_vault_network_allowed_subnet" or similar?
