Terraform v0.12.23
provider "azurerm" {
version = "~> 1.44"
subscription_id = var.subscription_id
tenant_id = var.tenant_id
}
resource "azurerm_resource_group" "main" {
count = var.rg_count
name = "${var.prefix}-${element(var.instance, count.index)}-dc-${var.environment}"
location = element(var.location, count.index)
tags = local.tags
}
resource "azurerm_storage_account" "main" {
count = var.storage_account_enabled ? var.rg_count : 0
name = "${var.prefix}main${var.environment}${element(var.instance_short, count.index)}"
resource_group_name = element(azurerm_resource_group.main.*.name, count.index)
location = element(azurerm_resource_group.main.*.location, count.index)
tags = local.tags
account_replication_type = "LRS"
account_tier = "Standard"
account_kind = "StorageV2"
// Force Cool, or do we want to create a policy?
access_tier = "Cool"
enable_https_traffic_only = true
}
// Create storage account firewall rules
resource "azurerm_storage_account_network_rules" "main" {
count = var.storage_account_enabled ? var.rg_count : 0
resource_group_name = element(azurerm_resource_group.main.*.name, count.index)
storage_account_name = element(azurerm_storage_account.main.*.name, count.index)
default_action = "Deny"
ip_rules = var.client_ip
bypass = ["AzureServices"]
}
data "azurerm_monitor_diagnostic_categories" "storagemain_log_cats" {
count = var.storage_account_enabled ? var.rg_count : 0
resource_id = element(azurerm_storage_account.main.*.id, count.index)
}
resource "azurerm_monitor_diagnostic_setting" "storagemain_logs" {
count = var.storage_account_enabled ? var.rg_count : 0
name = "storagemain-logs-${var.environment}"
target_resource_id = element(azurerm_storage_account.main.*.id, count.index)
storage_account_id = element(azurerm_storage_account.diagnostics.*.id, count.index)
// This will cycle through all possible values, and log them all
dynamic "log" {
for_each = toset(flatten(data.azurerm_monitor_diagnostic_categories.storagemain_log_cats.*.logs))
content {
category = log.value
retention_policy {
enabled = true
days = 30
}
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = true
days = 30
}
}
}
Error: Provider produced inconsistent final plan
When expanding the plan for
azurerm_monitor_diagnostic_setting.storagemain_logs[0] to include new values
learned so far during apply, provider "registry.terraform.io/-/azurerm"
produced an invalid new value for .log: planned set element
cty.ObjectVal(map[string]cty.Value{"category":cty.UnknownVal(cty.String),
"enabled":cty.True,
"retention_policy":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"days":cty.UnknownVal(cty.Number),
"enabled":cty.UnknownVal(cty.Bool)})})}) does not correlate with any element
in actual.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Expected behaviour: diagnostic logs are enabled on the storage resource.
Actual behaviour: the storage account is created, but no diagnostic logs are enabled.
A very similar configuration for a key vault works fine:
data "azurerm_monitor_diagnostic_categories" "kv_log_cats" {
count = var.kv_enabled ? 1 : 0
resource_id = element(azurerm_key_vault.main.*.id, count.index)
}
resource "azurerm_monitor_diagnostic_setting" "kv_logs" {
count = var.kv_enabled && var.storage_account_enabled ? 1 : 0
name = "kv-logs-${var.environment}"
target_resource_id = element(azurerm_key_vault.main.*.id, count.index)
storage_account_id = element(azurerm_storage_account.diagnostics.*.id, count.index)
// This will cycle through all possible values, and log them all
dynamic "log" {
for_each = toset(flatten(data.azurerm_monitor_diagnostic_categories.kv_log_cats.*.logs))
content {
category = log.value
retention_policy {
enabled = true
days = 30
}
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = true
days = 30
}
}
}
I have since upgraded my terraform version and provider, and the error is still present:
Terraform v0.12.24
provider.azurerm v2.10.0
The error message is slightly different:
Error: Provider produced inconsistent final plan

When expanding the plan for azurerm_monitor_diagnostic_setting.df_logs[0] to
include new values learned so far during apply, provider
"registry.terraform.io/-/azurerm" produced an invalid new value for .log:
block set length changed from 1 to 3.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
@ubenmackin Thank you for submitting this!
At first glance, I think that rather than using toset(flatten()) in the dynamic block, you might simply use element(), as #7620 does, even though the underlying issue still remains.
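A minimal sketch of that element() variant, reusing the names from the configuration above (illustrative only; as noted, it does not avoid the plan-time unknown):

// Illustrative sketch: iterate the categories of the matching data source
// instance via element() instead of flattening every instance. The categories
// are still unknown at plan time, so the same error can still occur.
dynamic "log" {
  for_each = element(data.azurerm_monitor_diagnostic_categories.storagemain_log_cats.*.logs, count.index)
  content {
    category = log.value
    retention_policy {
      enabled = true
      days    = 30
    }
  }
}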
The reason for this error is that during the planning stage, Terraform doesn't know how many "log" (or "metric") entries the target resource has, because the data source azurerm_monitor_diagnostic_categories can only resolve the exact resource_id during the apply stage. So during planning, Terraform can only assume that the dynamic block in azurerm_monitor_diagnostic_setting produces one "log" and one "metric" block.
However, later in the apply stage, the storage account turns out to have three predefined "log" categories, which results in an inconsistency between the actual provisioned "log" entries (3) and the planned one (1).
There are some workarounds for this: for example, create the target resource first and then run terraform apply again; the second terraform apply shall be successful. Besides, the reason the key vault works is that it happens to contain only one predefined "log" and one "metric" category.
You are correct that a second run of apply is successful, and that is what I have been doing as part of my process.
It is unfortunate, as each resource type has a different combination of log and metric categories, so using a method like this to dynamically determine and apply them was great for many reasons. Having to explicitly define the log and metric entries means spending the time to discover what the options are for each resource type, and then explicitly defining those resource blocks countless times.
Would it be possible, with some other kind of code change, to have Terraform discover and apply all log and metric options via another option in the azurerm_monitor_diagnostic_setting resource? Something like an "apply_all_log" flag, or akin to that? I am not familiar with what can and can't be done in the various stages, so maybe this is not possible to do, but I figured I'd ask the question.
@ubenmackin Unfortunately, adding an apply_all_log option is not an idiomatic schema for a Terraform resource, as we should allow the user to configure, and only configure, the concrete settings of the resource, not something "hypothetical".
The inconsistency issue (IMHO) stems from Terraform core, hence I have opened an issue for it. As a workaround, if the number of resource types is not too high, you can manually figure out the log and metric entries, define a local map to hold those entries, then iterate over that map in the dynamic block (instead of iterating over the output of the data source). I think that will work (though you still have to spend the time to discover what the options are for each resource type).
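A minimal sketch of that workaround, with placeholder category names (the real categories for each resource type still have to be looked up):

// Sketch of the suggested workaround: enumerate the categories yourself so the
// set length is already known at plan time. The category names below are
// placeholders and must be replaced with the actual categories for the resource.
locals {
  storagemain_log_categories    = ["ExampleLogCategoryA", "ExampleLogCategoryB"]
  storagemain_metric_categories = ["AllMetrics"]
}

resource "azurerm_monitor_diagnostic_setting" "storagemain_logs" {
  count              = var.storage_account_enabled ? var.rg_count : 0
  name               = "storagemain-logs-${var.environment}"
  target_resource_id = element(azurerm_storage_account.main.*.id, count.index)
  storage_account_id = element(azurerm_storage_account.diagnostics.*.id, count.index)

  dynamic "log" {
    for_each = toset(local.storagemain_log_categories)
    content {
      category = log.value
      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }

  dynamic "metric" {
    for_each = toset(local.storagemain_metric_categories)
    content {
      category = metric.value
      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }
}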
hashicorp/terraform#25600 should be fixed in Terraform v0.13.1, which should address this issue.
Having the same issue on TF 0.13.2.
@csdaraujo hashicorp/terraform#25600 has been reopened; let's wait for the fix in core.
hashicorp/terraform#25600 is closed again, which should have addressed this issue.
Confirming this is still not working in:
Terraform v0.13.3
azurerm v2.36.0
I guess this PR will be part of the v0.14.0 release (at least post-beta1).