Terraform v0.12.28
provider "azurerm" {
features {}
}
variable "prefix" {
default = "magodoinconsist"
}
variable "location" {
default = "West Europe"
}
resource "azurerm_resource_group" "test" {
name = "${var.prefix}-rg"
location = var.location
}
resource "azurerm_storage_account" "test" {
name = "${var.prefix}sa"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
account_replication_type = "LRS"
account_tier = "Standard"
}
data "azurerm_monitor_diagnostic_categories" "test" {
resource_id = azurerm_storage_account.test.id
}
resource "azurerm_monitor_diagnostic_setting" "test" {
name = "${var.prefix}-ds"
target_resource_id = azurerm_storage_account.test.id
storage_account_id = azurerm_storage_account.test.id
dynamic "log" {
for_each = data.azurerm_monitor_diagnostic_categories.test.logs
content {
category = log.key
}
}
dynamic "metric" {
for_each = data.azurerm_monitor_diagnostic_categories.test.metrics
content {
category = metric.key
}
}
}
Full debug log: https://gist.github.com/magodo/3a2221ef7cb0af4e4d2d6e1fae5537e7
```
Error: Provider produced inconsistent final plan

When expanding the plan for azurerm_monitor_diagnostic_setting.test to include
new values learned so far during apply, provider
"registry.terraform.io/-/azurerm" produced an invalid new value for .log:
planned set element
cty.ObjectVal(map[string]cty.Value{"category":cty.UnknownVal(cty.String),
"enabled":cty.True,
"retention_policy":cty.ListValEmpty(cty.Object(map[string]cty.Type{"days":cty.Number,
"enabled":cty.Bool}))}) does not correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Error: Provider produced inconsistent final plan

When expanding the plan for azurerm_monitor_diagnostic_setting.test to include
new values learned so far during apply, provider
"registry.terraform.io/-/azurerm" produced an invalid new value for .metric:
planned set element
cty.ObjectVal(map[string]cty.Value{"category":cty.UnknownVal(cty.String),
"enabled":cty.True,
"retention_policy":cty.ListValEmpty(cty.Object(map[string]cty.Type{"days":cty.Number,
"enabled":cty.Bool}))}) does not correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
The provisioning should succeed with a single terraform apply.
Instead, the provisioning panics on the first apply, while a second apply succeeds.
Steps to reproduce:

```sh
terraform init
terraform apply
```

The azurerm provider version is v2.18.0.
I understand the reason for the issue here, but I'd like to know: is there an official solution for this, or is this a known issue to be fixed in core?
Hi @magodo, thank you for submitting this, and I'm sorry you are experiencing it.
This is a duplicate of #22409, so I am going to close this issue in favor of that one. Thanks!
I realized this issue had the very helpful debug log and there wasn't any conversation on the other one, so I'm going to leave this open instead. Sorry for the noise!
For anyone else reading this issue: The workaround for this situation is to run apply twice. You can also get the same result without a panic by running a targeted apply to first create the resource that's being referenced in the data source (terraform apply -target azurerm_storage_account.test) and then running a normal apply afterwards.
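Spelled out as commands, using the resource names from the configuration above, that targeted-apply workaround looks like this:

```sh
# First create only the resource that the data source reads from.
terraform apply -target azurerm_storage_account.test

# The diagnostic categories are now known at plan time, so a normal
# apply can create the remaining resources.
terraform apply
```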
We are looking into improving both the workflow and the error message, so we aren't sending a misleading error message in this situation.
https://github.com/hashicorp/terraform/issues/4149 is one of the proposals that might resolve this kind of issue
Hi @magodo
Thanks for the example!
The bug here looks like it is being triggered by the data source not returning any values for logs (which is the case the linked duplicate issue mentions specifically). This is a bug caused by mishandling of the special case of "dynamic" set blocks, which cannot represent a number of unknown values other than 1.
If you were expecting values for data.azurerm_monitor_diagnostic_categories.test.logs, this is likely the common situation of eventual consistency in the remote cloud API. You may be able to test for this by adding a long delay in a local-exec provisioner on azurerm_storage_account.test, allowing the data to propagate. If it is an eventual-consistency problem, it's not something that Terraform will be able to solve: fixing the bug will allow Terraform to apply correctly, but it will take a subsequent plan and apply to pick up the delayed data.
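For reference, a minimal sketch of such a delay, assuming a Unix-like shell with a sleep command on the machine running Terraform (the 120-second value is an arbitrary guess):

```hcl
resource "azurerm_storage_account" "test" {
  name                     = "${var.prefix}sa"
  resource_group_name      = azurerm_resource_group.test.name
  location                 = azurerm_resource_group.test.location
  account_replication_type = "LRS"
  account_tier             = "Standard"

  # Give the Azure API time to start reporting diagnostic categories
  # before the data source is read.
  provisioner "local-exec" {
    command = "sleep 120"
  }
}
```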
Hi @jbardin @mildwonkey
With both terraform v0.13.1 and v0.13.2, this issue still remains:
```
Error: Provider produced inconsistent final plan

When expanding the plan for azurerm_monitor_diagnostic_setting.test to include
new values learned so far during apply, provider
"registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
.log: planned set element
cty.ObjectVal(map[string]cty.Value{"category":cty.UnknownVal(cty.String),
"enabled":cty.True,
"retention_policy":cty.ListValEmpty(cty.Object(map[string]cty.Type{"days":cty.Number,
"enabled":cty.Bool}))}) does not correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Error: Provider produced inconsistent final plan

When expanding the plan for azurerm_monitor_diagnostic_setting.test to include
new values learned so far during apply, provider
"registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
.metric: block set length changed from 1 to 2.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
Hi @magodo! Thank you for letting us know, and I'm sorry this is still cropping up for you. There was a bugfix in 0.13.2, which we just released yesterday, that I think might solve your problem. Can you try it again on 0.13.2? If it is still a problem, please open a new GH issue and fill out the issue template so we can see what's different about your problem. Thanks and sorry again!
Here's the recent merge in question, that's in 0.13.2: https://github.com/hashicorp/terraform/pull/26028
Thanks @magodo
It looks like the fix corrected the initial problem, but that provider and resource still have some unusual properties. The culprit now is that the azurerm_monitor_diagnostic_setting log block has a default value for enabled, which is breaking the dynamic block analysis.
I'll re-open this as a placeholder to research if there is any possibility of detecting and handling this situation directly within terraform itself.
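If relying on that default is indeed the trigger, one possible mitigation (an untested sketch, not something confirmed by the maintainers in this thread) is to set enabled explicitly in the dynamic block content, so the planned and actual set elements agree on that attribute:

```hcl
dynamic "log" {
  for_each = data.azurerm_monitor_diagnostic_categories.test.logs
  content {
    category = log.key
    # Set the attribute explicitly instead of relying on the
    # provider-supplied default that trips up the dynamic-block analysis.
    enabled  = true
  }
}
```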
We ran into a similar-looking problem when using a resource in the Helm provider. Here's a simplified reproduction case:
```hcl
locals {
  settings = {
    static = {
      key1 = "static1"
      key2 = "static2"
    },
    dynamic = {
      key1 = random_id.this.b64_url
    }
  }
}

resource "random_id" "this" {
  byte_length = 8
}

resource "helm_release" "example" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  dynamic "set" {
    for_each = lookup(local.settings, "unknown", {})
    content {
      name  = set.key
      value = set.value
    }
  }
}
```
Note this seems similar to a case filed in the Helm provider repo here: https://github.com/hashicorp/terraform-provider-helm/issues/541.
The first apply fails with the following error:
```
random_id.this: Creating...
random_id.this: Creation complete after 0s [id=MT_lN8gIAN4]

Error: Provider produced inconsistent final plan

When expanding the plan for helm_release.example to include new values learned
so far during apply, provider "registry.terraform.io/hashicorp/helm" produced
an invalid new value for .set: planned set element
cty.ObjectVal(map[string]cty.Value{"name":cty.UnknownVal(cty.String),
"type":cty.StringVal(""), "value":cty.UnknownVal(cty.String)}) does not
correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
The second apply was successful - presumably because the random_id resource had been created in state during the first apply and so could be resolved at plan time during the second apply.
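In keeping with the targeted-apply workaround mentioned earlier in this thread, the following two-step sequence should give the same result without the initial failure (a sketch based on the repro above, not something we verified):

```sh
# Create the resource whose attribute is unknown at plan time...
terraform apply -target random_id.this

# ...then run a normal apply; the dynamic "set" blocks can now be
# fully resolved during planning.
terraform apply
```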
We were testing with the latest Helm provider, v1.3.2, and with Terraform 0.13.4.
With a similar approach but a different resource, an AWS S3 bucket with the AWS provider, we were not able to reproduce the problem:
```hcl
locals {
  settings = {
    static = {
      key1 = "static1"
      key2 = "static2"
    },
    dynamic = {
      key1 = random_id.this.b64_url
    }
  }
}

resource "random_id" "this" {
  byte_length = 8
}

resource "aws_s3_bucket" "test" {
  bucket = "test"

  dynamic "lifecycle_rule" {
    for_each = lookup(local.settings, "unknown", {})
    content {
      id      = lifecycle_rule.key
      prefix  = lifecycle_rule.value
      enabled = true
    }
  }
}
```
In this case, Terraform created the S3 bucket properly and the initial apply succeeded.
For this second case, we tested with Terraform AWS provider versions 2.70 and 3.11 on Terraform 0.12.20 and 0.13.4, with success in each case.
We're curious about the difference in behavior between these two cases. The "provider produced inconsistent final plan" error in the Helm case could make sense if there were a fundamental limitation in Terraform's ability to reconcile the original plan at apply time due to the unresolvable reference (though in theory this seems avoidable, since the unresolved value isn't actually used in the apply anyway). Since this doesn't happen for the AWS resource, however, it seems like the failure we see for the Helm resource might not be expected behavior, and that there might instead be an issue in the Helm provider.
Does the Helm failure in this case seem to be specific to the Helm provider?
Thanks @camlow325,
The helm_release resource and aws_s3_bucket work differently because set is a block of type set, while lifecycle_rule is a list.
From my initial look, the helm_release resource does not appear to be at fault here. That does, however, give me something else to look for when trying to catch this condition in core.
Thanks!
Thanks for the quick response on this, @jbardin. I can confirm from tests in my environment that the problem with terraform apply failing on the first attempted apply of a helm_release resource with a dynamic set block does seem to be fixed by #26638. I tested with a local build of terraform and was no longer able to reproduce the original issue. Thanks again!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.