This is part of a bigger module; I will try to copy out all the relevant parts.
resource "aws_autoscaling_group" "elb" {
count = var.lb_type=="elb" ? 1 : 0
lifecycle {
create_before_destroy = true
}
availability_zones = local.azs
name = aws_launch_configuration.launch_configuration.name
min_size = lookup(var.asg_properties,"min_size", 1)
max_size = lookup(var.asg_properties,"max_size", 1)
min_elb_capacity = lookup(var.asg_properties,"min_elb_capacity", 1)
health_check_grace_period = lookup(var.asg_properties,"health_check_grace_period", 300)
desired_capacity = lookup(var.asg_properties,"desired_capacity", 1)
health_check_type = lookup(var.asg_properties,"health_check_type", "ELB")
launch_configuration = aws_launch_configuration.launch_configuration.name
vpc_zone_identifier = split(",",local.private_subnets)
load_balancers = [aws_elb.elb[0].id]
wait_for_capacity_timeout = lookup(var.asg_properties,"wait_for_capacity_timeout", "300s")
dynamic "tag" {
for_each = local.tags
content {
propagate_at_launch = true
key = element(keys(tag),0)
value = element(values(tag),0)
}
}
}
locals {
  subnets_choice = split(",", (var.lb_internal ? local.private_subnets : local.public_subnets))
  eip_choice     = length(var.nlb_elastic_ip_allocation_id) == 0 ? aws_eip.nlb.*.allocation_id : var.nlb_elastic_ip_allocation_id
}
locals {
  public_subnets   = join(",", data.terraform_remote_state.network.outputs["subnets-public.subnet_id"])
  private_subnets  = join(",", data.terraform_remote_state.network.outputs["subnets-private.subnet_id"])
  database_subnets = join(",", data.terraform_remote_state.network.outputs["subnets-database.subnet_id"])
  tags = merge(
    {
      "Name"                      = var.application_name,
      "Region"                    = var.region,
      "Deployment_folder_in_repo" = replace(path.root, "/^(.*)(/.*/platform/apps/.*)/", "$2"),
      "Deployment_time_UTC"       = timestamp(),
    },
    var.unspecific_tags)
  azs = slice(data.aws_availability_zones.available.names, 0, var.amount_azs)
}
variable "asg_properties" {
description = "Properties map for the autoscaling group. If a map is provided with keys missing, the following defaults will be used. For explanation of the keys, reference terraform documentation for aws_auto_scaling_group"
type = map(string)
default = {
min_size = "1"
max_size = "1"
min_elb_capacity = "1"
health_check_grace_period = "300"
desired_capacity = "1"
health_check_type = "ELB"
wait_for_capacity_timeout = "300"
}
}
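For illustration, a caller could override a single key; since the resource reads every value through lookup() with a per-key default, the remaining keys fall back to those hard-coded defaults (a hypothetical tfvars snippet):
# Hypothetical tfvars: only max_size is set; the lookup() calls in the
# resource supply the defaults for every other key.
asg_properties = {
  max_size = "4"
}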
variable "unspecific_tags" {
description = "Tags that are feeded to the module for more extensive tagging of resources. They are unspecific to the deployment and defined environment-wide in the tfvars for this deployment"
default = {}
type = map(string)
}
variable "build_uri" {
description = "Build_uri of the jenkins-job. Is feeded to the module by jenkins"
default = ""
type = string
}
variable "lb_type" {
description = "Important. Can be strings of value 'elb','alb','nlb' or 'none'. Decides, what type of loadbalancer will be created. The variables specific to ELB or ALB/NLB deployment must be provided, or the deployment will fail"
default = "elb"
type = string
}
variable "lb_internal" {
description = "Important. Can be true or false. Toggles, whether loadbalancers are supposed to be placed in public or private subnets. For 'lb_type=none', it decides whether the instances of the ASG are placed in public or private subnets"
default = true
type = bool
}
The following block:
dynamic "tag" {
for_each = local.tags
content {
propagate_at_launch = true
key = element(keys(tag),0)
value = element(values(tag),0)
}
}
should have iterated through each key-value pair of the local.tags map and created a tag block for each one, including the propagate_at_launch = true key-value pair.
10:49:57 Error: Provider produced inconsistent final plan
10:49:57
10:49:57 When expanding the plan for module.application.aws_autoscaling_group.elb[0] to
10:49:57 include new values learned so far during apply, provider "aws" produced an
10:49:57 invalid new value for .tag: block set length changed from 1 to 12.
10:49:57
10:49:57 This is a bug in the provider, which should be reported in the provider's own
10:49:57 issue tracker.
Steps to reproduce: run terraform apply.
Am I misunderstanding maps when I assumed they would be iterable per key-value pair?
dynamic "tag" {
for_each = local.tags
content {
key = tag.key
value = tag.value
propagate_at_launch = true
}
}
According to HashiCorp's example, the above dynamic block configuration should work, but it doesn't in my case; the error is the same as with my previous code.
While I haven't reproduced this exactly, the root cause here is likely https://github.com/hashicorp/terraform/issues/20517, which causes the dynamic block to be updated with the incorrect values.
I am also facing a similar issue in Terraform 0.12.0 beta2. In my case it is the Google provider, where I'm trying to add attached disks using a dynamic block and getting a similar error.
locals {
  secondary_disks = slice(list(google_compute_disk.secondary_disk_0, google_compute_disk.secondary_disk_1, google_compute_disk.secondary_disk_2), 0, length(var.secondaryDisks))
}

# Compute Instance
resource "google_compute_instance" "instance" {
  project      = var.project
  count        = var.instanceCount
  name         = random_id.instance_random_id[count.index].dec
  machine_type = var.machineType
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = var.osImage
      size  = var.primaryDiskSize
      type  = var.primaryDiskType
    }
  }

  dynamic "attached_disk" {
    for_each = local.secondary_disks
    content {
      source      = format("%s", attached_disk.value[count.index].name)
      mode        = "READ_WRITE"
      device_name = format("%s%s", attached_disk.value[count.index].name, "-device")
    }
  }

  network_interface {
    subnetwork = var.subNetworkName
  }

  depends_on = [google_compute_disk.secondary_disk_0, google_compute_disk.secondary_disk_1, google_compute_disk.secondary_disk_2]

  service_account {
    scopes = var.service_account_scopes
  }
}
2019/04/22 13:00:58 [WARN] Provider "google" produced an invalid plan for module.gcp_instance_0_us-central1-a.google_compute_instance.instance[0], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .deletion_protection: planned value cty.False does not match config value cty.NullVal(cty.Bool)
- .can_ip_forward: planned value cty.False does not match config value cty.NullVal(cty.Bool)
- .scheduling: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
- .boot_disk[0].auto_delete: planned value cty.True does not match config value cty.NullVal(cty.Bool)
2019/04/22 13:00:58 [TRACE] module.gcp_instance_0_us-central1-a: eval: *terraform.EvalCheckPlannedChange
2019/04/22 13:00:58 [TRACE] EvalCheckPlannedChange: Verifying that actual change (action Create) matches planned change (action Create)
2019/04/22 13:00:58 [ERROR] module.gcp_instance_0_us-central1-a: eval: *terraform.EvalCheckPlannedChange, err: Provider produced inconsistent final plan: When expanding the plan for module.gcp_instance_0_us-central1-a.google_compute_instance.instance[0] to include new values learned so far during apply, provider "google" produced an invalid new value for .attached_disk: block count changed from 1 to 2.
This is a bug in the provider, which should be reported in the provider's own issue tracker.
2019/04/22 13:00:58 [ERROR] module.gcp_instance_0_us-central1-a: eval: *terraform.EvalSequence, err: Provider produced inconsistent final plan: When expanding the plan for module.gcp_instance_0_us-central1-a.google_compute_instance.instance[0] to include new values learned so far during apply, provider "google" produced an invalid new value for .attached_disk: block count changed from 1 to 2.
This is a bug in the provider, which should be reported in the provider's own issue tracker.
2019/04/22 13:00:58 [TRACE] [walkApply] Exiting eval tree: module.gcp_instance_0_us-central1-a.google_compute_instance.instance[0]
2019/04/22 13:00:58 [TRACE] vertex "module.gcp_instance_0_us-central1-a.google_compute_instance.instance[0]": visit complete
Error: Provider produced inconsistent final plan
Expected behavior: google_compute_instance should be created with the list of attached_disks during the first terraform apply.
Actual behavior: google_compute_instance is only created with the list of attached_disks on the second terraform apply.
Steps to reproduce: 1) Use the Terraform template from the configuration files above and run terraform apply; you will get the error mentioned in the debug output.
This is still relevant with aws provider version 2.11 and terraform 0.12.0-rc1.
Can I assist by providing any additional information?
Running into the same issue with aws provider 2.14.0 and terraform 0.12.1. @theTranqu, did you ever find a workaround?
@apparentlymart, should https://github.com/hashicorp/terraform/pull/21193 have fixed this issue here as well?
There is not really a workaround that I know of. Before TF12, I used my own self-made null-resource tag generator to serve this purpose; sadly, that was not possible anymore with TF12.
Essentially, there is not a good way that I have found at this time.
I found that the following works well in this case:
resource "aws_autoscaling_group" "asg" {
  ...
  tags = [
    for k, v in local.asg_tags : {
      key                 = k
      value               = v
      propagate_at_launch = true
    }
  ]
}
@manfredlift, what type is local.asg_tags in this case? Is it a map with multiple keys/values, as might be expected from the context?
In my case it is a map of type string. Yet my job still fails:
tags = [
  for k, v in var.tags : {
    key                 = k
    value               = v
    propagate_at_launch = true
  }
]
Error: ccp-9ff71c3c-7d01-6f3c-6fe6-7e069cf21832: invalid tag attributes: value missing
It is a map(string), e.g.:
locals {
  asg_tags = {
    Foo = "foo"
    Bar = "bar"
  }
}
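For illustration, with that map the for expression above evaluates to a list like this (a sketch; Terraform iterates maps in lexical key order):
# What the for expression produces for local.asg_tags:
tags = [
  {
    key                 = "Bar"
    value               = "bar"
    propagate_at_launch = true
  },
  {
    key                 = "Foo"
    value               = "foo"
    propagate_at_launch = true
  },
]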
That is awkward. I have the tags created as a map outside of the module and feed them as a map(string) to the module. Yet something seems to be wrong with it: apparently some value is missing, and I am curious which one.
locals {
  tags = merge(
    {
      "Name"                      = var.application_name,
      "Region"                    = var.region,
      "Build_uri"                 = var.build_uri,
      "Deployment_folder_in_repo" = replace(path.root, "/^(.*)(/.*/platform/apps/.*)/", "$2"),
      "Deployment_time_UTC"       = timestamp(),
      "Terraform_state_location"  = "#jenkins_repl_backend_bucket#/#jenkins_repl_backend_key#"
    }, var.environment_tags, var.tags)
}
var.environment_tags is just an explicitly created map(string); var.tags is a variable of type map(string) with default = {}. The module can be fed with additional tags this way, but doesn't have to be.
This local.tags is fed to the module where the ASG is created. The module's tags variable is of type map(string), and there are no complaints about the integrity of var.tags, so I am really wondering why the same code that works for you doesn't work for me.
EDIT: I tried removing var.tags, since it is just an empty map and might cause problems, but that did not change anything.
Hmm, maybe test with fewer tags that definitely have a value. Also, in which phase do you get the error? Can you run terraform validate and terraform plan to maybe find the issue?
Also, I am not sure how this for expression interacts with explicitly defined tag blocks in the autoscaling_group resource; maybe you left something with no value in there?
I did indeed find the culprit: an environment variable was not set, so one of the tags (build_uri) got populated with an empty string during our pipeline runs.
Thank you @manfredlift for the workaround, it works perfectly. I tried the dynamic block again, just for good measure, with a valid map(string), but it does indeed still fail with the same old error message. But now we can go with the tags being created in this for expression, which is great!
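As a general guard against this failure mode, one could filter out empty values before the tags ever reach the ASG (a hedged sketch; the local name non_empty_tags is made up):
# Hedged sketch: drop tags with empty values so an unset pipeline
# variable (e.g. build_uri) cannot produce "value missing" errors.
locals {
  non_empty_tags = { for k, v in local.tags : k => v if v != "" }
}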
Hi, I'm experiencing a similar issue with dynamic load balancers being linked to an ECS Service:
locals {
  target_groups = [for arn in var.target_groups : arn if length(arn) != 0]
}

resource "aws_ecs_service" "service" {
  name                              = local.namespace
  task_definition                   = aws_ecs_task_definition.task.arn
  cluster                           = data.aws_ecs_cluster.cluster.arn
  scheduling_strategy               = var.scheduling_strategy
  desired_count                     = var.desired_count
  health_check_grace_period_seconds = var.health_check_grace_period

  dynamic "load_balancer" {
    for_each = local.target_groups
    content {
      target_group_arn = load_balancer.value
      container_name   = var.container_name
      container_port   = var.container_port
    }
  }

  ordered_placement_strategy {
    type  = "spread"
    field = "instanceId"
  }

  lifecycle {
    ignore_changes = [desired_count]
  }
}
This is inside a sub-module; var.target_groups is populated by the parent module with either one or multiple target groups.
The initial plan shows a single load_balancer block with unknown values, and the apply fails with:
Error: Provider produced inconsistent final plan
When expanding the plan for
module.ecs_app.module.ecs_service.aws_ecs_service.service to include new
values learned so far during apply, provider "aws" produced an invalid new
value for .load_balancer: block set length changed from 1 to 2.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Any help would be greatly appreciated :)
Some additional information: the culprit in my case is the filter I'm applying to var.target_groups:
target_groups = [for arn in var.target_groups : arn if length(arn) != 0]
Using the var.target_groups variable directly works as expected. I needed that filter for modularization purposes, but I'm reworking my pipeline to be less modularized in the meantime.
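That matches the pattern elsewhere in this thread: filtering on ARN values that are unknown at plan time makes the list length itself unknown, while an unfiltered variable keeps the length known even when the ARNs are not. A hedged sketch of moving the decision into the parent module (module path and resource names are hypothetical):
# Hedged sketch: decide list membership in the parent module so the
# sub-module receives a list whose length is known at plan time, even
# if the ARN values themselves are not.
module "ecs_service" {
  source        = "./modules/ecs_service"
  target_groups = [aws_lb_target_group.app.arn]
}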
Error: ccp-9ff71c3c-7d01-6f3c-6fe6-7e069cf21832: invalid tag attributes: value missing
For that, see #9049.
Hi folks 👋 The original issue reported here should be resolved as of Terraform CLI 0.12.28 and Terraform AWS Provider 2.70.0 (potentially much sooner in both). If you are still experiencing issues with aws_autoscaling_group resource tags argument handling on those versions or later, please file a new bug report issue including all the details requested in the issue template and we can take a fresh look. Thanks.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!