Terraform-provider-aws: Interpolated + count aws_alb_target_group_attachment.target_id showing changes when increasing count or when reprovisioning a single target

Created on 13 Jun 2017  ·  11 Comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @levinse as hashicorp/terraform#8684. It was migrated here as part of the provider split. The original body of the issue is below._


Given this config:

resource "aws_alb_target_group_attachment" "fe" {
  count = "${var.count}"

  target_group_arn = "${var.alb_target_group_https_arn}"
  target_id = "${element(aws_instance.fe.*.id, count.index)}"
  port = 8080
}

With count = 1, after changing it to 2, I expect terraform plan to show no changes to the existing first resource and to show only the addition of the second. However, terraform shows the following plan output:

-/+ module.fe-app-stg.aws_alb_target_group_attachment.fe.0
    port:             "8080" => "8080"
    target_group_arn: "my ARN value" => "same ARN value"
    target_id:        "my instance id" => "${element(aws_instance.fe.*.id, count.index)}" (forces new resource)

Apparently this will destroy and re-create the first target group attachment when I add a second one. Is that how the AWS API works, or is this a bug?
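For readers hitting the same diff on Terraform 0.11: a common mitigation at the time (a variant of which appears later in this thread) was to suppress the spurious re-interpolation diff with ignore_changes. A minimal sketch against the config above, assuming 0.11-era syntax; note that ignoring target_id also hides legitimate changes to it:

```hcl
resource "aws_alb_target_group_attachment" "fe" {
  count = "${var.count}"

  target_group_arn = "${var.alb_target_group_https_arn}"
  target_id        = "${element(aws_instance.fe.*.id, count.index)}"
  port             = 8080

  # Suppress the "(forces new resource)" diff shown for existing
  # attachments when count changes. Trade-off: real target_id changes
  # are ignored too.
  lifecycle {
    ignore_changes = ["target_id"]
  }
}
```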

Labels: bug, service/elbv2, terraform-0.12


All 11 comments

@mitchellh any news on this? I originally opened this bug a year ago...

Thanks

I've been annoyed by the same thing... would love to see a fix.

Hello? Any news? We are happy to pay to get this fixed.

I ran into this recently and it's extremely annoying -- basically resulting in unnecessary downtime unless you jump through a lot of hoops!

Just ran into this same issue. Any word on when this will be fixed?

Very annoying! Defeats the purpose of load balancing...

I ran into this issue, but I had a count variable for the EC2 instances as well:

resource "aws_instance" "ec2" {
  count                  = "${var.count}"
  ami                    = "${data.aws_ami.centos.id}"
  instance_type          = "${var.instance_type}"
....
}
resource "aws_lb_target_group_attachment" "apache" {
  count            = "${var.count}"
  target_group_arn = "${aws_lb_target_group.target_group.arn}"
  target_id        = "${element(module.apache.ec2_ids, count.index)}"
  port             = 443
}






[Truncated]
-/+ module.apache.aws_lb_target_group_attachment.apache[0] (new resource required)
-/+ module.apache.aws_lb_target_group_attachment.apache[1] (new resource required)
-/+ module.apache.aws_lb_target_group_attachment.apache[2] (new resource required)
-/+ module.apache.aws_lb_target_group_attachment.apache[3] (new resource required)






  lifecycle {
    ignore_changes = true
  }






[Truncated]
+ module.apache.aws_lb_target_group_attachment.apache[4]

+ module.apache.module.apache.aws_instance.ec2[4]






  - module.apache.aws_lb_target_group_attachment.apache[3]

  - module.apache.aws_lb_target_group_attachment.apache[4]

  - module.apache.module.apache.aws_instance.ec2[3]

  - module.apache.module.apache.aws_instance.ec2[4]

I hope this was helpful

I provided some information in a pull request related to this issue about changes in the upcoming Terraform 0.12 release that will help with this situation: https://github.com/terraform-providers/terraform-provider-aws/pull/1726#issuecomment-418481437

this is a huge pain. When is this expected to be fixed?

Hi folks 👋 This issue is resolved in Terraform 0.12.6 and later, which supports new functionality in the configuration language aimed at solving problems like these. The new resource-level for_each argument indexes resources in the Terraform state by a string map or set, rather than by the simple numeric list used with the resource-level count argument. Resources switched from count to for_each no longer have issues with elements being removed from the middle of a list, or with elements being rearranged in general, because the resource index keys are stable.
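The for_each approach described above can be sketched as follows. This is a hypothetical example for Terraform 0.12.6+; the variable name, instance keys, and AMI ID are illustrative assumptions, not part of the original configurations in this thread:

```hcl
# Keys are stable strings, so adding or removing an instance never
# shifts the state index of the other attachments.
variable "fe_instances" {
  type    = set(string)
  default = ["fe-a", "fe-b"]
}

resource "aws_instance" "fe" {
  for_each      = var.fe_instances
  ami           = "ami-12345678" # hypothetical AMI ID
  instance_type = "t3.micro"
}

resource "aws_alb_target_group_attachment" "fe" {
  # Iterate over the instance map produced by the for_each resource
  # above: each.key is the stable string key, each.value the instance.
  for_each         = aws_instance.fe
  target_group_arn = var.alb_target_group_https_arn
  target_id        = each.value.id
  port             = 8080
}
```

With this layout, growing the set to ["fe-a", "fe-b", "fe-c"] plans only additions; the existing attachments keep their keys and are left untouched.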

If you're looking for general assistance with implementing for_each in this situation, please note that we use GitHub issues in this repository for tracking bugs and enhancements with the Terraform AWS Provider codebase, rather than for questions. While we may be able to help with certain simple problems here, it's generally better to use the community forums, where there are far more people ready to help; the GitHub issues here are generally monitored only by a few maintainers and dedicated community members interested in code development of the Terraform AWS Provider itself.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
