Terraform-provider-aws: Terraform crashes when updating aws_lb resource

Created on 29 Mar 2018  ·  6 comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @suker200 as hashicorp/terraform#17725. It was migrated here as a result of the provider split. The original body of the issue is below._


Hi,

I'm hitting a crash with the error "panic: runtime error: invalid memory address or nil pointer dereference", related to "flattenAwsLbTargetGroupResource".

The underlying error is EOF, and it happens for the "aws_lb" resource with both load_balancer_type = network and application.

Terraform Version

Terraform v0.11.5
+ provider.aws v1.12.0

Terraform Configuration Files

resource "aws_lb" "alb" {
  name            = "${var.name}"
  internal        = "${var.internal_lb == "true" ? true : false }"
  security_groups = ["${aws_security_group.alb.id}"]
  subnets         =  ["${split(",", var.subnets)}"]

  enable_deletion_protection = "${var.delete_protection == "true" ? true : false }"
  idle_timeout                = "${var.idle_timeout}"

  load_balancer_type = "${var.load_balancer_type}"
#  enable_cross_zone_load_balancing = true

  tags { "Name" = "${var.name}" }

  depends_on = ["aws_security_group.alb"]
}

resource "aws_security_group" "alb" {
  name        = "${var.name}"
  vpc_id      = "${var.vpc_id}"

  tags { "Name" = "${var.name}" }

}

resource "aws_lb_target_group" "alb" {
  name     = "${var.name}"
  port     = "${var.tg_port}"
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
  deregistration_delay = "${var.draining_timeout}"

  stickiness {
    type   = "lb_cookie"
    cookie_duration  = "${var.stickiness_cookie_duration}"
    enabled          = "${var.stickiness_enabled == "true" ? true : false }"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 5
    timeout             = 5
    port                = "${var.healthcheck_port}"
    path                = "/${var.healthcheck_path}"
    interval            = 10
    protocol            = "HTTP"
    matcher             = "${var.healthcheck_code}"
  }

  tags { "Name" = "${var.name}" }
}

resource "aws_security_group_rule" "allow_all_https" {
  type            = "ingress"
  from_port       = 443
  to_port         = 443
  protocol        = "tcp"
  cidr_blocks     = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.alb.id}"
}

crash.log

Labels: bug, crash, service/elbv2

All 6 comments

We use Network ELBs and are running into this issue as well, for all of our workspaces, all of which were working fine as of a couple days ago. At first I thought our state was corrupted, but then I noticed even terraform refresh on another workspace (which has not been touched in weeks) is failing with the same error.

I can repro with a fairly minimal example (just running terraform apply for the first time).

Hi :wave: Sorry you ran into trouble here. Due to an ELBv2 service upgrade that started around March 27, 2018, the aws_lb_target_group resource you are using requires an update. The fix has been released in version 1.13.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
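For anyone upgrading, the provider version can be pinned in the configuration so Terraform fetches a release that includes the fix. A minimal sketch in Terraform 0.11 syntax (the region value is an example, not from the original report):

```hcl
# Pin the AWS provider to 1.13.x or newer 1.x releases,
# which include the ELBv2 fix described above.
provider "aws" {
  version = "~> 1.13"
  region  = "us-east-1" # example region, adjust as needed
}
```

After adding the constraint, run `terraform init` so the pinned provider version is downloaded.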

@bflad Figured it was something like that -- confirmed 1.13 works as expected. Thanks for the quick response!

I was experiencing various errors with health checks on a TCP aws_lb_target_group.
Unless I changed the health check protocol to HTTP, I was getting back:

Error: Error applying plan:

2 error(s) occurred:

* aws_lb.my-new-lb: 1 error(s) occurred:

* aws_lb.my-new-lb: unexpected EOF
* aws_lb_target_group.my-new-target: 1 error(s) occurred:

* aws_lb_target_group.my-new-target: unexpected EOF

v1.13 resolved those errors, thank you.
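As context for the health-check errors mentioned above: for TCP target groups (as used with Network Load Balancers), AWS restricts which health-check settings are configurable, which is why switching the protocol changed the behavior. A hedged sketch of a TCP target group with a TCP health check (names and values are illustrative, not from the original report):

```hcl
# Hypothetical TCP target group for an NLB; resource name and
# values are examples only.
resource "aws_lb_target_group" "tcp_example" {
  name     = "tcp-example"
  port     = 443
  protocol = "TCP"
  vpc_id   = "${var.vpc_id}"

  health_check {
    protocol            = "TCP"
    interval            = 30
    healthy_threshold   = 3
    unhealthy_threshold = 3
    # For TCP health checks, AWS does not allow configuring
    # path, matcher, or timeout; omit them here.
  }
}
```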

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
