Terraform: Error retrieving policy: LoadBalancerNotFound: There is no ACTIVE Load Balancer

Created on 16 Mar 2016 · 10 comments · Source: hashicorp/terraform

I'm getting the error:

* aws_lb_cookie_stickiness_policy.INSTANCE: Error retrieving policy: LoadBalancerNotFound: There is no ACTIVE Load Balancer named 'ENVIRONMENT-INSTANCE'
        status code: 400, request id: a9f37732-eb7e-11e5-bdd8-a7fc077a93a7

The relevant config:
resource "aws_elb" "INSTANCE" {
  name = "ENVIRONMENT-INSTANCE"
  subnets = [ "${aws_subnet.private.id}" ]
  internal = true

  listener {
    instance_port = 22
    instance_protocol = "tcp"
    lb_port = 22
    lb_protocol = "tcp"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "TCP:22"
    interval = 30
  }

  cross_zone_load_balancing = false
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "ENVIRONMENT-INSTANCE"
  }
}

resource "aws_lb_cookie_stickiness_policy" "INSTANCE" {
  name = "ENVIRONMENT-INSTANCE"
  load_balancer = "${aws_elb.INSTANCE.id}"
  lb_port = 22
  cookie_expiration_period = 600
}

Terraform needs to create the aws_elb first, THEN create the aws_lb_cookie_stickiness_policy, but it doesn't order them that way.
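
(For anyone hitting ordering problems like this, an explicit depends_on can force the ELB to be created before the policy. This is a sketch in the issue-era 0.6 syntax; note that the `${aws_elb.INSTANCE.id}` interpolation should already imply this dependency, so this is normally redundant:)

```hcl
resource "aws_lb_cookie_stickiness_policy" "INSTANCE" {
  name                     = "ENVIRONMENT-INSTANCE"
  load_balancer            = "${aws_elb.INSTANCE.id}"
  lb_port                  = 22
  cookie_expiration_period = 600

  # Normally redundant: the interpolation in load_balancer above
  # already creates an implicit dependency on aws_elb.INSTANCE.
  depends_on = ["aws_elb.INSTANCE"]
}
```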

This is the latest master (1bfa436); it doesn't work with 0.6.11 either, though.

Labels: bug, provider/aws

Most helpful comment

For me, I was mixing the old "Elastic Load Balancing (ELB Classic)" aws_lb_cookie_stickiness_policy, with the new "Elastic Load Balancing v2 (ALB/NLB)" aws_lb - which obviously doesn't work (aws_lb naming for classic is misleading though!).

_Be aware of the difference._

I should've been using aws_lb_target_group.stickiness instead.

All 10 comments

The only way to recover from this now, even if I comment these out in the config, is to edit .terraform/terraform.tfstate manually and remove them from there.
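
(Worth noting: later Terraform releases, 0.7 and up, grew a `terraform state rm` subcommand that removes a resource from state without hand-editing the state file. A sketch using the resource address from the config above:)

```shell
# Drop the stale policy from state so the next plan recreates it cleanly,
# instead of editing terraform.tfstate by hand
terraform state rm aws_lb_cookie_stickiness_policy.INSTANCE
```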

Hi @FransUrbo - thanks for the report.

Testing your provided config (with a VPC and Subnet added):

resource "aws_vpc" "foo" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.foo.id}"
  cidr_block = "10.0.0.0/24"
}

resource "aws_elb" "INSTANCE" {
  name     = "ENVIRONMENT-INSTANCE"
  subnets  = [ "${aws_subnet.private.id}" ]
  internal = true

  listener {
    instance_port     = 22
    instance_protocol = "tcp"
    lb_port           = 22
    lb_protocol       = "tcp"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "TCP:22"
    interval            = 30
  }

  cross_zone_load_balancing   = false
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "ENVIRONMENT-INSTANCE"
  }
}

resource "aws_lb_cookie_stickiness_policy" "INSTANCE" {
  name                     = "ENVIRONMENT-INSTANCE"
  load_balancer            = "${aws_elb.INSTANCE.id}"
  lb_port                  = 22
  cookie_expiration_period = 600
}

I get a different error message:

Error applying plan:

1 error(s) occurred:

* aws_lb_cookie_stickiness_policy.INSTANCE: Error setting LBCookieStickinessPolicy: InvalidConfigurationRequest: ENVIRONMENT-INSTANCE can be associated only with a listener with one of HTTP, HTTPS as frontend protocol
        status code: 409, request id: 8152cf0d-ebaf-11e5-900f-313d79e903d1

Correcting for that error message by changing the listener to HTTP, the following config successfully applies for me:

resource "aws_vpc" "foo" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.foo.id}"
  cidr_block = "10.0.0.0/24"
}

resource "aws_elb" "INSTANCE" {
  name     = "ENVIRONMENT-INSTANCE"
  subnets  = [ "${aws_subnet.private.id}" ]
  internal = true

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/healthcheck"
    interval            = 30
  }

  cross_zone_load_balancing   = false
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "ENVIRONMENT-INSTANCE"
  }
}

resource "aws_lb_cookie_stickiness_policy" "INSTANCE" {
  name                     = "ENVIRONMENT-INSTANCE"
  load_balancer            = "${aws_elb.INSTANCE.id}"
  lb_port                  = 80
  cookie_expiration_period = 600
}

Let me know if the above sheds any light on things from your side. :+1:

But did you start from scratch after changing the aws_lb_cookie_stickiness_policy resource? Because the second time, it works for me too (with the same change as you - that might have slipped by me when I cut-and-pasted).

The problem only arises the FIRST time (as in, "from scratch").

Yep, both of the above examples - the error message and the successful apply - were first-run results.

Just reran `terraform destroy -force; terraform apply` on that final config and got:

aws_vpc.foo: Creating...
  cidr_block:                "" => "10.0.0.0/16"
  default_network_acl_id:    "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  dhcp_options_id:           "" => "<computed>"
  enable_classiclink:        "" => "<computed>"
  enable_dns_hostnames:      "" => "<computed>"
  enable_dns_support:        "" => "<computed>"
  main_route_table_id:       "" => "<computed>"
aws_vpc.foo: Creation complete
aws_subnet.private: Creating...
  availability_zone:       "" => "<computed>"
  cidr_block:              "" => "10.0.0.0/24"
  map_public_ip_on_launch: "" => "0"
  vpc_id:                  "" => "vpc-4bbc442f"
aws_subnet.private: Creation complete
aws_elb.INSTANCE: Creating...
  availability_zones.#:                   "" => "<computed>"
  connection_draining:                    "" => "1"
  connection_draining_timeout:            "" => "400"
  cross_zone_load_balancing:              "" => "0"
  dns_name:                               "" => "<computed>"
  health_check.#:                         "" => "1"
  health_check.0.healthy_threshold:       "" => "2"
  health_check.0.interval:                "" => "30"
  health_check.0.target:                  "" => "HTTP:80/healthcheck"
  health_check.0.timeout:                 "" => "3"
  health_check.0.unhealthy_threshold:     "" => "2"
  idle_timeout:                           "" => "400"
  instances.#:                            "" => "<computed>"
  internal:                               "" => "1"
  listener.#:                             "" => "1"
  listener.3057123346.instance_port:      "" => "80"
  listener.3057123346.instance_protocol:  "" => "http"
  listener.3057123346.lb_port:            "" => "80"
  listener.3057123346.lb_protocol:        "" => "http"
  listener.3057123346.ssl_certificate_id: "" => ""
  name:                                   "" => "ENVIRONMENT-INSTANCE"
  security_groups.#:                      "" => "<computed>"
  source_security_group:                  "" => "<computed>"
  source_security_group_id:               "" => "<computed>"
  subnets.#:                              "" => "1"
  subnets.1783237449:                     "" => "subnet-04b9895d"
  tags.#:                                 "" => "1"
  tags.Name:                              "" => "ENVIRONMENT-INSTANCE"
  zone_id:                                "" => "<computed>"
aws_elb.INSTANCE: Creation complete
aws_lb_cookie_stickiness_policy.INSTANCE: Creating...
  cookie_expiration_period: "" => "600"
  lb_port:                  "" => "80"
  load_balancer:            "" => "ENVIRONMENT-INSTANCE"
  name:                     "" => "ENVIRONMENT-INSTANCE"
aws_lb_cookie_stickiness_policy.INSTANCE: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Hmm, weird. Ok, I'll close it for now and I'll investigate further if/when it happens again.

Thanks for the look!

Try to run it, then delete the ELB from the AWS Console, then run TF again and see if that changes things.

Because if, after I have the whole TF setup applying without problems, I go in and delete all the ELBs in the Console and rerun TF, I get this error. If I then remove all the references to the stickiness policies and schedules, everything works.

So I think it's TF that is/gets confused somehow.

I saw that this issue was resolved in #7065.

Closed via #7166 sorry for not updating this sooner

For me, I was mixing the old "Elastic Load Balancing (ELB Classic)" aws_lb_cookie_stickiness_policy, with the new "Elastic Load Balancing v2 (ALB/NLB)" aws_lb - which obviously doesn't work (aws_lb naming for classic is misleading though!).

_Be aware of the difference._

I should've been using aws_lb_target_group.stickiness instead.
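
(For reference, a minimal sketch of lb_cookie stickiness on an ALB target group; the resource name and the `aws_vpc.foo` reference are illustrative, not from this thread:)

```hcl
resource "aws_lb_target_group" "example" {
  name     = "example-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.foo.id}"

  # ALB-era replacement for aws_lb_cookie_stickiness_policy
  stickiness {
    type            = "lb_cookie"
    cookie_duration = 600 # seconds, analogous to cookie_expiration_period
  }
}
```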

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

