Terraform-provider-aws: Add a separate ELB listener resource, i.e. aws_elb_listener.

Created on 13 Jun 2017  ·  10 comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @clstokes as hashicorp/terraform#9807. It was migrated here as part of the provider split. The original body of the issue is below._


Currently (as of 0.7.8), ELB listeners are defined and managed as in-line _sub_-resources. This makes it very difficult to create reusable modules, since the number of listeners is fixed in the configuration.

Terraform should support a separate resource for listeners, i.e. aws_elb_listener, so that listeners can be made more modular and can take meta-parameters. This would be similar to the existing aws_elb_attachment and aws_security_group_rule resources.

Current:
resource "aws_elb" "bar" {
   ...
  listener {
    instance_port = 8000
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }
}
Proposed:
resource "aws_elb" "bar" {
}

resource "aws_elb_listener" "bar" {
  instance_port = 8000
  instance_protocol = "http"
  lb_port = 80
  lb_protocol = "http"
}
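To illustrate the motivation, a hypothetical sketch of how a standalone listener resource could take the count meta-parameter. The aws_elb_listener resource, its "elb" argument, and the variables are all assumptions; this resource does not exist.

```hcl
# Hypothetical: aws_elb_listener is the proposed (non-existent) resource,
# and the "elb" argument is an assumed way of attaching it to the ELB.
resource "aws_elb" "bar" {
  # ... no in-line listener blocks ...
}

resource "aws_elb_listener" "http" {
  count = "${length(var.lb_ports)}"   # meta-parameters become possible

  elb               = "${aws_elb.bar.name}"
  instance_port     = "${element(var.instance_ports, count.index)}"
  instance_protocol = "http"
  lb_port           = "${element(var.lb_ports, count.index)}"
  lb_protocol       = "http"
}
```

A module could then accept a variable-length list of ports and create one listener per entry, which the in-line block form does not allow.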
enhancement

Most helpful comment

module "foo" {
  # Probably can't be computed.
  service_ports = [
    {
      instance_port     = "80"
      instance_protocol = "HTTP"
      lb_port           = "80"
      lb_protocol       = "HTTP"
    },
    {
      instance_port     = "6565"
      instance_protocol = "TCP"
      lb_port           = "6565"
      lb_protocol       = "TCP"
    },
  ]
}

And...

resource "aws_elb" "service" {
  name                      = "service-${var.name}"
  subnets                   = ["${var.subnet_ids}"]
  cross_zone_load_balancing = true
  internal                  = true

  # Pass it in as a list.
  listener = ["${var.service_ports}"]

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    interval            = 30
    timeout             = 5
    target              = "${var.service_check_target}"
  }

  tags {
    Name        = "service-${var.name}"
    Environment = "${var.environment}"
    Service     = "${var.name}"
  }
}

Outputs the following on plan...

      listener.#:                             "2"
      listener.3057123346.instance_port:      "80"
      listener.3057123346.instance_protocol:  "HTTP"
      listener.3057123346.lb_port:            "80"
      listener.3057123346.lb_protocol:        "HTTP"
      listener.3057123346.ssl_certificate_id: ""
      listener.3132934786.instance_port:      "6565"
      listener.3132934786.instance_protocol:  "TCP"
      listener.3132934786.lb_port:            "6565"
      listener.3132934786.lb_protocol:        "TCP"
      listener.3132934786.ssl_certificate_id: ""

Seems like this would work fine? I've now got a module that takes a variable number of ports/listeners for the ELB. (This would be easier if ALBs supported TCP for me.)
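For completeness, the missing piece of the sketch above is the module's own variable declaration. A minimal sketch, using 0.x-era type syntax; the description text is an assumption:

```hcl
# Sketch of the module-side declaration matching the invocation above.
# "list" is the 0.x-era type keyword; the description is illustrative.
variable "service_ports" {
  type        = "list"
  description = "List of maps with instance_port, instance_protocol, lb_port, lb_protocol"
}
```

With the variable typed as a list, the caller controls how many listener maps are passed in, and the aws_elb resource simply expands them.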

All 10 comments

+1

+1

Hi, I am using a dynamic list of maps to define the ELB listeners and passing it to a module. There is currently no way to dynamically create multiple listeners without this feature. You can't even use 'count' in the listener block. Please see if this request can be prioritized.
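For illustration, the nested block form does not accept meta-parameters; a hypothetical sketch of what is rejected:

```hcl
# Hypothetical: this does NOT work. Meta-parameters like count are only
# valid on resources, not inside nested blocks such as listener.
resource "aws_elb" "bar" {
  # ...
  listener {
    count             = 2      # invalid: count is not supported here
    instance_port     = 8000
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
```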

@here any update on when this is going to be implemented?

@jkpwb1 could you please provide an example of how you made it work with modules? I am trying to do the same but without any success, using Terraform 0.9.11.

Hey folks,

Paul (@stack72) worked on that here: https://github.com/hashicorp/terraform/pull/10095, but the work was stopped.
As stated by him:

So I just started looking at this and noticed that you can't actually create an AWS ELB without a listener. Therefore, the issue here would be that if we create an aws_elb and give it listener 1, then create an aws_elb_listener and attach it to the aws_elb, then the next time the aws_elb is checked by Terraform it would show 2 listeners and try to move it back to 1

This is what happens with security_group and security_group_rule
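For comparison, the split pattern already used for security groups looks like this: the group is defined without in-line rules, and each rule is a standalone resource. A sketch using the real aws_security_group and aws_security_group_rule resources; names and the CIDR block are illustrative:

```hcl
# The existing split pattern: aws_security_group carries no in-line rules,
# and each rule is its own resource attached by ID.
resource "aws_security_group" "web" {
  name = "web"
}

resource "aws_security_group_rule" "http_in" {
  security_group_id = "${aws_security_group.web.id}"
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 80
  to_port           = 80
  cidr_blocks       = ["0.0.0.0/0"]
}
```

This works because a security group can legally exist with zero rules, which is exactly the property an ELB lacks for listeners.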

There are a few questions to ask there, regarding UX and mechanism. Having some listeners on the ELB and some elsewhere isn't a good way of understanding what's going on, and could lead to real issues.

In my view, as long as listeners aren't a required field, we could easily fix this issue by re-using the work initiated by Paul.

Hi all,
to reiterate what was mentioned above:

Each resource in Terraform is generally represented in the state and elsewhere (in codebase) as an isolated element. For most cases this presents advantages like efficient & fast provisioning via parallelism, clear relationships between resources via graph, resource-specific retries and error handling which allows us to decouple any problems related to specific resource/API etc.

The side effect of this approach is that when there are any overlapping resources (like the imaginary aws_elb_listener and aws_elb.listener) it's non-trivial to figure out which resource is the source of truth.

Secondly, Terraform should always detect drift caused by modifications outside of the config (e.g. manual ones via the AWS Console), and if you had two resources covering listeners for the same ELB it would be impossible to decide where the diff should appear (i.e. whether you intended to define the listener in the first resource or the other).

The only way to differentiate such resources is to have one resource without listeners and one purely for listeners. As both @stack72 and @Ninir mentioned, this is not supported by the API, and I'm not aware of any plans in that area. If AWS decides to change the API and allow managing an ELB without a listener, then we'd be more than happy to revisit this proposal, but for now there's nothing we can do on Terraform's side.

For that reason I'm going to close this.


@joestump it seems that NLB was introduced as the L4 complement to ALB (which is L7). Now all it takes is to write a resource for it :)
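For reference, NLB support later landed in the provider via the aws_lb family, where listeners are already standalone resources. A sketch using the real aws_lb, aws_lb_target_group, and aws_lb_listener resources; names and the subnet/VPC variables are illustrative:

```hcl
# Sketch: an NLB (load_balancer_type = "network") with a TCP listener.
# In the aws_lb family, listeners are standalone aws_lb_listener resources.
resource "aws_lb" "service" {
  name               = "service"
  load_balancer_type = "network"
  internal           = true
  subnets            = ["${var.subnet_ids}"]
}

resource "aws_lb_target_group" "service" {
  name     = "service"
  port     = 6565
  protocol = "TCP"
  vpc_id   = "${var.vpc_id}"
}

resource "aws_lb_listener" "tcp" {
  load_balancer_arn = "${aws_lb.service.arn}"
  port              = 6565
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.service.arn}"
  }
}
```

This gives the variable-number-of-listeners pattern requested in this issue, since each aws_lb_listener can take count.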

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
