_This issue was originally opened by @andreaske76 as hashicorp/terraform#18753. It was migrated here as a result of the provider split. The original body of the issue is below._
I got the following error message after an NLB subnet change / NLB recreation:
```
aws_lb.nlb_service: Destroying... (ID: arn:aws:elasticloadbalancing:eu-west-1:...ailRelay-vpc-xxxxxxx/xxxxxxxx)

Error: Error applying plan:

1 error(s) occurred:

* aws_lb.nlb_service (destroy): 1 error(s) occurred:

* aws_lb.nlb_service: Error deleting LB: ResourceInUse: Load balancer 'arn:aws:elasticloadbalancing:eu-west-1:xxxxxxxxxxxxx:loadbalancer/net/MailRelay-vpc-xxxxxxxxx/xxxxxxxxxxxxxxx' cannot be deleted because it is currently associated with another service
	status code: 400, request id: 1a9fced9-xxxx-11e8-845c-2944ad39d000

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
My Terraform code is:
```hcl
resource "aws_lb" "nlb_service" {
  name               = "${var.ELB-name}-${var.VPCId}"
  internal           = true
  load_balancer_type = "network"
  subnets            = ["${var.subnet01_az1_id}", "${var.subnet01_az2_id}", "${var.subnet01_az3_id}"]
}

resource "aws_lb_target_group" "target_group_nlb_service" {
  name        = "${var.ELB-target-group-name}-${var.VPCId}"
  port        = "${var.tcp_service}"
  protocol    = "TCP"
  vpc_id      = "${var.VPCId}"
  target_type = "ip"
}

resource "aws_lb_target_group_attachment" "target_group_attach_nlb_service" {
  count             = "${var.NLB_target_count}"
  availability_zone = "all"
  target_group_arn  = "${aws_lb_target_group.target_group_nlb_service.arn}"
  target_id         = "${element(var.NLB_target_IPs, count.index)}"
  port              = "${var.tcp_service}"
}

resource "aws_lb_listener" "nlb_service-listener" {
  load_balancer_arn = "${aws_lb.nlb_service.arn}"
  port              = "${var.tcp_service}"
  protocol          = "TCP"

  default_action {
    target_group_arn = "${aws_lb_target_group.target_group_nlb_service.arn}"
    type             = "forward"
  }
}

resource "aws_vpc_endpoint_service" "endpoint_service_nlb_service" {
  acceptance_required        = true
  network_load_balancer_arns = ["${aws_lb.nlb_service.arn}"]
}
```
I think the `aws_vpc_endpoint_service` does not disassociate the NLB before the NLB is recreated.
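One possible mitigation (untested sketch, not part of the original report): reverse the replacement order with `create_before_destroy`, so the endpoint service can be re-pointed at the new NLB ARN before the old NLB is destroyed. Because NLB names must be unique, this requires `name_prefix` (shown here with a hypothetical prefix) instead of the fixed `name`:

```hcl
# Sketch only: create_before_destroy makes Terraform create the replacement
# NLB and update its dependents (the endpoint service) before deleting the
# old NLB, which should avoid the ResourceInUse error.
resource "aws_lb" "nlb_service" {
  # name_prefix is a hypothetical substitute for the fixed name above;
  # a fixed name would collide while both load balancers briefly coexist.
  name_prefix        = "mailr-"
  internal           = true
  load_balancer_type = "network"
  subnets            = ["${var.subnet01_az1_id}", "${var.subnet01_az2_id}", "${var.subnet01_az3_id}"]

  lifecycle {
    create_before_destroy = true
  }
}
```

The trade-off is losing the deterministic load balancer name, since `name_prefix` appends a random suffix.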
Hi @andreaske76 👋 Sorry you ran into trouble here.

Are you able to provide the full `terraform plan` output? I'm curious whether the `aws_vpc_endpoint_service` change is noted.
That's my `terraform plan` output:
```
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_lb_target_group.target_group_nlb_service: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...ailRelay-vpc-4b4dxxxx/35ef24ccbexxxxxx)
aws_lb.nlb_service: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...ailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx)
aws_vpc_endpoint_service.endpoint_service_nlb_service: Refreshing state... (ID: vpce-svc-056542ff7eb990xxx)
aws_lb_listener.nlb_service-listener: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...1e2d/f014f36xxxxxxxxx/7eb7c4667ce5c383)
aws_lb_target_group_attachment.target_group_attach_nlb_service[3]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132200667200000014)
aws_lb_target_group_attachment.target_group_attach_nlb_service[4]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159633200000005)
aws_lb_target_group_attachment.target_group_attach_nlb_service[2]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159738200000008)
aws_lb_target_group_attachment.target_group_attach_nlb_service[15]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159574900000003)
aws_lb_target_group_attachment.target_group_attach_nlb_service[0]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913215996010000000d)
aws_lb_target_group_attachment.target_group_attach_nlb_service[5]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132200183200000010)
aws_lb_target_group_attachment.target_group_attach_nlb_service[18]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913220003950000000f)
aws_lb_target_group_attachment.target_group_attach_nlb_service[8]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159652400000006)
aws_lb_target_group_attachment.target_group_attach_nlb_service[17]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159774300000009)
aws_lb_target_group_attachment.target_group_attach_nlb_service[10]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132201301200000016)
aws_lb_target_group_attachment.target_group_attach_nlb_service[12]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913215984480000000a)
aws_lb_target_group_attachment.target_group_attach_nlb_service[11]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159592200000004)
aws_lb_target_group_attachment.target_group_attach_nlb_service[13]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159512000000001)
aws_lb_target_group_attachment.target_group_attach_nlb_service[9]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159720300000007)
aws_lb_target_group_attachment.target_group_attach_nlb_service[14]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913215991690000000c)
aws_lb_target_group_attachment.target_group_attach_nlb_service[20]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913215998930000000e)
aws_lb_target_group_attachment.target_group_attach_nlb_service[6]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132200337900000013)
aws_lb_target_group_attachment.target_group_attach_nlb_service[16]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132201140900000015)
aws_lb_target_group_attachment.target_group_attach_nlb_service[1]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132200260500000012)
aws_lb_target_group_attachment.target_group_attach_nlb_service[7]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-2018071913215989610000000b)
aws_lb_target_group_attachment.target_group_attach_nlb_service[19]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132200210800000011)
aws_lb_target_group_attachment.target_group_attach_nlb_service[21]: Refreshing state... (ID: arn:aws:elasticloadbalancing:eu-west-1:...4ccbexxxxxx-20180719132159523800000002)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_lb.nlb_service (new resource required)
      id:                               "arn:aws:elasticloadbalancing:eu-west-1:514704127851:loadbalancer/net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx" => <computed> (forces new resource)
      arn:                              "arn:aws:elasticloadbalancing:eu-west-1:514704127851:loadbalancer/net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx" => <computed>
      arn_suffix:                       "net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx" => <computed>
      dns_name:                         "MailRelay-vpc-4b4dxxxx-f014f36xxxxxxxxx.elb.eu-west-1.amazonaws.com" => <computed>
      enable_cross_zone_load_balancing: "false" => "false"
      enable_deletion_protection:       "false" => "false"
      internal:                         "true" => "true"
      ip_address_type:                  "ipv4" => <computed>
      load_balancer_type:               "network" => "network"
      name:                             "MailRelay-vpc-4b4dxxxx" => "MailRelay-vpc-4b4dxxxx"
      security_groups.#:                "0" => <computed>
      subnet_mapping.#:                 "3" => <computed>
      subnets.#:                        "3" => "3"
      subnets.1140074952:               "subnet-1d3cb2xx" => "" (forces new resource)
      subnets.1726784642:               "subnet-afbe75xx" => "" (forces new resource)
      subnets.1780564593:               "" => "subnet-583db3xx" (forces new resource)
      subnets.1880408859:               "" => "subnet-88b279xx" (forces new resource)
      subnets.3082521147:               "" => "subnet-26aa33xx" (forces new resource)
      subnets.3232097513:               "subnet-b5b62fxx" => "" (forces new resource)
      vpc_id:                           "vpc-4b4d1exx" => <computed>
      zone_id:                          "Z2IFOLAFXWLOXX" => <computed>

-/+ aws_lb_listener.nlb_service-listener (new resource required)
      id:                                "arn:aws:elasticloadbalancing:eu-west-1:XXXXXXXXXXXX:listener/net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx/7eb7c4667ce5c383" => <computed> (forces new resource)
      arn:                               "arn:aws:elasticloadbalancing:eu-west-1:XXXXXXXXXXXX:listener/net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx/7eb7c4667ce5c383" => <computed>
      default_action.#:                  "1" => "1"
      default_action.0.target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:XXXXXXXXXXXX:targetgroup/MailRelay-vpc-4b4dxxxx/35ef24ccbexxxxxx" => "arn:aws:elasticloadbalancing:eu-west-1:514704127851:targetgroup/MailRelay-vpc-4b4dxxxx/35ef24ccbexxxxxx"
      default_action.0.type:             "forward" => "forward"
      load_balancer_arn:                 "arn:aws:elasticloadbalancing:eu-west-1:XXXXXXXXXXXX:loadbalancer/net/MailRelay-vpc-4b4dxxxx/f014f36xxxxxxxxx" => "${aws_lb.nlb_service.arn}" (forces new resource)
      port:                              "25" => "25"
      protocol:                          "TCP" => "TCP"
      ssl_policy:                        "" => <computed>

  ~ aws_vpc_endpoint_service.endpoint_service_nlb_service
      network_load_balancer_arns.#: "" => <computed>

Plan: 2 to add, 1 to change, 2 to destroy.
```
Is this bug going to be assigned?
I had the same issue when trying to add additional subnets to an NLB that was associated with an endpoint service. The plan correctly showed the subnet change forcing replacement of the NLB, as well as the corresponding update to the endpoint service (new NLB ARN).
Output plan
```
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.proxy.aws_lb.nlb must be replaced
-/+ resource "aws_lb" "nlb" {
      ~ arn                              = "arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d" -> (known after apply)
      ~ arn_suffix                       = "net/bi-proxy-res/55938f006b539f3d" -> (known after apply)
      ~ dns_name                         = "bi-proxy-res-xxx.elb.us-east-1.amazonaws.com" -> (known after apply)
        enable_cross_zone_load_balancing = true
        enable_deletion_protection       = false
      ~ id                               = "arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d" -> (known after apply)
        internal                         = true
      ~ ip_address_type                  = "ipv4" -> (known after apply)
        load_balancer_type               = "network"
        name                             = "bi-proxy-res"
      ~ security_groups                  = [] -> (known after apply)
      ~ subnets                          = [ # forces replacement
          + "subnet-041dbefd3ec5b3a7a",
          + "subnet-0c67e2d5dcd4d5487",
          + "subnet-0fcf989be1b877c80",
            "subnet-1721bd7b",
            "subnet-53869709",
            "subnet-b33ce08c",
        ]
        tags                             = {
            "application" = "infra-service-endpoints-bi"
            "cost_centre" = "bi"
            "environment" = "res"
            "repository"  = "terraform/services/proxies-from-bi"
        }
      ~ vpc_id                           = "vpc-xxx" -> (known after apply)
      ~ zone_id                          = "xxx" -> (known after apply)

      - access_logs {
          - enabled = false -> null
        }

      - subnet_mapping {
          - subnet_id = "subnet-1721bd7b" -> null
        }
      - subnet_mapping {
          - subnet_id = "subnet-53869709" -> null
        }
      - subnet_mapping {
          - subnet_id = "subnet-b33ce08c" -> null
        }
      + subnet_mapping {
          + allocation_id = (known after apply)
          + subnet_id     = (known after apply)
        }
    }

  # module.proxy.aws_lb_listener.proxies["redshift"] must be replaced
-/+ resource "aws_lb_listener" "proxies" {
      ~ arn               = "arn:aws:elasticloadbalancing:us-east-1:xxx:listener/net/bi-proxy-res/55938f006b539f3d/f0fcbf2ff2f764e5" -> (known after apply)
      ~ id                = "arn:aws:elasticloadbalancing:us-east-1:xxx:listener/net/bi-proxy-res/55938f006b539f3d/f0fcbf2ff2f764e5" -> (known after apply)
      ~ load_balancer_arn = "arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d" -> (known after apply) # forces replacement
        port              = 5439
        protocol          = "TCP"
      + ssl_policy        = (known after apply)

      ~ default_action {
          ~ order            = 1 -> (known after apply)
            target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:xxx:targetgroup/bi-proxy-res-redshift/65e682dc094780b6"
            type             = "forward"
        }
    }

  # module.proxy.aws_vpc_endpoint_service.api_proxies will be updated in-place
  ~ resource "aws_vpc_endpoint_service" "api_proxies" {
        acceptance_required        = false
        allowed_principals         = [
            "arn:aws:iam::xxx:root",
        ]
        availability_zones         = [
            "us-east-1a",
            "us-east-1c",
            "us-east-1e",
        ]
        base_endpoint_dns_names    = [
            "vpce-svc-xxx.us-east-1.vpce.amazonaws.com",
        ]
        id                         = "vpce-svc-xxx"
        manages_vpc_endpoints      = false
      ~ network_load_balancer_arns = [
          - "arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d",
        ] -> (known after apply)
        service_name               = "xxx"
        service_type               = "Interface"
        state                      = "Available"
        tags                       = {
            "Name"        = "bi-proxy-res"
            "application" = "infra-service-endpoints-bi"
            "cost_centre" = "bi"
            "environment" = "res"
            "repository"  = "terraform/services/proxies-from-bi"
        }
    }
```
However, the apply failed to delete the NLB, which resulted in:
Apply output
```
module.proxy.aws_lb_listener.proxies["redshift"]: Destroying... [id=arn:aws:elasticloadbalancing:us-east-1:xxx:listener/net/bi-proxy-res/55938f006b539f3d/f0fcbf2ff2f764e5]
module.proxy.aws_autoscaling_group.proxies: Modifying... [id=bi-proxy-res20200224134419632600000001]
module.proxy.aws_lb_listener.proxies["redshift"]: Destruction complete after 1s
module.proxy.aws_lb.nlb: Destroying... [id=arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d]
module.proxy.aws_security_group_rule.tcp_to_proxy_asg["redshift"]: Destruction complete after 1s
module.proxy.aws_security_group_rule.tcp_to_proxy_asg["redshift"]: Creating...
module.proxy.aws_autoscaling_group.proxies: Modifications complete after 2s [id=bi-proxy-res20200224134419632600000001]
module.proxy.aws_security_group_rule.tcp_to_proxy_asg["redshift"]: Creation complete after 2s [id=sgrule-1433283772]

Error: Error deleting LB: ResourceInUse: Load balancer 'arn:aws:elasticloadbalancing:us-east-1:xxx:loadbalancer/net/bi-proxy-res/55938f006b539f3d' cannot be deleted because it is currently associated with another service
	status code: 400, request id: 297f684c-e201-4430-ad1b-182ec9edde13
```
Note that this issue is different from #8536, as it applies to the endpoint service rather than the endpoint, so the workaround mentioned there of using an explicit subnet association does not apply here. I had to work around it manually.
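The original list of manual steps did not survive the issue migration; a plausible reconstruction (untested sketch, using the resource address from the plan above and a placeholder service ID) is to delete the endpoint service out-of-band, drop it from state, and re-apply:

```shell
# Untested sketch: remove the endpoint service so the NLB is no longer
# "associated with another service", then let Terraform replace the NLB
# and recreate the endpoint service on the next apply.
# Caveat: deleting the service changes its service name, so consumers
# must recreate their VPC endpoints.
aws ec2 delete-vpc-endpoint-service-configurations --service-ids vpce-svc-xxx
terraform state rm module.proxy.aws_vpc_endpoint_service.api_proxies
terraform apply
```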
Is anyone aware of a workaround that does not require manual steps?