Terraform-provider-aws: Cannot delete launch configuration because it is attached to AutoScalingGroup

Created on 30 Apr 2019 · 13 Comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

terraform -v
Terraform v0.11.13
+ provider.aws v2.7.0
+ provider.null v2.1.1
+ provider.random v2.1.1

Affected Resource(s)

  • aws_autoscaling_group
  • aws_launch_configuration

Terraform Configuration Files

directory layout

.
├── main.tf
├── modules
│   ├── app
│   └── terraform-aws-autoscaling

modules/app/main.tf

variable "private_subnets" {
  type    = "list"
}

variable "name" {}
variable "userdata" {}

module "app_asg" {
  source = "../terraform-aws-autoscaling"

  name          = "${var.name}"
  lc_name       = "${var.name}"
  asg_name      = "${var.name}"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"

  root_block_device = [
    {
      volume_size           = "10"
      volume_type           = "gp2"
      delete_on_termination = true
    },
  ]

  user_data                    = "${var.userdata}"
  vpc_zone_identifier          = ["${var.private_subnets}"]
  health_check_type            = "EC2"
  min_size                     = 1
  max_size                     = 1
  desired_capacity             = 1
  wait_for_capacity_timeout    = 0
  recreate_asg_when_lc_changes = true
}

main.tf

First apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

module "app-001" {
  source   = "modules/app"
  name     = "app-001"
  userdata = "echo hello there version 1"
}

Second apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

module "app-001" {
  source   = "modules/app"
  name     = "app-001"
  userdata = "echo hello there version 2" ## <- just changed this
}

Third apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

# module "app-001" {
#   source   = "modules/app"
#   name     = "app-001"
#   userdata = "echo hello there version 2" ## <- just changed this
# }

Debug Output

Panic Output

No

Expected Behavior

Terraform to apply the configuration and delete the commented-out resources successfully.

Actual Behavior

Got an error:

module.app-001.app_asg.aws_autoscaling_group.this: Still destroying... (ID: app-001-tight-bengal-20190430102834949100000002, 1m0s elapsed)
module.app-001.app_asg.aws_autoscaling_group.this: Still destroying... (ID: app-001-tight-bengal-20190430102834949100000002, 1m10s elapsed)
module.app-001.module.app_asg.aws_autoscaling_group.this: Destruction complete after 1m15s

Error: Error applying plan:

1 error(s) occurred:

* module.app-001.module.app_asg.aws_launch_configuration.this (destroy): 1 error(s) occurred:

* aws_launch_configuration.this: error deleting Autoscaling Launch Configuration (app-001-20190430102834111500000001): ResourceInUse: Cannot delete launch configuration app-001-20190430102834111500000001 because it is attached to AutoScalingGroup app-001-tight-bengal-20190430102834949100000002
    status code: 400, request id: 2f89b9c4-6b33-11e9-8d9d-711c4d69f590

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Steps to Reproduce

  1. terraform apply
  2. change configuration as stated above
  3. terraform apply
  4. change configuration as stated above, so we delete the module
  5. terraform apply

Important Factoids

References

Opening a new issue, as the original one was closed.

  • #646
needs-triage service/autoscaling


All 13 comments

Is there a workaround for this?

So I looked into a possible workaround and into whether lifecycle actually does something to fix it:
here's the gist with the code (make sure to have credentials set up if you're copying this).

To summarise here:

If you change the name of the launch configuration, the whole lifecycle hook works out fine at the moment.

I guess the easiest way to do that is to use name_prefix, or to leave both name and name_prefix empty.
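
A minimal sketch of that workaround (0.11 syntax; resource names are illustrative, values borrowed from the report above):

resource "aws_launch_configuration" "this" {
  name_prefix   = "app-001-"          # Terraform appends a unique suffix
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true      # create the replacement before deleting the old LC
  }
}

resource "aws_autoscaling_group" "this" {
  launch_configuration = "${aws_launch_configuration.this.name}"
  min_size             = 1
  max_size             = 1
  vpc_zone_identifier  = ["${var.private_subnets}"]
}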

This didn't fix our problem, even with name_prefix and the lifecycle block in place.

We've started migrating from Launch Configurations to Launch Templates. We have to run TF twice to get the Launch Configurations to delete. I suspect these issues are related, but I can open a new issue if it's requested.

TL;DR of the repro would be:

  1. make vpc
  2. make subnet
  3. make Launch Config
  4. make ASG
  5. attach lc to asg
  6. run TF APPLY
  7. make Launch Template
  8. delete lc
  9. attach LT to ASG
  10. run TF APPLY, receive error:
* aws_launch_configuration.12t_launch_configuration (deposed #0): 1 error(s) occurred:
* aws_launch_configuration.12t_launch_configuration (deposed #0): ResourceInUse: Cannot delete launch configuration terraform-20190626061640500900000008 because it is attached to AutoScalingGroup 12t_staging_asg
status code: 400, request id: b5315459-996b-11e9-8016-99e5310fc359
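
For reference, the LC-to-LT switch in steps 7-9 looks roughly like this (a sketch in 0.11 syntax; names are illustrative):

resource "aws_launch_template" "this" {
  name_prefix   = "12t-"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "this" {
  min_size            = 1
  max_size            = 1
  vpc_zone_identifier = ["${var.private_subnets}"]

  # the launch_configuration argument is dropped in favor of this block
  launch_template {
    id      = "${aws_launch_template.this.id}"
    version = "$Latest"
  }
}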

Ok, built the simple test case I documented above, and was unable to reproduce the behavior in a simple way.

I don't know the difference between my simple example, and our "real world" example, but whenever we move an ASG from LC to LT, terraform blows up with the above error, and has to be re-run a second time to run clean.

TF v0.11.14
aws ~> 1.60

Same issue here, @zapman449.

@zapman449 we introduced a bug fix specifically to prevent the ResourceInUse error due to eventual consistency on deletion (#7819), which was released in version 2.1.0 of the Terraform AWS Provider. You may want to try upgrading to that version or later to see if it helps resolve your situation.
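
For example, pinning the provider at or above that release (0.11 syntax):

provider "aws" {
  region  = "eu-west-1"
  version = ">= 2.1.0"
}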

@jvelasquezjs looking at your original report, I'm noticing this:

module "app_asg" {
  source = "../terraform-aws-autoscaling"

  # ... other configuration ...
  lc_name       = "${var.name}"
  # ... other configuration ...

Can you please show the aws_autoscaling_group resource configuration? In order for Terraform to determine the correct ordering of operations, it's important that the aws_launch_configuration.XXX.name attribute reference is passed directly to the aws_autoscaling_group resource (through outputs and module arguments if necessary) and is not just the literal name string. You may also wish to take a peek at the output of terraform graph, which should show the dependency between the two resources as an arrow between them if the references are correct.
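
To make the two kinds of reference concrete, a sketch (resource and variable names are hypothetical; inside a module the same rule applies between the module's own resources):

resource "aws_autoscaling_group" "this" {
  # correct: referencing the resource attribute gives Terraform a dependency edge
  launch_configuration = "${aws_launch_configuration.this.name}"

  # wrong: re-interpolating the same literal string that named the LC
  # carries no dependency information
  # launch_configuration = "${var.lc_name}"

  # ... other configuration ...
}

Rendering the graph with terraform graph | dot -Tsvg > graph.svg makes it easy to check whether that edge is present.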

@bflad the aws_autoscaling_group resource is the one used in the module:

terraform-aws-autoscaling:
  source:  "git@github.com:terraform-aws-modules/terraform-aws-autoscaling.git"
  version: "v2.9.1"

I ran into this error as well and updated the LC/ASG resources as suggested by these Terraform docs.

Summary:

  1. Add lifecycle block with create_before_destroy = true in both LC and ASG resources
  2. For LC resource, use name_prefix instead of name

Terraform and AWS provider versions:

  • Terraform v0.12.3
  • AWS 2.16

This is a snippet from my working config in case anyone is interested (the create_before_destroy variables are set to true):

######
# Main Cluster
######

resource "aws_ecs_cluster" "main" {
  name = var.main_cluster_name
}

## Launch Configuration / Auto Scaling Group

resource "aws_launch_configuration" "main" {
  associate_public_ip_address = var.lc_main_associate_public_ip_address
  enable_monitoring           = var.lc_main_enable_monitoring
  iam_instance_profile        = var.lc_main_iam_instance_profile
  image_id                    = data.aws_ami.amazon_ecs_v2.id
  instance_type               = var.lc_main_instance_type
  key_name                    = var.lc_main_keypair_name
  name_prefix                 = var.lc_main_name
  security_groups             = var.lc_main_security_groups

  user_data = templatefile("${path.module}/templates/ecs_container_instance_userdata.tmpl", {
    cluster_name = var.lc_main_cluster_name,
    efs_id       = var.lc_efs_id,
    region       = data.aws_region.current.name
  })

  lifecycle {
    create_before_destroy = var.lc_main_create_before_destroy
  }
}

resource "aws_autoscaling_group" "main" {
  name                 = var.asg_main_name
  launch_configuration = aws_launch_configuration.main.name
  vpc_zone_identifier  = var.asg_private_subnet_ids

  desired_capacity = var.asg_main_desired_capacity
  max_size         = var.asg_main_maximum_size
  min_size         = var.asg_main_minimum_size

  lifecycle {
    create_before_destroy = var.asg_main_create_before_destroy
  }
}

The punch line of this bug: aws_launch_configuration's name attribute breaks the aws_autoscaling_group dependency chain when interpolated.

Any time aws_launch_configuration changes, it must be recreated (new resource required). Since aws_launch_configuration is immutable and its name must be unique per region, interpolating name into aws_autoscaling_group should _always_ force a new resource if the launch configuration is being destroyed and recreated.

The workaround is to use name_prefix in aws_launch_configuration instead, as aws_autoscaling_group recognizes this interpolated change and can update without destroying aws_autoscaling_group (keeping the running instances in the process). EDIT: And adding create_before_destroy = true.
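
To make the failure mode concrete, here is a minimal sketch of the broken pattern described above (names are illustrative):

resource "aws_launch_configuration" "broken" {
  # A fixed name defeats create_before_destroy: the replacement LC would
  # collide with this name, so Terraform must destroy the old LC first
  # while it is still attached to the ASG, and the delete fails with
  # ResourceInUse.
  name          = "app-lc"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"
}

With name_prefix instead, each replacement gets a fresh unique name, the ASG is updated in place to point at it, and only then is the old launch configuration deleted.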

Solution posted by @u2mejc indeed solved this issue for me.

My workaround was to create an md5 hash of the user_data file:

resource "aws_autoscaling_group" "my_autoscaling" {
  name = md5(data.template_file.user_data.template)
  # ... other configuration ...
}
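
Spelled out a little more, a sketch of that workaround (0.12 syntax; the template_file data source, file path, and surrounding arguments are assumptions):

data "template_file" "user_data" {
  template = file("${path.module}/userdata.sh")   # hypothetical path
}

resource "aws_launch_configuration" "main" {
  name_prefix   = "app-"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"
  user_data     = data.template_file.user_data.rendered

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "my_autoscaling" {
  # Hashing the user data template into the name forces the ASG itself to be
  # replaced whenever the user data changes.
  name                 = "asg-${md5(data.template_file.user_data.template)}"
  launch_configuration = aws_launch_configuration.main.name
  min_size             = 1
  max_size             = 1
  vpc_zone_identifier  = var.private_subnet_ids   # assumed variable

  lifecycle {
    create_before_destroy = true
  }
}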
