Terraform: Cannot delete launch configuration because it is attached to AutoScalingGroup

Created on 31 Oct 2014 · 39 comments · Source: hashicorp/terraform

Using Terraform v0.3.1, when I change the AMI my launch configuration depends on, the apply fails because the autoscaling group still references the launch configuration.

Here's the relevant section of my configuration:

resource "aws_launch_configuration" "go_agent" {
  name = "go_agent"
  image_id = "${lookup(var.amis, var.region)}"
  instance_type = "t2.small"
  key_name = "${var.key_name}"
}

resource "aws_autoscaling_group" "go_agent_pool" {
  availability_zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  vpc_zone_identifier = ["${aws_subnet.agentsInZoneA.id}","${aws_subnet.agentsInZoneB.id}","${aws_subnet.agentsInZoneC.id}"]
  name = "go_agent_pool"
  max_size = 3
  min_size = 0
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 0
  force_delete = true
  launch_configuration = "${aws_launch_configuration.go_agent.name}"
}

Here's the result of "terraform apply"

$ terraform apply
aws_launch_configuration.go_agent: Refreshing state... (ID: go_agent)
aws_vpc.gocd: Refreshing state... (ID: vpc-d331f0b6)
aws_subnet.agentsInZoneC: Refreshing state... (ID: subnet-18956441)
aws_subnet.agentsInZoneB: Refreshing state... (ID: subnet-045e8c73)
aws_subnet.agentsInZoneA: Refreshing state... (ID: subnet-9377c3f6)
aws_subnet.go_server: Refreshing state... (ID: subnet-1b956442)
aws_security_group.go_server: Refreshing state... (ID: sg-0274cc67)
aws_internet_gateway.gocd: Refreshing state... (ID: igw-2fe50e4a)
aws_autoscaling_group.go_agent_pool: Refreshing state... (ID: go_agent_pool)
aws_route_table.gocd: Refreshing state... (ID: rtb-f827ec9d)
aws_instance.go_server: Refreshing state... (ID: i-51455513)
aws_route_table_association.go_server: Refreshing state... (ID: rtbassoc-a84497cd)
aws_eip.go_server_public_ip: Refreshing state... (ID: eipalloc-40a54a25)
aws_launch_configuration.go_agent: Destroying...
aws_launch_configuration.go_agent: Error: ResourceInUse: Cannot delete launch configuration go_agent because it is attached to AutoScalingGroup go_agent_pool
Error applying plan:

1 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration go_agent because it is attached to AutoScalingGroup go_agent_pool

Thanks,
Kief

bug provider/aws

Most helpful comment

I'm not sure if people have tried this method on this thread, but it is working for me, so I figured I would post.

  1. Do not name your launch config; let Terraform name it automatically so the name is computed. Just delete the entire "name" param inside your launch config block.
  2. As mentioned above: lifecycle { create_before_destroy = true } inside your launch config AND your ASG block.
  3. Not sure if this part matters, but I have the following inside my ASG block: depends_on = ["aws_launch_configuration.<launchconfigname>"]
  4. Name your ASG to include the generated launch config name, like so:
    name = "app-asg-${aws_launch_configuration.<launchconfigname>.name}"

Also note, if you include wait_for_elb_capacity = "${var.asg_desired}" your ASG will wait for that number of healthy hosts to show up BEFORE rotating out your old AMIs. Hope this helps

All 39 comments

I am having the same issue. Even changing simple "user_data" triggers this problem.

I am also running into this. For now, I create a new launch configuration, apply, then remove the old launch configuration, and apply.

+1

Ran into this within minutes of trying out Terraform for the first time, while trying to change the iam_instance_profile on a launch configuration.

Like @motdotla I'm able to work around it with small incremental changes but it would be problematic if terraform were in some sort of automated continuous integration.

:+1: We hit this as well, and like @gwilym above, it's a blocker for integrating Terraform into our CI pipeline. :person_frowning:

+1

:+1: I'm also experiencing this bug. I think #1109 is a duplicate.

exciting, thanks!

This seems to be present in 0.7.4

I can confirm that this is present in 0.7.4

Present in 0.7.7 as well

Present in 0.7.9.

Modified Launch config userdata, ran apply:

Error applying plan:

1 error(s) occurred:

* aws_launch_configuration.ecs: ResourceInUse: Cannot delete launch configuration ECS health-staging-old because it is attached to AutoScalingGroup ECS health-staging-old
        status code: 400, request id: a0dcc293-a514-11e6-9498-13c53b8ef17a

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Any update on this? Just bumped into this problem as well.

Still present in Terraform v0.7.11 when trying to update launch config userdata.

Is there any plan to fix this? This bug has been around for 2+ years :/

Guys, AWS won't allow you to delete an active launch configuration. The fix for this (not really a bug) is simply to add a "create_before_destroy" lifecycle rule in the launch configuration section (https://www.terraform.io/docs/configuration/resources.html#lifecycle)
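For reference, a minimal sketch of that rule applied to the launch configuration from the original report. The name_prefix line is an extra assumption on my part to avoid a name collision with the old config; the fix itself is the lifecycle block:

```hcl
resource "aws_launch_configuration" "go_agent" {
  # Using name_prefix (or omitting the name entirely) lets the
  # replacement config get a fresh generated name instead of
  # colliding with the one still attached to the ASG.
  name_prefix   = "go_agent-"
  image_id      = "${lookup(var.amis, var.region)}"
  instance_type = "t2.small"
  key_name      = "${var.key_name}"

  # Create the new launch configuration before destroying the old
  # one, so the ASG can be pointed at the new config first.
  lifecycle {
    create_before_destroy = true
  }
}
```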

Aha! All it takes is for someone to post the answer, rather than just sweep it under the rug :) Thanks a ton @rumenvasilev , worked great!

@rumenvasilev that's the kind of information we were really looking for anyway!

I would vote we close this issue, any takers?

lifecycle {
  create_before_destroy = true
}

I've got this in my resource aws_launch_configuration and I still get this error message. I'm using 0.8.0-rc2

Error:

ResourceInUse: Cannot delete launch configuration

Still present in 0.8.0

Same for 0.8.2. Adding create_before_destroy unfortunately doesn't work when trying to just destroy.

Bump. Still present.

Still present in 0.8.3 when I'm only updating my launch config user_data script file.

I'm not sure if people have tried this method on this thread, but it is working for me, so I figured I would post.

  1. Do not name your launch config; let Terraform name it automatically so the name is computed. Just delete the entire "name" param inside your launch config block.
  2. As mentioned above: lifecycle { create_before_destroy = true } inside your launch config AND your ASG block.
  3. Not sure if this part matters, but I have the following inside my ASG block: depends_on = ["aws_launch_configuration.<launchconfigname>"]
  4. Name your ASG to include the generated launch config name, like so:
    name = "app-asg-${aws_launch_configuration.<launchconfigname>.name}"

Also note, if you include wait_for_elb_capacity = "${var.asg_desired}" your ASG will wait for that number of healthy hosts to show up BEFORE rotating out your old AMIs. Hope this helps
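To make the four steps concrete, here is a minimal sketch of that approach; every resource name, variable, and value below is a placeholder, not taken from any real configuration:

```hcl
# Step 1: no "name" argument, so Terraform generates one.
resource "aws_launch_configuration" "app" {
  image_id      = "${var.ami_id}"
  instance_type = "t2.small"

  # Step 2: create the replacement before destroying the old config.
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  # Step 4: interpolate the generated launch config name into the ASG
  # name, forcing the ASG to be replaced when the config changes.
  name                 = "app-asg-${aws_launch_configuration.app.name}"
  launch_configuration = "${aws_launch_configuration.app.name}"
  availability_zones   = ["${var.azs}"]
  max_size             = 3
  min_size             = 1

  # Step 3: an explicit dependency (the interpolations above already
  # imply one, so this may be redundant).
  depends_on = ["aws_launch_configuration.app"]

  # Step 2 again, on the ASG itself.
  lifecycle {
    create_before_destroy = true
  }
}
```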

Thanks Michael - great tip, works for me. I guess the only knock-on effect of this is that you get a new load balancer each time you update your launch config, which means you potentially have to update your DNS and make sure nothing breaks while that propagates.

I managed to achieve the right result without step 4 - which avoids the ASG being recreated each time. So effectively I just used Michael's steps 1 & 2 and all is good!

I ran into this problem even though I use lifecycle { create_before_destroy = true } on both the ASG and the launch configuration. When deleting the ASG and launch configuration, I see Terraform deleting the autoscaling group first and waiting for it to be destroyed, like this:

module.foo.aws_autoscaling_group.foo: Still destroying... (10s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (20s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (30s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (40s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (50s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m0s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m10s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m20s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m30s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m40s elapsed)
module.foo.aws_autoscaling_group.foo: Destruction complete

terraform then immediately goes on to delete the launch configuration, but the deletion fails because AWS still complains that the launch configuration is associated with an ASG. If I then run terraform apply again, the deletion succeeds.

I think this is an eventual consistency problem in AWS. Although the ASG is deleted first, it appears we need to wait a few seconds before attempting to delete the launch configuration.
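One workaround for that race, until Terraform retries the deletion itself, is to retry the apply from the calling script after a short delay. A hedged sketch in shell; the retry helper and its parameters are illustrative, not part of Terraform:

```shell
# Retry a command up to N times with a fixed delay between attempts,
# to ride out AWS eventual consistency between the ASG deletion and
# the launch configuration deletion.
retry() {
  attempts=$1; shift
  delay=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "command failed after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}

# Hypothetical usage: give the apply three chances, 10 seconds apart.
# retry 3 10 terraform apply
```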

So we still have the issue with 0.8.7 even after updating, and we made a "fix" :-|
Terraform is called from a Groovy script in Jenkins.
So - we just wrapped it in a try/catch:

        ...
        sh 'cd terraform && terraform get'

        try {
            sh "cd terraform && terraform apply \
              -var 'environment=${ENVIRONMENT}' \
               ...
              -var 'max_size=8'"
        } catch (Exception e) {
            return 0
        }
        ...

It's "ok" for us, as Terraform updates the ASG's launch config setting before failing, and that is all we need to deploy the application.

Hope this will be fixed soon in a correct way.

The example below works for me:

resource "aws_launch_configuration" "api_dev_front" {
  // name = "api_dev_front"
  image_id                    = "ami-ANY_AMY"
  instance_type               = "t2.micro"
  security_groups             = ["${var.api_dev_sec_gr_front}"]
  user_data                   = "${file("./minin_data.sh")}"
  key_name                    = "${var.api_dev_key_pair_minin}"
  iam_instance_profile        = "${var.api_dev_iam_minin_inst_prof}"
  associate_public_ip_address = false
  enable_monitoring           = true

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "api_dev_front" {
  // name = "api_dev_front"
  vpc_zone_identifier  = ["${var.api_dev_ext_subnet_ids}"]
  max_size             = "4"
  min_size             = "1"
  desired_capacity     = "1"
  health_check_type    = "ELB"
  force_delete         = true
  launch_configuration = "${aws_launch_configuration.api_dev_front.name}"
  target_group_arns    = ["${var.api_dev_alb_target_gr_arn}"]

  lifecycle {
    create_before_destroy = true
  }
}

@michael-henderson If you do not name your launch configuration, it's hard to identify it in the console. So to find the launch configuration's name you need to navigate via the ASG?

It seems that if you have an autoscaling group that uses name, and you later transition it to name_prefix while adding name to ignore_changes, you also run into this whenever something in the launch configuration changes.

I did this, but in a module, and I didn't want every user of the module to have to re-create their ASGs, and only have the name_prefix thing take effect on ASGs created after I made the change.

I had to revert and instead add another variable where the user can use a random suffix that gets appended to the ASG name. Quite annoying, but less annoying than having Terraform fail a lot.

Not sure if this is related, but when I change some of my launch configurations, the ASGs are not recreated, even though they're using the respective LC's name.

@hubertgrzeskowiak when you change a launch configuration, the ASGs that use it will be updated to use it. They will not be recreated. Instances in the ASG will remain in place but any new instance launched will use the new launch configuration.

@joelittlejohn Curious, were you able to resolve this? I'm also seeing the same thing: terraform attempts to delete the launch config, and manually checking it should be able to, so I'm guessing it's an eventual consistency problem. When I re-plan and apply the changes, with nothing else changed, it's able to "depose" of the old launch configurations.

I wonder if there's some way for terraform to "sleep" a few seconds or just retry?

@joelittlejohn Therefore I am using the LC's name as part of the ASG's name. The implicit dependency should re-create it.

Is there any further guidance for this? I have custom logic to sleep during the terraform run and then remove the oldest LC matching each host configuration.

Change your launch_configuration to use name_prefix instead of name so there are no name conflicts, and add:
And add

  lifecycle {
    create_before_destroy = true
  }

This still doesn't work at all as of Terraform v0.11.7 despite using name_prefix for the lc as well as create_before_destroy for both lc and asg.

I'm seeing the exact same behavior as @joshma - guaranteed to happen every single time.

I'm also seeing the same thing: terraform attempts to delete the launch config, and manually checking it should be able to, so I'm guessing it's an eventual consistency problem. When I re-plan and apply the changes, with nothing else changed, it's able to "depose" of the old launch configurations.

@phinze could you please reopen this and have another look?

Hi all,

Issues with the terraform AWS provider should be opened in the aws provider repository.

Because this closed issue is generating notifications for subscribers, I am going to lock it and encourage anyone experiencing issues with the aws provider to open tickets there.

Please continue to open issues here for any other terraform issues you encounter, and thanks!
