Terraform: TF-0.9.2 "ResourceInUse: Cannot delete launch configuration" occurred when changing launch configuration

Created on 11 Apr 2017 · 10 comments · Source: hashicorp/terraform

The same issue that was happening in GH-11349 is now happening on Terraform 0.9.2:

* module.service.aws_launch_configuration.launch_configuration (destroy): 1 error(s) occurred:

* aws_launch_configuration.launch_configuration (deposed #1): 1 error(s) occurred:

* aws_launch_configuration.launch_configuration (deposed #1): ResourceInUse: Cannot delete launch configuration Prod-0073185873a1d1466e73181bd3 because it is attached to AutoScalingGroup Prod-Prod-U15OLDGS3MCV
    status code: 400, request id: 68e9849a-1e3a-11e7-a57f-67c69b83858f
Labels: bug, provider/aws


All 10 comments

Seeing this in v0.9.3 as well.

Hi @mtb-xt

Thanks for opening the issue here. In order to get this looked at, could you show us the Terraform config (minus any secrets) that you are using? This will help us understand how to recreate the failure condition.

Thanks,

Paul

It seems that when both the launch configuration and the ASG are configured with `create_before_destroy = true`, the following happens:

  1. The new launch configuration is created.
  2. The old launch configuration is destroyed (and fails here, because it is still associated with the old ASG).
  3. The new ASG is created.
  4. The old ASG is destroyed.

However, once the new LC was already created on the first run, destruction of the old LC is ordered after the ASG re-provisioning, so on a second try `terraform apply` works fine.

I see this behavior periodically with v0.8.8 and assume the same happens with v0.9.x.
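
For reference, here's a minimal sketch of the shape being described; all names, AMI IDs, and sizes below are hypothetical, not taken from this issue:

```hcl
# Hypothetical minimal reproduction: both resources use create_before_destroy.
resource "aws_launch_configuration" "launch_configuration" {
  # name_prefix (rather than name) avoids a collision while the old and new
  # LCs briefly coexist during create-before-destroy.
  name_prefix   = "example-"
  image_id      = "ami-00000000" # hypothetical AMI; changing it forces a new LC
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "asg" {
  # Interpolating the LC name forces the ASG to be replaced whenever the LC
  # is; per this issue, Terraform can still try to delete the deposed LC
  # before the old ASG is gone.
  name                 = "example-${aws_launch_configuration.launch_configuration.name}"
  launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2

  lifecycle {
    create_before_destroy = true
  }
}
```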

Hi @stack72, thanks for replying. As @blinohod already said, this happens when you create an ASG and then attach a launch configuration with `create_before_destroy = true`.

In my case, we're using a CloudFormation template inside a Terraform configuration to create the ASG; the configuration is in this gist.
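
For readers without the gist, a rough sketch of that kind of split (everything here is hypothetical, not the actual config): the LC is a plain Terraform resource, and its name is passed into a CloudFormation stack that owns the ASG, so the ASG->LC dependency is only partially visible to Terraform's graph.

```hcl
# Hypothetical sketch: LC managed by Terraform, ASG managed by CloudFormation.
resource "aws_launch_configuration" "launch_configuration" {
  name_prefix   = "prod-"
  image_id      = "ami-00000000" # hypothetical AMI
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_cloudformation_stack" "asg" {
  name = "prod-asg"

  # The LC name crosses into CloudFormation as a plain parameter.
  parameters = {
    LaunchConfigurationName = "${aws_launch_configuration.launch_configuration.name}"
  }

  template_body = "${file("${path.module}/asg.template.json")}"
}
```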

@stack72

This is causing havoc with our deploys. We update AMIs monthly, and we've been updating Windows AMIs like mad today because of #WannaCry, and this is causing problems for us across 200+ ASGs. It would be great if we could get a fix sooner rather than later. If you need code snippets, let me know.

I did a more detailed write-up explaining the issues we were seeing in #13187, but on 0.9.4 we discovered that `ignore_changes` was causing the entire ASG resource to be ignored, thus introducing this dependency problem where Terraform tries to delete the LC by itself without respecting the ASG->LC link.
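
To illustrate (a hypothetical sketch reusing the `aws_launch_configuration.launch_configuration` from the earlier example; the ignored attribute is an assumption, not necessarily the one from #13187):

```hcl
resource "aws_autoscaling_group" "asg" {
  name                 = "example-${aws_launch_configuration.launch_configuration.name}"
  launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2

  lifecycle {
    create_before_destroy = true
    # On 0.9.4 this reportedly caused the entire ASG resource to be ignored,
    # so the deposed LC was deleted without waiting for the ASG replacement.
    ignore_changes = ["desired_capacity"]
  }
}
```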

@stack72 here's a basic test case

We've run into this as well, specifically when trying to pack some conventions around ASG resources into a module.

I've put two test folders on https://github.com/matschaffer/tf-issue-13517 to demonstrate the behavior on 0.9.5.

You can see that in `withoutmodule` changes apply fine, but in `withmodule` any change to user data causes:

Error applying plan:

1 error(s) occurred:

* module.shared.aws_launch_configuration.main (destroy): 1 error(s) occurred:

* aws_launch_configuration.main (deposed #0): 1 error(s) occurred:

* aws_launch_configuration.main (deposed #0): ResourceInUse: Cannot delete launch configuration matschaffer-test2-00c70fe4f2cbb8a2dbfef8b531 because it is attached to AutoScalingGroup matschaffer-test2-00bb371364dd47c85178899744
    status code: 400, request id: ...

As in the comment above, a second apply will complete the change without error.
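
For readers who don't want to clone the repo, here's a hedged reconstruction of the failing `withmodule` shape (values are hypothetical; only the resource addresses match the error output above): the LC lives inside the module while the ASG stays at the root, so they end up in different scopes.

```hcl
# modules/shared/main.tf -- hypothetical module holding only the LC
variable "user_data" {}

resource "aws_launch_configuration" "main" {
  name_prefix   = "matschaffer-test2-"
  image_id      = "ami-00000000" # hypothetical
  instance_type = "t2.micro"
  user_data     = "${var.user_data}"

  lifecycle {
    create_before_destroy = true
  }
}

output "lc_name" {
  value = "${aws_launch_configuration.main.name}"
}

# main.tf -- root configuration owning the ASG
module "shared" {
  source    = "./modules/shared"
  user_data = "echo hello" # changing this forces a new LC and triggers the error
}

resource "aws_autoscaling_group" "main" {
  name                 = "${module.shared.lc_name}"
  launch_configuration = "${module.shared.lc_name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2

  lifecycle {
    create_before_destroy = true
  }
}
```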

Looks like if you also put the `aws_autoscaling_group` into the module, it works fine.
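
That is, moving the ASG definition into `modules/shared/main.tf` next to the LC (a hypothetical sketch, assuming the layout from the previous block):

```hcl
# modules/shared/main.tf -- the ASG now lives beside the LC in the same
# scope, so the deposed LC delete is ordered after the ASG replacement.
resource "aws_autoscaling_group" "main" {
  name                 = "${aws_launch_configuration.main.name}"
  launch_configuration = "${aws_launch_configuration.main.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2

  lifecycle {
    create_before_destroy = true
  }
}
```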

Though this doesn't work well for our case, since some services isolate ASGs per availability zone and others don't.

I haven't looked into the code yet, but my guess is that modules have their own scope for deposed operations that's being evaluated too early in the cycle.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
