Terraform-provider-aws: Apply fails when aws_autoscaling_group is in a module and its aws_launch_configuration changes.

Created on 13 Jun 2017  ·  6 comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @jordiclariana as hashicorp/terraform#11557. It was migrated here as part of the provider split. The original body of the issue is below._


Terraform Version

Terraform v0.8.5

Affected Resource(s)

  • aws_autoscaling_group
  • aws_launch_configuration

Terraform Configuration Files

The main.tf file:

resource "aws_launch_configuration" "aws_lc" {
  name_prefix = "test"
  image_id = "ami-fe408091"
  instance_type = "t2.small"

  security_groups = ["sg-73d80d1b"]
  key_name = "ubuntu"

  lifecycle {
    create_before_destroy = true
  }
}

module "asg_module" {
  source = "aws_autoscaling_group_module"
  name = "test_asg"
  vpc_subnets = ["subnet-6760e60e"]
  availability_zones = ["eu-central-1a"]
  launch_configuration_name = "${aws_launch_configuration.aws_lc.name}"
}

The aws_autoscaling_group_module module file:

variable "name" {}
variable "vpc_subnets" { type = "list" }
variable "availability_zones" { type = "list" }
variable "launch_configuration_name" {}

resource "aws_autoscaling_group" "asg" {
  name = "${var.name}"
  launch_configuration = "${var.launch_configuration_name}"
  vpc_zone_identifier = ["${var.vpc_subnets}"]
  availability_zones = ["${var.availability_zones}"]
  max_size = "0"
  min_size = "0"
  health_check_type = "EC2"
}
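For reference, the workaround discussed in hashicorp/terraform#1109 can also be applied inside the module itself. The sketch below is a hypothetical adaptation of the module's resource (not part of the original report): interpolating the launch configuration name into the ASG name forces the group to be replaced whenever the launch configuration changes, and `create_before_destroy` on the ASG lets the replacement attach to the new launch configuration before the old one is deleted.

```
resource "aws_autoscaling_group" "asg" {
  # Including the launch configuration name forces a new ASG
  # whenever the launch configuration is replaced (illustrative naming).
  name                 = "${var.name}-${var.launch_configuration_name}"
  launch_configuration = "${var.launch_configuration_name}"
  vpc_zone_identifier  = ["${var.vpc_subnets}"]
  availability_zones   = ["${var.availability_zones}"]
  max_size             = "0"
  min_size             = "0"
  health_check_type    = "EC2"

  lifecycle {
    create_before_destroy = true
  }
}
```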

Debug Output

https://gist.github.com/jordiclariana/151f1d04c32b60c856fab970ab560bd7

Expected Behavior

Apply is expected to succeed the first time, and also after changing the aws_launch_configuration and applying again.

Actual Behavior

It works the first time, but if run again after changing the aws_launch_configuration, we get this message:

Error applying plan:

1 error(s) occurred:

* aws_launch_configuration.aws_lc (deposed #0): ResourceInUse: Cannot delete launch configuration test00e459ef7a52f055982e412c68 because it is attached to AutoScalingGroup test_asg
    status code: 400, request id: 4e2e469b-e7d3-11e6-92ef-f139a587520d

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Steps to Reproduce

  1. terraform apply
  2. Modify something in the aws_launch_configuration.aws_lc (like for instance, change instance_type from t2.small to t2.medium)
  3. Run terraform apply again. We then get the error.

References

This is related to hashicorp/terraform#1109, and the proposed solution there (adding the lifecycle parameter) normally works, but it fails when the aws_autoscaling_group is in a separate module.

bug service/autoscaling

Most helpful comment

Any chance of this getting fixed in a coming release? Due to this limitation we cannot use modules for launch configuration and autoscaling groups - which in turn causes code duplication for each environment.

All 6 comments

I am facing the same issue. My auto scaling group and Launch configuration are in different git modules.

Any chance of this getting fixed in a coming release? Due to this limitation we cannot use modules for launch configuration and autoscaling groups - which in turn causes code duplication for each environment.

The fix for this has been merged and will release with version 2.1.0 of the Terraform AWS Provider, likely later today.

This has been released in version 2.1.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
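To pick up the fix, the provider can be pinned to 2.1.0 or later. A minimal sketch, using the Terraform 0.11-era version-constraint syntax (the region value is assumed for illustration):

```
provider "aws" {
  version = ">= 2.1.0"
  region  = "eu-central-1"
}
```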

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
