When changing a launch configuration, the error "ResourceInUse: Cannot delete launch configuration" occurred:

* aws_launch_configuration.jqmb_lc.1 (deposed #0): ResourceInUse: Cannot delete launch configuration jqmb_smp_stg_lc_BLUE003df7c24951572ed9b5aed3a0 because it is attached to AutoScalingGroup jqmb_smp_stg_asg_BLUE

This fails with Terraform v0.8.4, but Terraform v0.7.7 is OK.
Terraform configuration files:
resource "aws_launch_configuration" "jqmb_lc" {
name_prefix = "jqmb_smp_stg_lc_${var.environment_names[ count.index ]}"
count = "${var.environment_count}"
image_id = "${var.web_linux_ami}"
instance_type = "${var.web_instance_size}"
security_groups = [ "${aws_security_group.jqmb_sg_web_instance.id}" ]
associate_public_ip_address = true
lifecycle {
ignore_changes = ["image_id"]
create_before_destroy = true
}
}
/**
 * Autoscaling group.
 */
resource "aws_autoscaling_group" "jqmb_asg" {
  name                 = "jqmb_smp_stg_asg_${var.environment_names[count.index]}"
  count                = "${var.environment_count}"
  availability_zones   = ["${var.main_availability_zone}"]
  launch_configuration = "${element(aws_launch_configuration.jqmb_lc.*.id, count.index)}"
  min_size             = "0"
  max_size             = "${var.web_instance_count}"
  desired_capacity     = "${var.web_instance_count}"

  lifecycle {
    ignore_changes = ["desired_capacity"]
  }

  vpc_zone_identifier = ["${element(aws_subnet.jqmb_web_subnet.*.id, count.index)}"]
  target_group_arns   = ["${element(aws_alb_target_group.jqmb_alb_target_group_http.*.id, count.index)}"]

  tag {
    key                 = "Name"
    value               = "JQMB-STG-WEB-${var.environment_names[count.index]}"
    propagate_at_launch = true
  }
}
Expected behavior:

1. Terraform creates a new launch configuration with the prefixed name.
2. Terraform binds the new launch configuration to the autoscaling group.
3. Terraform removes the old launch configuration.

Actual behavior:

1. Terraform tries to remove the current launch configuration.
2. An error occurs:
   ResourceInUse: Cannot delete launch configuration
Steps to reproduce:

1. terraform apply
2. terraform apply

Reproduced, thanks for the report. Here's a full repro case:
https://gist.github.com/radeksimko/48badf865b9724e0b6d940fc390d29fc
Is there a workaround? We have already used Terraform 0.8.4 on some of our important state files, and downgrading would be extremely inconvenient.
It's a race between the ASGs fully registering the launch configuration update and the deposed LC becoming destroyable. Waiting long enough and then running a second apply will destroy the deposed launch configurations successfully, as they are still tracked in the tfstate.
The provider for launch configurations probably needs some code added to check whether it's a create_before_destroy resource, and if so, implement a retry loop on the delete operation, similar to what's done in resource_aws_autoscaling_group.go:666 inside resourceAwsAutoscalingGroupDelete -- just in the corresponding function in resource_aws_launch_configuration.go.
If you were implementing that, I imagine that if you can get at the deposed flag for the resource from inside that context, that would be sufficient to make this a retryable error. The flag looks like d.InstanceDiff.DestroyDeposed.
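For illustration, a minimal sketch of what such a retry loop could look like, assuming the helper/resource retry API and the provider's AWSClient.autoscalingconn plumbing; the function name is hypothetical, and this is not the provider's actual implementation:

package aws

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/autoscaling"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
)

// Hypothetical variant of the launch configuration delete function.
// It retries while an ASG still references the (deposed) launch
// configuration, mirroring the retry loop already used in
// resourceAwsAutoscalingGroupDelete.
func resourceAwsLaunchConfigurationDeleteWithRetry(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).autoscalingconn

	input := autoscaling.DeleteLaunchConfigurationInput{
		LaunchConfigurationName: aws.String(d.Id()),
	}

	// The deposed flag (d.InstanceDiff.DestroyDeposed) is not exposed
	// through helper/schema, so this sketch retries ResourceInUse
	// unconditionally instead of only for deposed instances.
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		_, err := conn.DeleteLaunchConfiguration(&input)
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceInUse" {
			// An ASG has not released the old LC yet; wait for the
			// ASG update to propagate and try again.
			return resource.RetryableError(err)
		}
		return resource.NonRetryableError(err)
	})
}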
As an addendum, I think it's the 'silent' handling of deposed resources in the legacy graph code that may have hidden this in 0.7, but I don't have the resources to debug deeper at the moment.
In my case, terraform apply tries to destroy (and fails on) the old LC resource before changing the ASG to use the new one, so no subsequent apply will succeed. Maybe a different issue if that isn't the case here?
+1 to what @georgiou said. I know sometimes you can re-apply to unstick things, but not in my case.
I ended up just having to revert to an older version of the statefile and clean up orphaned resources. I will be watching this issue closely.
I've been notified that this looks like a core bug, retagging and will take a look!
Verified, here is the legacy graph:
aws_autoscaling_group.jqmb_asg - *terraform.GraphNodeConfigResource
  aws_autoscaling_group.jqmb_asg (destroy) - *terraform.graphNodeResourceDestroy
  aws_launch_configuration.jqmb_lc - *terraform.GraphNodeConfigResource
  aws_subnet.jqmb_web_subnet - *terraform.GraphNodeConfigResource
  var.web_instance_count - *terraform.GraphNodeConfigVariable
aws_autoscaling_group.jqmb_asg (destroy) - *terraform.graphNodeResourceDestroy
  provider.aws - *terraform.GraphNodeConfigProvider
  var.environment_count - *terraform.GraphNodeConfigVariable
aws_launch_configuration.jqmb_lc - *terraform.GraphNodeConfigResource
  aws_security_group.jqmb_sg_web_instance - *terraform.GraphNodeConfigResource
  var.environment_count - *terraform.GraphNodeConfigVariable
  var.environment_names - *terraform.GraphNodeConfigVariable
  var.web_instance_size - *terraform.GraphNodeConfigVariable
  var.web_linux_ami - *terraform.GraphNodeConfigVariable
aws_launch_configuration.jqmb_lc (destroy) - *terraform.graphNodeResourceDestroy
  aws_autoscaling_group.jqmb_asg - *terraform.GraphNodeConfigResource
The key part is that the LC destroy depends on the ASG, and the ASG depends on the LC create.
With the new graph:
aws_autoscaling_group.jqmb_asg[0] - *terraform.NodeApplyableResource
  aws_launch_configuration.jqmb_lc[0] (destroy) - *terraform.NodeDestroyResource
  aws_launch_configuration.jqmb_lc[1] (destroy) - *terraform.NodeDestroyResource
  var.main_availability_zone - *terraform.NodeRootVariable
  var.web_instance_count - *terraform.NodeRootVariable
aws_autoscaling_group.jqmb_asg[1] - *terraform.NodeApplyableResource
  aws_launch_configuration.jqmb_lc[0] (destroy) - *terraform.NodeDestroyResource
  aws_launch_configuration.jqmb_lc[1] (destroy) - *terraform.NodeDestroyResource
  var.main_availability_zone - *terraform.NodeRootVariable
  var.web_instance_count - *terraform.NodeRootVariable
aws_launch_configuration.jqmb_lc[0] - *terraform.NodeApplyableResource
  provider.aws - *terraform.NodeApplyableProvider
  var.environment_count - *terraform.NodeRootVariable
  var.environment_names - *terraform.NodeRootVariable
  var.web_instance_size - *terraform.NodeRootVariable
  var.web_linux_ami - *terraform.NodeRootVariable
aws_launch_configuration.jqmb_lc[0] (destroy) - *terraform.NodeDestroyResource
  aws_launch_configuration.jqmb_lc[0] - *terraform.NodeApplyableResource
aws_launch_configuration.jqmb_lc[1] - *terraform.NodeApplyableResource
  provider.aws - *terraform.NodeApplyableProvider
  var.environment_count - *terraform.NodeRootVariable
  var.environment_names - *terraform.NodeRootVariable
  var.web_instance_size - *terraform.NodeRootVariable
  var.web_linux_ami - *terraform.NodeRootVariable
aws_launch_configuration.jqmb_lc[1] (destroy) - *terraform.NodeDestroyResource
  aws_launch_configuration.jqmb_lc[1] - *terraform.NodeApplyableResource
It is a good-looking graph :) more specific than the old one, but you can see that some edges are incorrect. Namely: the LC destroy depends only on the LC create, not on the ASG.
More details: this only happens with resources with count > 1. The CBD edge transformation works fine with count == 1.
Fix in PR #11753
FWIW, I had to taint the ASG resource, then plan and apply. v0.8.8
I also need to taint ASG resources in one of my modules; I'm not sure why Terraform does not identify that the ASG needs to be recreated when the launch configuration is recreated. Another module I use does not have this issue.
The one below works fine, recreating both resources when the LC changes (presumably because the ASG name interpolates the launch configuration name, so a new LC forces the ASG to be replaced as well).
resource "aws_autoscaling_group" "vault" {
name = "vault - ${aws_launch_configuration.vault.name}"
launch_configuration = "${aws_launch_configuration.vault.name}"
availability_zones = ["${split(",", var.availability-zones)}"]
min_size = "${var.nodes}"
max_size = "${var.nodes}"
desired_capacity = "${var.nodes}"
health_check_grace_period = 15
health_check_type = "EC2"
vpc_zone_identifier = ["${split(",", var.subnets)}"]
load_balancers = ["${aws_elb.vault.name}"]
tag {
key = "Name"
value = "vault"
propagate_at_launch = true
}
tag {
key = "consul_role"
value = "${var.consul_role}"
propagate_at_launch = true
}
}
resource "aws_launch_configuration" "vault" {
image_id = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key-name}"
security_groups = ["${aws_security_group.vault.id}"]
user_data = "${template_file.install.rendered}"
iam_instance_profile = "${var.iam_instance_profile}"
}
But the one below requires manually tainting the ASG:
name = "${var.name}"
default_cooldown = 60
max_size = "${var.asg_max}"
min_size = "${var.asg_min}"
#desired_capacity = "${var.servers}"
launch_configuration = "${aws_launch_configuration.launch-config.name}"
health_check_type = "${var.health_check_type}"
health_check_grace_period = 60
vpc_zone_identifier = ["${var.subnet_ids}"]
load_balancers = ["${aws_elb.consul.name}"]
tag {
key = "Name"
value = "${var.name}"
propagate_at_launch = "true"
}
tag {
key = "Environment"
value = "${var.environment}"
propagate_at_launch = true
}
tag {
key = "managed_by"
value = "terraform"
propagate_at_launch = true
}
tag {
key = "consul_role"
value = "${var.consul_role}"
propagate_at_launch = true
}
}
# Define the launch configuration.
resource "aws_launch_configuration" "launch-config" {
  name            = "${var.name}-${var.platform}-LC"
  image_id        = "${var.ami_file}"
  instance_type   = "${var.instance_type}"
  security_groups = ["${aws_security_group.consul-nomad.id}"]
  key_name        = "${var.key_name}"
  user_data       = "${lookup(null_resource.test.triggers, var.userdata_template)}"

  # This is required for the consul cluster to find members.
  iam_instance_profile = "${var.iam_instance_profile}"
}
This is happening again on Terraform 0.9.2, subscribed :)
Still an issue.
This is definitely still a problem:
module.external_lb_asg.aws_launch_configuration.launch_configuration: Creating...
module.external_lb_asg.aws_launch_configuration.launch_configuration: Creation complete after 3s (ID: external_lb-20180420032359824800000001)
module.external_lb_asg.aws_launch_configuration.launch_configuration.deposed: Destroying... (ID: external_lb-20180420011524280700000001)
Error: Error applying plan:
A subsequent apply changes the launch configuration setting on the ASG.
So the real problem here is that the first apply didn't even try to change the launch configuration on the ASG: the delete doesn't depend on the ASG update.
Still happening.
Still happening in
terraform --version
Terraform v0.11.7
- provider.aws v1.27.0
Still happening in
Terraform v0.11.8
This bug is still happening in:
Terraform v0.11.10
+ provider.aws v1.43.1
Here's an example of the error I'm getting:
module.launchconfig.aws_launch_configuration.aws_launch_configuration: Creating...
  associate_public_ip_address:               "" => "true"
  ebs_block_device.#:                        "" => "<computed>"
  ebs_optimized:                             "" => "<computed>"
  enable_monitoring:                         "" => "true"
  iam_instance_profile:                      "" => "ec2.foo.bar"
  image_id:                                  "" => "ami-xxxyyy"
  instance_type:                             "" => "m4.xlarge"
  key_name:                                  "" => "foobar"
  name:                                      "" => "Foo Bar - 7ed87ae6"
  root_block_device.#:                       "" => "1"
  root_block_device.0.delete_on_termination: "" => "true"
  root_block_device.0.iops:                  "" => "<computed>"
  root_block_device.0.volume_size:           "" => "256"
  root_block_device.0.volume_type:           "" => "gp2"
  security_groups.#:                         "" => "2"
  security_groups.aaa:                       "" => "sg-xxx"
  security_groups.bbb:                       "" => "sg-xxx"
  user_data:                                 "" => "xxx"
module.launchconfig.aws_launch_configuration.aws_launch_configuration: Creation complete after 2s (ID: Foo Bar - 7ed87ae6)
module.launchconfig.aws_launch_configuration.aws_launch_configuration.deposed: Destroying... (ID: Foo Bar - f0d9d964)
module.autoscaling.aws_autoscaling_group.aws_autoscaling_group: Modifying... (ID: Foo Bar)
  launch_configuration: "Foo Bar - f0d9d964" => "Foo Bar - 7ed87ae6"
module.autoscaling.aws_autoscaling_group.aws_autoscaling_group: Modifications complete after 1s (ID: Foo Bar)
Error: Error applying plan:
1 error(s) occurred:
* module.launchconfig.aws_launch_configuration.aws_launch_configuration (destroy): 1 error(s) occurred:
* aws_launch_configuration.aws_launch_configuration (deposed #0): 1 error(s) occurred:
* aws_launch_configuration.aws_launch_configuration (deposed #0): ResourceInUse: Cannot delete launch configuration Foo Bar - f0d9d964 because it is attached to AutoScalingGroup Foo Bar
  status code: 400, request id: xxxx
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
In my resource "aws_launch_configuration" I have:
lifecycle {
  create_before_destroy = true
}
I have to run terraform apply twice every time I make a change to the resource "aws_launch_configuration".
Hi all,
The AWS provider has not been part of the Terraform Core release since Terraform 0.10, so if you are seeing this with any version of Terraform newer than that, the bug is in the AWS provider version you are using, not in the Terraform Core version. You may wish to report a bug in the AWS provider repository.