Terraform v0.6.14
When removing user_data from aws_launch_configuration
resource "aws_launch_configuration" "application" {
  lifecycle { create_before_destroy = true }

  image_id             = "${element(split(",", atlas_artifact.application.metadata_full.ami_id), index(split(",", atlas_artifact.application.metadata_full.region), var.region))}"
  name_prefix          = "${var.application_name}-${var.environment}-"
  instance_type        = "${var.instance_type}"
  security_groups      = ["${var.instance_security_group_id}"]
  iam_instance_profile = "${var.codedeploy_role_name}"
}
When removing user_data from an existing aws_launch_configuration, Terraform throws the following error:
"aws_launch_configuration.application: doesn't support update"
Expected behavior: the resource is destroyed and recreated, with user_data unset.
Actual behavior: Terraform throws the error above ("doesn't support update").
Steps to reproduce: using the code above, remove user_data and run terraform apply. Note that setting user_data = "" (rather than removing the attribute entirely) does not seem to throw the error.
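For reference, the workaround looks roughly like this (a sketch based on the resource above; attributes other than user_data are trimmed for brevity):

```hcl
resource "aws_launch_configuration" "application" {
  lifecycle { create_before_destroy = true }

  # Setting user_data to an empty string instead of deleting the
  # attribute sidesteps the "doesn't support update" error.
  user_data     = ""
  name_prefix   = "${var.application_name}-${var.environment}-"
  instance_type = "${var.instance_type}"
}
```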
I had a similar problem with aws_ecs_task_definition resources. Using terraform taint was a successful workaround for this problem.
My 2 cents.
Same issue here when the launch configuration behind an auto scaling group changes. Shouldn't the launch configuration be deleted and re-created, since modifying one is never allowed?
I have the same issue - if you modify something in a launch configuration, Terraform is unable to destroy the resource and then recreate it. I have tried to use taint, but it's not working:
$ terraform taint aws_launch_configuration.as_lc_ecs_pub
The resource aws_launch_configuration.as_lc_ecs_pub in the module root has been marked as tainted!
$ terraform plan -target aws_launch_configuration.as_lc_ecs_pub
-/+ aws_launch_configuration.as_lc_ecs_pub
associate_public_ip_address: "true" => "true"
ebs_block_device.#: "0" => "<computed>"
ebs_optimized: "false" => "<computed>"
enable_monitoring: "true" => "true"
image_id: "ami-64385917" => "ami-64385917"
instance_type: "t2.medium" => "t2.medium"
key_name: "cirep-cvdm-test" => "cirep-cvdm-test"
name: "cirep-pub-ecs" => "cirep-pub-ecs"
root_block_device.#: "0" => "<computed>"
security_groups.#: "1" => "1"
security_groups.1327476399: "sg-906039f7" => "sg-906039f7"
user_data: "58c6eb8c3e3ccae2b73760056df41f2817b2e484" => "5b6a2d6bc0e6bfcddc3045d2e8055d42aa24de1d" (forces new resource)
Plan: 1 to add, 0 to change, 1 to destroy.
$ terraform apply -target aws_launch_configuration.as_lc_ecs_pub
Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.as_lc_ecs_pub: Error creating launch configuration: AlreadyExists: Launch Configuration by this name already exists - A launch configuration already exists with the name cirep-pub-ecs
status code: 400, request id: 25fbb588-5ec5-11e6-bf8c-058208cad09b
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Later edit: fixed it by using name_prefix instead of name on my as_lc_ecs_pub resource.
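In case it helps others, the name_prefix fix looks roughly like this (a sketch; values are taken from the plan output above, and unrelated attributes are omitted):

```hcl
resource "aws_launch_configuration" "as_lc_ecs_pub" {
  # name_prefix makes Terraform generate a unique name on every create,
  # so the replacement never collides with the configuration being
  # destroyed (the AlreadyExists error above).
  name_prefix   = "cirep-pub-ecs-"
  image_id      = "ami-64385917"
  instance_type = "t2.medium"

  lifecycle { create_before_destroy = true }
}
```

The key change is dropping the fixed name attribute; with a fixed name, create_before_destroy can never succeed because the new launch configuration is created while the old, identically named one still exists.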
Thank you,
Ionut
Hey there - sorry for the super long wait. This was finally fixed in #9699 and released in v0.7.8!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.