In essence, if I understand it correctly, Terraform complains about not being able to remove a non-existent resource that is a dependency of another non-existent resource.
Terraform v0.11.7
Plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
- destroy
<= read (data resources)
Terraform will perform the following actions:
<= module.notify_slack.data.archive_file.copy_archive_file
id: <computed>
output_base64sha256: <computed>
output_md5: <computed>
output_path: "/terraform/states/platform/.terraform/modules/9cc629faae97b19da00144a87126b075/.terraform/archive_files/notify_slack.zip"
output_sha: <computed>
output_size: <computed>
source.#: <computed>
source_dir: "/terraform/states/platform/.terraform/modules/9cc629faae97b19da00144a87126b075/build/out"
type: "zip"
~ module.notify_slack.aws_lambda_function.lambda
last_modified: "2018-08-09T11:50:47.239+0000" => <computed>
source_code_hash: "X68jLZpyrn/OOO/NtBO5aTnB0XJZS9a5246lWk2b/3k=" => "MmuuXhYtpIehnhUJk6G34EeFiEZjdftMymOVawgjZUU="
- module.influxdb.module.ecs_cluster.aws_iam_instance_profile.ecs (deposed)
- module.influxdb.module.ecs_cluster.aws_iam_role.ecs (deposed)
- module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs (deposed)
- module.influxdb.module.ecs_cluster.aws_security_group.ecs (deposed)
Plan: 0 to add, 1 to change, 4 to destroy.
Crash:
Applying...
+ terraform apply -parallelism=5 platform-prod-eu-central-1.plan
Releasing state lock. This may take a few moments...
module.notify_slack.data.archive_file.copy_archive_file: Refreshing state...
module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs.deposed: Destroying... (ID: lech-ecs20180208115759339400000003)
module.notify_slack.aws_lambda_function.lambda: Modifying... (ID: us-east-1-dash-slack-lambda)
last_modified: "2018-08-09T11:50:47.239+0000" => "<computed>"
source_code_hash: "X68jLZpyrn/OOO/NtBO5aTnB0XJZS9a5246lWk2b/3k=" => "MmuuXhYtpIehnhUJk6G34EeFiEZjdftMymOVawgjZUU="
module.notify_slack.aws_lambda_function.lambda: Still modifying... (ID: us-east-1-dash-slack-lambda, 10s elapsed)
module.notify_slack.aws_lambda_function.lambda: Still modifying... (ID: us-east-1-dash-slack-lambda, 20s elapsed)
module.notify_slack.aws_lambda_function.lambda: Modifications complete after 24s (ID: us-east-1-dash-slack-lambda)
Releasing state lock. This may take a few moments...
Error: Error applying plan:
1 error(s) occurred:
* module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs (destroy): 1 error(s) occurred:
* aws_launch_configuration.ecs (deposed #0): 1 error(s) occurred:
* aws_launch_configuration.ecs (deposed #0): ValidationError: Launch configuration name not found - Launch configuration lech-ecs20180208115759339400000003 not found
status code: 400, request id: 403631ff-9bcb-11e8-b540-49f5cb28067c
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
The deposed resource should be removed.
Crashes
This is tricky, as it happened after a failed terraform apply, so I am unsure how to reproduce this state.
What would be useful is to know how Terraform calculates the state file checksum. With the ability to update the checksum, removing those deposed resources would be very easy.
Hi @teu! Thanks for reporting this, and sorry for this unfortunate behavior.
I think the issue here is that Terraform expects that the only reasonable operation to do for a deposed instance is to destroy it. I think what it should ideally do is first _refresh_ that instance during plan -- just as it would do for a non-deposed resource -- and then Terraform would get an opportunity to notice that the remote object is already deleted and not plan to delete it.
In the meantime, I think the only way to avoid this is to let Terraform be the one to delete the deposed object, rather than some other system. If the object still exists at the point Terraform tries to delete it, the destroy should complete successfully.
Hi @apparentlymart. The problem is, we have this in our production state and we are not quite sure how to get rid of it without removing infrastructure. I tried removing it with terraform state rm and also tried terraform refresh; neither worked.
Any idea how to fix the state?
Hi @teu!
I'm sorry I don't have a great answer here, but I do have an idea for a possible workaround:
1. Run terraform state rm module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs to make Terraform "forget" all of the remote objects associated with that resource.
2. Run terraform import module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs NAME, where NAME is the launch configuration's current name (note it down before step 1), to re-import the current one back into the state.

I will be the first to admit that this is a bothersome workaround, because it involves creating a temporary state where Terraform doesn't know about the remote resource at all, so it will take some care to ensure that another Terraform run doesn't try to create a fresh one in the meantime.
A variant of this is possible if your environment can tolerate there temporarily being another duplicate launch configuration: do steps 1 and 2 from above and then just run terraform apply to have Terraform create a new object for module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs. Once that's succeeded, delete the old one (now forgotten by Terraform) manually from the AWS console.
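For reference, the workaround commands above can be sketched as a shell session. This is a sketch only: the resource address is taken from this thread, NAME is a placeholder for the actual launch configuration name, and you should verify each step against your own state before running it.

```shell
# Record the launch configuration's current name before Terraform forgets it
# (readable via `terraform state show`, or from the AWS console).
terraform state show module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs

# Drop the resource, including its deposed instances, from the state.
terraform state rm module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs

# Re-import the still-existing launch configuration under the same address.
# NAME is a placeholder for the name recorded above.
terraform import module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs NAME
```

Between the rm and the import, Terraform has no record of the remote object, so make sure no other plan or apply runs in that window.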
Hi @apparentlymart,
This actually solved my problem. We are very grateful!
Cheers
Hello,
I also have this problem.
In my opinion, this issue shouldn't have been closed, as it wasn't solved - only a workaround was provided.
Please re-open the issue.
@vlad2 the workaround works for me here:
$ terraform plan
[...]
Plan: 20 to add, 0 to change, 1 to destroy.
Then:
$ terraform state rm module.eks.aws_launch_configuration.eks
1 items removed.
Item removal successful.
$ terraform plan
[...]
Plan: 21 to add, 0 to change, 0 to destroy.
@apparentlymart You are a lifesaver. I wish I could thank you somehow. Cheers and thank you!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.