Terraform v0.11.11
+ provider.google v1.20.0
terraform init
terraform destroy -var 'cluster_name=lu-application-cluster' -var 'node_count=2'
Destroy complete! Resources: 1 destroyed.
Destroy complete! Resources: 0 destroyed.
I'm running into the same issue on v0.12.0-alpha4.
Terraform v0.12.0-alpha4 (2c36829d3265661d8edbd5014de8090ea7e2a076)
+ provider.aws v1.40.0-6-gb23683732-dev
From the output below, you can see that it is destroying things, but they're not reported as destroyed. I also checked the AWS Console and verified that the resources no longer existed. The state file also appears to have been updated properly; a plan after the destroy shows that all of the resources need to be recreated.
module.info.data.aws_vpc.destination: Refreshing state...
module.info.data.aws_subnet_ids.destination: Refreshing state...
module.info.data.aws_subnet.destination[0]: Refreshing state...
module.info.data.aws_subnet.destination[1]: Refreshing state...
module.network.module.load_balancing.aws_lb.lb: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-2:172271833182:loadbalancer/app/dev-mc-network-lb/0dac109eadba678e]
module.network.module.load_balancing.aws_lb_target_group.http: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-2:172271833182:targetgroup/dev-mc-network-lb-http/87ad48ddad303fdf]
module.network.module.security.aws_security_group.lb: Refreshing state... [id=sg-0c6f81a6b08d9b41f]
module.network.module.security.aws_security_group.node: Refreshing state... [id=sg-076503eb81fe97216]
module.network.module.security.aws_security_group_rule.node_vpc_in_ssh: Refreshing state... [id=sgrule-2243070925]
module.network.module.security.aws_security_group_rule.node_vpc_in_http: Refreshing state... [id=sgrule-672460542]
module.network.module.security.aws_security_group_rule.node_all_out: Refreshing state... [id=sgrule-247035264]
module.network.module.security.aws_security_group_rule.node_vpc_in_javahttp: Refreshing state... [id=sgrule-3715111259]
module.network.module.security.aws_security_group_rule.lb_all_in_https: Refreshing state... [id=sgrule-3631827115]
module.network.module.security.aws_security_group_rule.lb_all_in_http: Refreshing state... [id=sgrule-1490170565]
module.network.module.security.aws_security_group_rule.lb_all_out: Refreshing state... [id=sgrule-2109225246]
module.network.module.load_balancing.aws_lb.lb: Destroying... [id=arn:aws:elasticloadbalancing:us-east-2:172271833182:loadbalancer/app/dev-mc-network-lb/0dac109eadba678e]
module.network.module.security.aws_security_group_rule.lb_all_in_https: Destroying... [id=sgrule-3631827115]
module.network.module.load_balancing.aws_lb_target_group.http: Destroying... [id=arn:aws:elasticloadbalancing:us-east-2:172271833182:targetgroup/dev-mc-network-lb-http/87ad48ddad303fdf]
module.network.module.security.aws_security_group_rule.node_vpc_in_ssh: Destroying... [id=sgrule-2243070925]
module.network.module.security.aws_security_group_rule.node_vpc_in_javahttp: Destroying... [id=sgrule-3715111259]
module.network.module.security.aws_security_group_rule.lb_all_in_http: Destroying... [id=sgrule-1490170565]
module.network.module.security.aws_security_group_rule.lb_all_out: Destroying... [id=sgrule-2109225246]
module.network.module.security.aws_security_group_rule.node_all_out: Destroying... [id=sgrule-247035264]
module.network.module.security.aws_security_group_rule.node_vpc_in_http: Destroying... [id=sgrule-672460542]
module.network.module.load_balancing.aws_lb_target_group.http: Destruction complete after 1s
module.network.module.security.aws_security_group_rule.node_vpc_in_ssh: Destruction complete after 1s
module.network.module.security.aws_security_group_rule.lb_all_in_https: Destruction complete after 1s
module.network.module.security.aws_security_group_rule.node_vpc_in_javahttp: Destruction complete after 1s
module.network.module.security.aws_security_group_rule.lb_all_in_http: Destruction complete after 1s
module.network.module.load_balancing.aws_lb.lb: Destruction complete after 2s
module.network.module.security.aws_security_group_rule.node_vpc_in_http: Destruction complete after 2s
module.network.module.security.aws_security_group_rule.lb_all_out: Destruction complete after 2s
module.network.module.security.aws_security_group.lb: Destroying... [id=sg-0c6f81a6b08d9b41f]
module.network.module.security.aws_security_group_rule.node_all_out: Destruction complete after 3s
module.network.module.security.aws_security_group.node: Destroying... [id=sg-076503eb81fe97216]
module.network.module.security.aws_security_group.lb: Destruction complete after 1s
module.network.module.security.aws_security_group.node: Destruction complete after 0s
Destroy complete! Resources: 0 destroyed.
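For what it's worth, the verification described above looked roughly like this (a sketch only; the AWS CLI cross-check is illustrative and was not part of the original run):
terraform state list   # prints nothing, so the state really was emptied
terraform plan         # proposes recreating every resource, as expected after a destroy
# Optional cross-check against AWS itself; the load balancer name is taken from the log above.
# Fails with LoadBalancerNotFound once the LB is gone.
aws elbv2 describe-load-balancers --names dev-mc-network-lb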
I've started to see this issue in a new scenario.
I have 6 information systems in my project, each requiring a scheduled terraform destroy across a number of workspaces every day. Until recently I ran these sequentially with a shell for loop, which worked but took a long time. I now have a background-process implementation of the same loop, which runs the following script (summarized) in the background.
...
check_lock "${_app_name}" "${_workspace}" && {
  cd "${_cwd}"
  terraform workspace select "${_workspace}"
  # Drop the ansible_remote modules from state so they are not destroyed
  terraform state rm module.x_ansible_remote module.y_ansible_remote
  terraform destroy -auto-approve -lock=false -lock-timeout=0s -parallelism="${TF_PARALLELISM}" -refresh=true
}
echo -e "\n>>> INFO - Verifying destroy\nComponent - ${_app_name}\nWorkspace - ${_workspace}"
# The workspace is only deleted if its state is empty after the destroy
local state=$(terraform state list)
if [ -z "${state}" ]; then
  terraform workspace select default && {
    terraform workspace delete "${_workspace}"
  }
else
  echo -e "\n### Error - Workspace is not empty\nComponent - ${_app_name}\nWorkspace - ${_workspace}"
fi
...
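The background launch itself is just the usual shell job pattern, roughly this (a sketch; the wrapper function and variable names here are mine, not from the real script):
for _workspace in ${WORKSPACES}; do
  # destroy_workspace wraps the snippet above; "&" runs each destroy as its own background job
  destroy_workspace "${_app_name}" "${_workspace}" > "logs/${_app_name}-${_workspace}.log" 2>&1 &
done
wait   # block until every background destroy has finished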
Since using this background implementation, a number of random workspaces report "Destroy complete! Resources: 0 destroyed." even though the log clearly shows the resources in state being refreshed and destroyed. If I attempt the same flow in the foreground, the resources are destroyed successfully.
Even more oddly, when the destroy runs in the background, the terraform state list check after it returns empty, yet during troubleshooting, if I run terraform state list again in the foreground, all of the state items are still present.
I'm trying to understand whether Terraform parallelism, AWS API rate limiting, or simply the number of concurrent background processes executing terraform commands in a single directory across multiple workspaces is causing the issue, but so far replication has been difficult because of how randomly the bug occurs.
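One thing I still want to try: the currently selected workspace is recorded in .terraform/environment under the working directory, so concurrent jobs that each run terraform workspace select in the same directory can overwrite each other's selection. A possible isolation step (sketch, untested) is to give each background job its own data directory:
# Sketch, untested: point each job at its own .terraform directory so that
# concurrent "terraform workspace select" calls cannot clobber one another.
export TF_DATA_DIR="${_cwd}/.terraform-${_workspace}"
terraform init -input=false
terraform workspace select "${_workspace}"
terraform destroy -auto-approve -parallelism="${TF_PARALLELISM}"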
Having the exact same issue. I'm using S3 as backend storage.
I am seeing the same issue on Azure: the destroy completes and destroys most of the resources, but some resources never seem to be attempted. I don't see any errors on destroy.
When I run destroy again, the remaining resources get destroyed.
I am using an Azure storage account for my state file.
Same issue on Terraform v0.12.10 with local state
Any update on this? I'm currently having the same issue.
That's frustrating and very confusing.
Why can aws_elasticache_cluster only create a single one?!
I resolved it with the -state parameter, like this: terraform destroy -state="mystate.tfstate"
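If your state lives in a remote backend rather than a local file, you may need to pull a local copy first; roughly (untested sketch, the file name is arbitrary):
terraform state pull > mystate.tfstate        # grab a local copy of the current state
terraform destroy -state="mystate.tfstate"    # run the destroy against that explicit file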
I am also getting the same error. The above command terraform destroy -state="mystate.tfstate" is not working for me.