Terraform v0.11.7
Affected resource: aws_volume_attachment

When running a terraform destroy in which you want to destroy an aws_instance and its aws_volume_attachment, Terraform (or the AWS API) will attempt to destroy the volume attachment first and the instance second. This often fails with "Error waiting for Volume (vol-010b3d979b1027867) to detach from Instance".

If an aws_volume_attachment is going to be destroyed, and the associated aws_instance is also going to be destroyed, it would make sense to destroy the instance first and the aws_volume_attachment second, which would never run any risk of timing out.
To reproduce: create an aws_ebs_volume, an aws_instance, and an aws_volume_attachment, then run terraform destroy.
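A minimal configuration along these lines reproduces it (0.12-style syntax; the AMI, availability zone, and device name are placeholders, not from the report):

```hcl
resource "aws_instance" "example" {
  ami               = "ami-0123456789abcdef0" # placeholder AMI
  instance_type     = "t2.micro"
  availability_zone = "us-east-1a"
}

resource "aws_ebs_volume" "example" {
  availability_zone = "us-east-1a" # must match the instance's AZ
  size              = 10
}

resource "aws_volume_attachment" "example" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.example.id
  instance_id = aws_instance.example.id
}
```

Because the attachment depends on the instance, terraform destroy schedules the attachment's destroy before the instance's, which is exactly where the detach timeout occurs.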
I can confirm this for our infra too. On the latest Terraform, destroying aws_volume_attachment and aws_instance together fails with this timeout error.
Running into this as well.
➜ tfa
data.aws_availability_zones.az: Refreshing state...
data.aws_route53_zone.int-orgaws: Refreshing state...
data.aws_route53_zone.orgaws: Refreshing state...
data.aws_vpc.stage-org: Refreshing state...
data.aws_ami.stage-mongodb: Refreshing state...
data.aws_subnet.stage-mongodb: Refreshing state...
data.aws_security_group.stage-mongodb: Refreshing state...
aws_instance.stage-mongodb: Refreshing state... (ID: i-xxxxxxxxxxxxxxxxxxx)
aws_route53_record.stage-mongodb-int: Refreshing state... (ID: ACCID-stage-mongodb-int_A)
aws_ebs_volume.stage-mongodb-volume-a: Refreshing state... (ID: vol-xxxxxxxxxxx)
aws_volume_attachment.stage-mongodb-volume-attachment-a: Refreshing state... (ID: vai-xxxxxxxxx)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- aws_ebs_volume.stage-mongodb-volume-a
- aws_instance.stage-mongodb
- aws_route53_record.stage-mongodb-int
- aws_volume_attachment.stage-mongodb-volume-attachment-a
Plan: 0 to add, 0 to change, 4 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_volume_attachment.stage-mongodb-volume-attachment-a: Destroying... (ID: vai-xxxxxxxxxxx)
aws_route53_record.stage-mongodb-int: Destroying... (ID: ACCID-stage-mongodb-int_A)
aws_volume_attachment.stage-mongodb-volume-attachment-a: Still destroying... (ID: vai-385500658, 10s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 10s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 20s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 30s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 40s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 50s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 1m0s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 1m10s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 1m20s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 1m30s elapsed)
aws_route53_record.stage-mongodb-int: Still destroying... (ID: ACCID-stage-mongodb-int_A, 1m40s elapsed)
aws_route53_record.stage-mongodb-int: Destruction complete after 1m40s
Error: Error applying plan:
1 error(s) occurred:
* aws_volume_attachment.stage-mongodb-volume-attachment-a (destroy): 1 error(s) occurred:
* aws_volume_attachment.stage-mongodb-volume-attachment-a: Error waiting for Volume (vol-xxxxxxxxxxxxx) to detach from Instance: i-xxxxxxxxxxxxx
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
This is also a problem if EBS volume IDs are replaced in the template; you'll get a similar error. If a volume can't be detached, it would be good to have a variable that did both of these things:
- shut down instances if volumes can't be detached
- start up instances after volume detachments and attachments are complete
This would also help when doing terraform destroy.
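Until something like that exists in core, one rough user-side approximation is a destroy-time local-exec provisioner that stops the instance before the detach; a sketch, assuming the AWS CLI is available and using illustrative resource names:

```hcl
resource "null_resource" "stop_before_detach" {
  # Destroy-order trick: because this resource depends on the attachment,
  # Terraform destroys it (running the destroy-time provisioner) before
  # attempting the DetachVolume call.
  depends_on = [aws_volume_attachment.data]

  triggers = {
    instance_id = aws_instance.app.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 stop-instances --instance-ids ${self.triggers.instance_id} && aws ec2 wait instance-stopped --instance-ids ${self.triggers.instance_id}"
  }
}
```

This only covers the destroy path; the "start instances up again afterwards" half would need a matching create-time counterpart.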
I've attached an example below, from changing the aws_volume_attachment name, that causes the same problem.
Error: Error applying plan:
5 error(s) occurred:
* module.softnas.aws_volume_attachment.ebs_att[0] (destroy): 1 error(s) occurred:
* aws_volume_attachment.ebs_att.0: Error waiting for Volume (vol-0dfdb289f3a00f63c) to detach from Instance: i-055c1f1bd7d1b417c
* module.softnas.aws_cloudformation_stack.SoftNAS1Stack: 1 error(s) occurred:
* aws_cloudformation_stack.SoftNAS1Stack: Creating CloudFormation stack failed: AlreadyExistsException: Stack [FCB-SoftNAS1Stack] already exists
status code: 400, request id: 303555c9-287f-11e9-b20c-4d54279ee980
* module.softnas.aws_volume_attachment.ebs_att[1] (destroy): 1 error(s) occurred:
* aws_volume_attachment.ebs_att.1: Error waiting for Volume (vol-071219d59b8153d6a) to detach from Instance: i-055c1f1bd7d1b417c
* module.softnas.aws_volume_attachment.ebs_att[3] (destroy): 1 error(s) occurred:
* aws_volume_attachment.ebs_att.3: Error waiting for Volume (vol-0c0703a9c333cef86) to detach from Instance: i-055c1f1bd7d1b417c
* module.softnas.aws_volume_attachment.ebs_att[2] (destroy): 1 error(s) occurred:
* aws_volume_attachment.ebs_att.2: Error waiting for Volume (vol-0a21cf466c807a753) to detach from Instance: i-055c1f1bd7d1b417c
I ran into this today; is there a workaround? I tried running terraform destroy --target=module.machines.ep_instance, but it still tries to destroy the volume_attachment object first.
FYI, I ran into this when implementing Windows servers with MSSQL.
I found that adding a null_resource helps: it has a depends_on for the instance and a remote-exec provisioner which includes a when = "destroy" option and executes a PowerShell command to stop the MSSQLSERVER service on the host.
After this, terraform destroy will destroy the attachment.
You could replace the MSSQL service with relevant shell commands to stop services, unmount disks, etc.
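A sketch of that pattern in 0.12-style syntax; the resource names, WinRM connection details, and the extra depends_on on the attachment are assumptions, not from the comment above:

```hcl
resource "null_resource" "stop_mssql_on_destroy" {
  # Depending on the attachment as well as the instance (an addition to
  # what the comment describes) guarantees this destroy-time provisioner
  # runs before Terraform tries to detach the volume.
  depends_on = [aws_instance.mssql, aws_volume_attachment.mssql_data]

  # Capture connection details in triggers so the destroy-time
  # provisioner can reference them via self.
  triggers = {
    host = aws_instance.mssql.private_ip
  }

  connection {
    type     = "winrm"
    host     = self.triggers.host
    user     = "Administrator"    # placeholder credentials
    password = var.winrm_password # placeholder; newer Terraform warns on external refs in destroy provisioners
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      # Stop the service so the volume is no longer busy and can detach.
      "powershell -Command \"Stop-Service -Name MSSQLSERVER -Force\"",
    ]
  }
}
```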
I'm facing this issue as well.
Terraform v0.12.1
Terraform v0.12.3
Still failing...
I have the same issue
Terraform v0.12.5
We are facing this issue as well.
Terraform v0.12.5
provider.aws v2.22
Facing the same issue
Terraform v0.11.14
We have the same problem
Terraform 0.12.7 and provider AWS 2.25.0
Facing the same issue here; we worked around it by including skip_destroy = true in the aws_volume_attachment resource.
According to the documentation, setting skip_destroy = true removes the aws_volume_attachment resource from the state file instead of detaching the volume, which unblocks the terraform destroy operation.
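For reference, a sketch of where the flag lives (resource names are placeholders):

```hcl
resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.app.id

  # On destroy, remove the attachment from state instead of calling
  # DetachVolume; the volume detaches when the instance terminates.
  skip_destroy = true
}
```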
Hey all, Happy Holidays.
What fixed it for me was the first line here:
You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance.
If the volume is still mounted on the instance, it will never detach (it even shows as gray in the AWS console, so Terraform may be able to help us here). Debugging Terraform with export TF_LOG=TRACE showed that the API returned a 200 OK for the DetachVolume call, but subsequent calls to DescribeVolume showed the volume as busy.
After I unmounted the volume, terraform destroyed the volume attachment successfully.
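The destroy-time provisioner pattern from the MSSQL comment above can do the unmount automatically on Linux; a sketch, with the mount point, SSH details, and resource names as assumptions:

```hcl
resource "null_resource" "unmount_on_destroy" {
  # Destroyed before the attachment, so the unmount happens first.
  depends_on = [aws_volume_attachment.data]

  triggers = {
    host = aws_instance.app.public_ip
  }

  connection {
    type        = "ssh"
    host        = self.triggers.host
    user        = "ec2-user"            # placeholder
    private_key = file("~/.ssh/id_rsa") # placeholder
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "sudo umount /data", # unmount so DescribeVolume stops reporting the volume as busy
    ]
  }
}
```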
If we're listing workarounds, shutting down the instance before the destroy will work as well. The detach will work since the volume would not be "busy" at that point.
Same problem here. It's only been a bug for 19 months. Could we get Terraform to STOP instances with attached storage, DETACH/DELETE all volumes, and then TERMINATE the instance? It seems like a fundamental function that fails often in this circumstance.
There should probably be a provision to run a script just before the volume is detached.
+1 here
Terraform version: 0.12.25
AWS Provider version: 2.65.0
Another day, another issue with super basic functionality expected from Terraform. No response from the Terraform team, either.
For the record: the issue persists.
Got this issue too.
Terraform v0.12.28
+ provider.aws v2.63.0
Encountered the same issue; I have to manually unmount the volume and then proceed with terraform destroy.
Persists in
terraform v0.13.0
provider.aws v2.77.0
Problem still here
Terraform v0.13.4
AWS v3.10.0
Same problem!!!
terraform v0.13.2
provider.aws ~> 3.0