Terraform-provider-aws: Can't delete invalid batch_compute_environment

Created on 7 May 2019 · 4 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

v0.11.7

Affected Resource(s)

aws_batch_compute_environment

Expected Behavior

First, I would expect that deploying a Batch Compute Environment with an invalid configuration would cause terraform apply to fail, but it does not. If you, for example, misspell the service role, the deployment succeeds but the Compute Environment ends up in an INVALID state, which Terraform still treats as success; I think that is a separate issue.

Running terraform destroy should destroy the Batch Compute Environment, but instead the provider expects a VALID Compute Environment and fails when it is not.

Actual Behavior

Error: Error applying plan:

1 error(s) occurred:

  • module.test_batch.aws_batch_compute_environment.test_batch_environment (destroy): 1 error(s) occurred:

  • aws_batch_compute_environment.test_batch_environment: error disabling Batch Compute Environment (my-test-batch-environment): unexpected state 'INVALID', wanted target 'VALID'
    . last error: %!s()

Steps to Reproduce

Deploy a Batch Compute Environment with an invalid configuration, such as a misspelled service role. terraform apply will complete just fine, but the Compute Environment will be left in state INVALID. Running terraform destroy then fails with the error shown above.
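A minimal configuration along these lines reproduces it (all names and IDs are hypothetical; note the deliberately misspelled role name in service_role):

```hcl
# Hypothetical reproduction: the service_role ARN points at a role that
# does not exist (misspelled), so the environment is created but ends up
# in state INVALID.
resource "aws_batch_compute_environment" "test_batch_environment" {
  compute_environment_name = "my-test-batch-environment"
  type                     = "MANAGED"

  # Misspelled on purpose -- "AWSBatchServiceRoleTypo" does not exist.
  service_role = "arn:aws:iam::123456789012:role/AWSBatchServiceRoleTypo"

  compute_resources {
    type               = "EC2"
    instance_role      = aws_iam_instance_profile.ecs_instance_role.arn
    instance_type      = ["optimal"]
    min_vcpus          = 0
    max_vcpus          = 4
    security_group_ids = [aws_security_group.batch.id]
    subnets            = [aws_subnet.main.id]
  }
}
```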

bug service/batch


All 4 comments

Related #8550

We ran into this same problem today with:

Terraform v0.12.23

  • provider.archive v1.3.0
  • provider.aws v2.54.0
  • provider.template v2.1.2

We had the wrong SSH key name configured. Terraform couldn't delete the resources, but it was easy to delete them in the AWS web console. To get Terraform to proceed, we also had to manually remove them from the tfstate.
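For anyone hitting the same wall, a manual cleanup along these lines is one way out (the environment name and resource address are taken from this thread's example and will differ in your setup):

```shell
# Disable, then delete the INVALID compute environment via the AWS CLI
# (the console works too).
aws batch update-compute-environment \
  --compute-environment my-test-batch-environment \
  --state DISABLED

aws batch delete-compute-environment \
  --compute-environment my-test-batch-environment

# Remove the now-gone resource from Terraform state so destroy can proceed.
terraform state rm \
  module.test_batch.aws_batch_compute_environment.test_batch_environment
```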

I have also just run into this issue:

Error: error disabling Batch Compute Environment (bob-cluster): unexpected state 'INVALID', wanted target 'VALID'. last error: %!s(<nil>)

Basically this means that during development my developers will have to resort to the CLI/console to clean up these resources.

terraform --version
Terraform v0.13.3

  • provider registry.terraform.io/-/aws v3.8.0
  • provider registry.terraform.io/hashicorp/aws v3.8.0
  • provider registry.terraform.io/hashicorp/null v2.1.2

I've also run into this issue in development. When running terraform destroy, the Batch service role is destroyed before the compute environment, which then causes the compute environment to become INVALID and therefore impossible to delete. I have tried adding a depends_on to the compute environment referencing the role, but this has not solved the problem.
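For reference, the attempted (non-working) dependency looks roughly like this (resource names hypothetical):

```hcl
resource "aws_batch_compute_environment" "this" {
  compute_environment_name = "dev-env"
  type                     = "MANAGED"
  service_role             = aws_iam_role.batch_service_role.arn

  # Attempted fix: force Terraform to destroy the compute environment
  # before the service role. Per the comment above, this did not
  # prevent the INVALID state on destroy.
  depends_on = [aws_iam_role.batch_service_role]
}
```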
