Hey,
I created different environments with Terraform and now I want to destroy one. Ideally I should be able to destroy the whole infrastructure given only the tfstate file, but for now Terraform requires me to issue commands from the same directory they were originally run in, with the same files.
What prevents destroying a project given only the tfstate file? I understand one blocking factor is that the initial variables aren't stored in tfstate, which is covered by #5424. What are the other factors?
I too expected to be able to save a .tfstate file somewhere, and fetch it to run a terraform destroy.
When running a `terraform destroy` without the templates and only the .tfstate, `aws_route53_record` objects were destroyed, but not my instances. The instances were only ever shown as "Refreshing state...".
It may be telling that I had to `touch empty.tf` before `terraform destroy`.
Using `terraform plan -destroy -out destroy.plan` (with .tf files present) and then `terraform apply destroy.plan` (without .tf files present) works as desired. But at that point, I'd probably just start backing up all the .tf files along with the .tfstate instead.
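The two-step workflow described above can be sketched as follows (the plan file name is just an example):

```
# Step 1: with the .tf files still present, record a destroy plan
terraform plan -destroy -out destroy.plan

# Step 2: later, even without the .tf files, apply the saved plan
terraform apply destroy.plan
```

Note that the saved plan captures the actions to take, so the second command no longer needs the configuration that produced it.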
The Terraform state does not contain provider configuration or credentials, so there is not enough information in there to destroy objects.
The minimum required to successfully destroy is all of the `provider` blocks for the providers that created all of the existing resources, along with any `variable` blocks they depend on.
In my early days with Terraform I too had this assumption, but unfortunately it's not really possible to support this completely-unattended destroy as long as Terraform does not retain provider configuration and credentials in the state file. The ongoing discussion in #516 is somewhat related to this.
Awesome insight. Thank you for being awesome @apparentlymart and working on a valuable project.
I'm not familiar with `variable` blocks yet. Does `terraform remote` with an S3 bucket store all the necessary files to pull back and destroy?
@apparentlymart did a great job.
I'll add further: you can destroy infra with only a Terraform state file if you set the proper environment variables so Terraform knows how to communicate with the providers. Otherwise, just write some basic .tf files to configure this.
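Putting the two answers above together, here's a minimal sketch of what could sit next to the terraform.tfstate, assuming the AWS provider with credentials supplied through environment variables (the region value is illustrative):

```hcl
# destroy.tf -- minimal config placed alongside terraform.tfstate
# Credentials come from the environment (AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY), so no secrets need to live in this file.
provider "aws" {
  region = "us-east-1" # must match the region the resources were created in
}
```

With that file in place, `terraform destroy` has enough provider configuration to refresh and delete the resources recorded in the state.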
A caveat with the above two answers: it's not just the variables that the providers depend on; it's the whole variables.tf file. Everything needs to be provided before Terraform will resolve the provider. Not sure if this is a bug or just needs a documentation update.
For example:
```hcl
// main.tf, where terraform destroy is run, and where terraform.tfstate lives
provider "aws" {
  region                  = "${var.vpc_region}"
  shared_credentials_file = "hardCoded"
  profile                 = "hardCoded"
}

module "vpc" {
  source       = "./modules/vpc"
  env          = "${var.env}"
  vpc_region   = "${var.vpc_region}"
  vpc_cidr     = "${var.vpc_cidr}"
  subnet_cidrs = "${var.subnet_cidrs}"
  ports        = "${var.ports}"
}
```
From that file you can see that only `vpc_region` is needed to resolve the provider. However, `destroy` wants the other vars:
```
$ terraform destroy
var.env
  One of dev/staging/production

  Enter a value: ^C
```
Which comes from:
```hcl
variable "env" {
  description = "One of dev/staging/production"
}

variable "vpc_region" {}
variable "vpc_cidr" {}

variable "subnet_cidrs" {
  type = "map"
}

variable "ports" {
  type = "map"

  default = {
    ssh = 22
  }
}
```
At first I thought maybe it was just trying to resolve variables in the whole file (main.tf), but then I commented out the whole module definition (so main.tf only has the provider), and it still asked for var.env first. It's resolving the whole variables.tf file, even though it needs just one variable. I'm using the default local state backend for all of this, so the only sources are the local terraform.tfstate and main.tf.
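A workaround, assuming the production.tfvars used at creation time is still around, is to hand the variables to `destroy` explicitly so it stops prompting:

```
# Supply all declared variables from the original tfvars file
terraform destroy -var-file=production.tfvars

# Or set individual values on the command line
terraform destroy -var 'env=production'
```

This doesn't change the underlying behavior (every declared variable still has to resolve), it just satisfies the prompts non-interactively.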
Full example:
```
$ terraform init
...
Terraform has been successfully initialized!

$ terraform plan -var-file=production.tfvars -out create.plan
...
Plan: 12 to add, 0 to change, 0 to destroy.

$ ls
create.plan  main.tf  modules  production.tfvars  variables.tf

$ terraform apply create.plan
Apply complete! Resources: 12 added, 0 changed, 0 destroyed.

$ ls
create.plan  main.tf  modules  production.tfvars  terraform.tfstate  variables.tf

$ terraform destroy
var.env
  One of dev/staging/production

  Enter a value: ^C
Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down...
```
What gets me is that a destroy on AWS even goes out and attempts to resolve data resources such as SSM Parameters. Why are those needed when you're going to kill all the infra anyway?
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.