On Terraform 0.11.0 and 0.11.1, I cannot destroy my module. I made the simplest possible config to reproduce the problem; see "Terraform Configuration Files".
Terraform v0.11.1
+ provider.aws v1.5.0
It makes just one EC2 instance:
# mytest/terraform.tf
terraform {
  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "foobarfoobar"
    key            = "mytest.tfstate"
    dynamodb_table = "infra-lock"
  }
}

locals {
  region = "ap-northeast-1"
}

module "root_ec2" {
  source = "../lib/simple-ec2"
  region = "${local.region}"
}

# lib/simple-ec2/terraform.tf
variable "region" {}

provider "aws" {
  region = "${var.region}"
}

resource "aws_instance" "simple_ec2" {
  ami           = "ami-bec974d8"
  instance_type = "t2.micro"
}
I think these conditions are necessary to reproduce the issue:
- mytest uses S3 as a Terraform backend.
- simple-ec2 has provider.aws.
- simple-ec2 sets provider.aws.region via var.region.
- mytest passes local.region to simple-ec2's var.region.

I've tried two ways to destroy the EC2 instance:

1. Comment out module.root_ec2, then plan & apply.
2. plan -destroy, then apply.

Both ways should destroy the EC2 instance cleanly.
I commented it out like this:
# mytest/terraform.tf
terraform {
  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "foobarfoobar"
    key            = "mytest.tfstate"
    dynamodb_table = "infra-lock"
  }
}

locals {
  region = "ap-northeast-1"
}

# module "root_ec2" {
#   source = "../lib/simple-ec2"
#   region = "${local.region}"
# }
plan after commenting out fails with a very weird error:
$ terraform plan -out terraform.tfplan
Error: Error asking for user input: 1 error(s) occurred:
* module.root_ec2.aws_instance.simple_ec2: configuration for module.root_ec2.provider.aws is not present; a provider configuration block is required for all operations
In this case, plan -destroy is fine. But apply fails with another weird error:
$ terraform plan -out terraform.tfplan -destroy
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- module.root_ec2.aws_instance.simple_ec2
Plan: 0 to add, 0 to change, 1 to destroy.
...
$ terraform apply terraform.tfplan
...
Error: Error applying plan:
1 error(s) occurred:
* module.root_ec2.provider.aws: Not a valid region:
Terraform does not automatically rollback...
...
Hi @sublee,
Sorry, this is a little confusing given how previous versions worked, but it is the expected behavior with the new handling of providers in modules. In earlier versions this would often fail with harder-to-decipher errors, or in hard-to-recover ways, because it would usually end up using an incorrectly configured provider.
Rather than declaring the provider region within the module, you will want to pass a configured provider into the module.
# mytest/terraform.tf
provider "aws" {
  alias  = "custom"
  region = "${local.region}"
}

locals {
  region = "ap-northeast-1"
}

module "root_ec2" {
  source = "../lib/simple-ec2"

  providers = {
    "aws" = "aws.custom"
  }
}
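For completeness, the child module can then drop its own provider block, since the configured provider is supplied via the providers map. A minimal sketch of what lib/simple-ec2/terraform.tf would be reduced to (assuming the region variable is no longer needed for anything else):

```hcl
# lib/simple-ec2/terraform.tf
# The provider configuration now comes from the calling module's
# providers map, so no provider block or region variable is required here.
resource "aws_instance" "simple_ec2" {
  ami           = "ami-bec974d8"
  instance_type = "t2.micro"
}
```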
@jbardin I didn't know this restriction. I'll try your suggestion tomorrow. Thanks for the answer.
It would be great if a provider in a source module also failed when creating resources, not only when destroying them.
@jbardin It works very well. Thanks!
Hi @sublee! Just wanted to follow up on your suggestion there.
The idea of keeping provider declarations to the root module is a strong _recommendation_, since in general it makes the relationships between providers and modules easier to see in more complex configurations.
However, we still _allow_ providers in child modules as a measure of flexibility for more rare/complex use-cases, at the expense of a more awkward workflow to remove that module from configuration:
1. Run terraform destroy -target=module.root_ec2 to destroy all of the resources in the child module while its configuration is still present. This is possible because the corresponding provider block remains in configuration while this command is run.
2. Remove the module block from the parent module.
3. Run terraform apply to confirm that no further changes are required because the module resources were already eliminated.

We may consider restricting this further in future based on emerging real-world usage, since indeed it is confusing if you find yourself in this situation accidentally. We may find that after people have become accustomed to the "new normal" there are better ways to achieve use-cases where a module might have its own provider configuration(s), but given the existing usage of this pattern we wanted to compromise and allow people to continue with their existing approaches for now if desired.
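The removal workflow above, sketched as a shell session (module name taken from this issue; this is an illustration, not a verbatim transcript):

```shell
# 1. Destroy the child module's resources while its configuration,
#    including its provider block, is still present.
terraform destroy -target=module.root_ec2

# 2. Remove (or comment out) the module "root_ec2" block
#    in mytest/terraform.tf.

# 3. Confirm that no further changes are required, since the
#    module's resources were already destroyed in step 1.
terraform apply
```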
@apparentlymart how can I destroy a module with terraform cloud? If I try destroy with a target I'm getting this:
$ terraform destroy -target=module.test2
Error: Resource targeting is currently not supported
The "remote" backend does not support resource targeting at this time.
I can't figure out how to delete a module without tearing down the entire stack.
Actually, nevermind I figured this out. I found this part of the docs: https://www.terraform.io/docs/configuration/modules.html#providers-within-modules
I think I read that when I first started but it didn't click until I was actually having this issue. I was able to fix the issue by moving the provider outside of the module.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.