→ terraform -v
Terraform v0.12.0-rc1
+ provider.aws v2.10.0
→ terraform apply "/tmp/vpc"
module.transit_gw.aws_ec2_transit_gateway.this: Creating...
module.vpc_2.aws_vpc.this: Creating...
module.vpc_2.aws_vpc.this: Creation complete after 2s [id=vpc-0957aef47490b6983]
module.vpc_2.aws_route_table.redzone[0]: Creating...
module.vpc_2.aws_route_table.greenzone[0]: Creating...
module.vpc_2.aws_internet_gateway.this[0]: Creating...
module.vpc_2.aws_subnet.greenzone[0]: Creating...
module.vpc_2.aws_route_table.orangezone[0]: Creating...
module.vpc_2.aws_subnet.redzone[0]: Creating...
module.vpc_2.aws_subnet.orangezone[1]: Creating...
module.vpc_2.aws_subnet.orangezone[0]: Creating...
module.vpc_2.aws_subnet.redzone[1]: Creating...
module.vpc_2.aws_route_table.redzone[0]: Creation complete after 0s [id=rtb-0a9ae3ce6e748e1fb]
module.vpc_2.aws_route_table.orangezone[0]: Creation complete after 0s [id=rtb-02ad55860d7e95676]
module.vpc_2.aws_subnet.orangezone[2]: Creating...
module.vpc_2.aws_subnet.redzone[2]: Creating...
module.vpc_2.aws_route_table.greenzone[0]: Creation complete after 0s [id=rtb-01f1baacf60f5a745]
module.vpc_2.aws_subnet.greenzone[1]: Creating...
module.vpc_2.aws_internet_gateway.this[0]: Creation complete after 1s [id=igw-062c767a421bc2a08]
module.vpc_2.aws_subnet.greenzone[2]: Creating...
module.vpc_2.aws_subnet.orangezone[0]: Creation complete after 1s [id=subnet-0e2861ab821b1d8c1]
module.vpc_2.aws_route.redzone_routes[0]: Creating...
module.vpc_2.aws_subnet.greenzone[0]: Creation complete after 1s [id=subnet-0f6a184a6194c4b18]
module.vpc_2.aws_subnet.orangezone[1]: Creation complete after 1s [id=subnet-0a4de4d2cdb9ac386]
module.vpc_2.aws_subnet.redzone[1]: Creation complete after 1s [id=subnet-04e823b792b057a37]
module.vpc_2.aws_subnet.redzone[0]: Creation complete after 1s [id=subnet-0c0b5edff4ce63f5f]
module.vpc_2.aws_route.redzone_routes[0]: Creation complete after 0s [id=r-rtb-0a9ae3ce6e748e1fb1080289494]
module.vpc_2.aws_subnet.orangezone[2]: Creation complete after 1s [id=subnet-0e79ee6a8fe320c1d]
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[0]: Creating...
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[2]: Creating...
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[1]: Creating...
module.vpc_2.aws_network_acl.orangezone_acl[0]: Creating...
module.vpc_2.aws_subnet.greenzone[1]: Creation complete after 1s [id=subnet-0af18d01a60a40d89]
module.vpc_2.aws_subnet.greenzone[2]: Creation complete after 0s [id=subnet-0ef96b1dbc8502dd2]
module.vpc_2.aws_subnet.redzone[2]: Creation complete after 1s [id=subnet-0399af8c8413b6442]
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[1]: Creation complete after 0s [id=rtbassoc-0ddd148402427ef83]
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[0]: Creating...
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[1]: Creating...
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[2]: Creating...
module.vpc_2.aws_network_acl.greenzone_acl[0]: Creating...
module.vpc_2.aws_route_table_association.redzone_rt_assoc[2]: Creating...
module.vpc_2.aws_network_acl.redzone_acl[0]: Creating...
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[2]: Creation complete after 0s [id=rtbassoc-0d628472b788edd9c]
module.vpc_2.aws_route_table_association.redzone_rt_assoc[0]: Creating...
module.vpc_2.aws_route_table_association.orangezone_rt_assoc[0]: Creation complete after 0s [id=rtbassoc-0e465ca49a7925aa9]
module.vpc_2.aws_route_table_association.redzone_rt_assoc[1]: Creating...
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[0]: Creation complete after 0s [id=rtbassoc-0b634c16670d7b97f]
module.vpc_2.aws_route_table_association.redzone_rt_assoc[0]: Creation complete after 0s [id=rtbassoc-033e25ff890cd56ca]
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[2]: Creation complete after 0s [id=rtbassoc-0448d3e592189ee6e]
module.vpc_2.aws_route_table_association.greenzone_rt_assoc[1]: Creation complete after 0s [id=rtbassoc-06eac58aea8239c1d]
module.vpc_2.aws_route_table_association.redzone_rt_assoc[2]: Creation complete after 0s [id=rtbassoc-043ff1a1b48065eef]
module.vpc_2.aws_route_table_association.redzone_rt_assoc[1]: Creation complete after 0s [id=rtbassoc-04fb5fb36e2ccac49]
module.vpc_2.aws_network_acl.orangezone_acl[0]: Creation complete after 1s [id=acl-03659f9f08074a7ef]
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[4]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[3]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[0]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[2]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_icmp_acl_rules[1]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_icmp_acl_rules[0]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[1]: Creating...
module.vpc_2.aws_network_acl.greenzone_acl[0]: Creation complete after 1s [id=acl-0f4f7616f881ceb2d]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[0]: Creating...
module.vpc_2.aws_network_acl.redzone_acl[0]: Creation complete after 1s [id=acl-024314de82fc0867c]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[10]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[2]: Creation complete after 1s [id=nacl-660260479]
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[1]: Creation complete after 1s [id=nacl-4171114845]
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[0]: Creation complete after 1s [id=nacl-907229149]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[1]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_icmp_acl_rules[1]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[7]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[3]: Creation complete after 1s [id=nacl-3019656016]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[9]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_icmp_acl_rules[1]: Creation complete after 1s [id=nacl-2477113553]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[6]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_acl_rules[4]: Creation complete after 1s [id=nacl-706620753]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[4]: Creating...
module.vpc_2.aws_network_acl_rule.orangezone_icmp_acl_rules[0]: Creation complete after 1s [id=nacl-3028775121]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[5]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[0]: Creation complete after 1s [id=nacl-1803361435]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[8]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[10]: Creation complete after 1s [id=nacl-1157739551]
module.vpc_2.aws_network_acl_rule.greenzone_icmp_acl_rules[0]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_icmp_acl_rules[1]: Creation complete after 0s [id=nacl-3469470615]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[9]: Creation complete after 0s [id=nacl-3251595137]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[3]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[2]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[6]: Creation complete after 0s [id=nacl-1338465814]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[4]: Creation complete after 0s [id=nacl-2269544964]
module.vpc_2.aws_network_acl_rule.redzone_acl_rules[1]: Creating...
module.vpc_2.aws_network_acl_rule.redzone_acl_rules[0]: Creating...
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[7]: Creation complete after 0s [id=nacl-4091009666]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[5]: Creation complete after 0s [id=nacl-2797786214]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[8]: Creation complete after 0s [id=nacl-844694850]
module.vpc_2.aws_network_acl_rule.greenzone_icmp_acl_rules[0]: Creation complete after 0s [id=nacl-728103519]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[1]: Creation complete after 0s [id=nacl-4272844199]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[2]: Creation complete after 0s [id=nacl-4040708340]
module.vpc_2.aws_network_acl_rule.greenzone_acl_rules[3]: Creation complete after 0s [id=nacl-1059107431]
module.vpc_2.aws_network_acl_rule.redzone_acl_rules[1]: Creation complete after 0s [id=nacl-1288191167]
module.vpc_2.aws_network_acl_rule.redzone_acl_rules[0]: Creation complete after 0s [id=nacl-2024006300]
module.transit_gw.aws_ec2_transit_gateway.this: Still creating... [10s elapsed]
module.transit_gw.aws_ec2_transit_gateway.this: Still creating... [20s elapsed]
module.transit_gw.aws_ec2_transit_gateway.this: Still creating... [30s elapsed]
module.transit_gw.aws_ec2_transit_gateway.this: Creation complete after 35s [id=tgw-0d28f1ce21aa5c028]
module.transit_gw.aws_ec2_transit_gateway_route_table.this: Creating...
module.transit_gw.aws_ec2_transit_gateway_route_table.this: Still creating... [10s elapsed]
module.transit_gw.aws_ec2_transit_gateway_route_table.this: Still creating... [20s elapsed]
module.transit_gw.aws_ec2_transit_gateway_route_table.this: Still creating... [30s elapsed]
module.transit_gw.aws_ec2_transit_gateway_route_table.this: Creation complete after 34s [id=tgw-rtb-0bf21fb249805d079]
Error: leftover module module.ap_southeast_1_vpc_2 in state that should have been removed; this is a bug in Terraform and should be reported
Steps to reproduce:
1. Create the VPC along with all of its details using a module named "ap_southeast_1_vpc_2"
2. Destroy the VPC "ap_southeast_1_vpc_2" by running "terraform plan -destroy -target module.ap_southeast_1_vpc_2 -out /tmp/vpc", then "terraform apply /tmp/vpc" (full sequence sketched below)
3. Recreate the VPC under the name "vpc_2"
4. Got the error "Error: leftover module module.ap_southeast_1_vpc_2 in state that should have been removed; this is a bug in Terraform and should be reported"
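For reference, the steps above correspond roughly to the following command sequence (a sketch based on this report; the module and plan-file names are the ones used above):
terraform plan -destroy -target module.ap_southeast_1_vpc_2 -out /tmp/vpc
terraform apply /tmp/vpc
# rename the module block to "vpc_2" in the configuration, then:
terraform plan -out /tmp/vpc
terraform apply /tmp/vpc   # fails with the leftover-module error above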
any updates on this?
getting this error while trying to apply a specific targeted resource.
@instigardo
What I do to resolve it manually for now is: pull the state file, manually edit it to remove all the branches of module.scylla_test, then upload it again.
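In command form, that manual fix looks roughly like this (a sketch of the process described above):
terraform state pull > state.json
# edit state.json: delete every object referencing module.scylla_test and bump the "serial" field
terraform state push state.json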
This appears to be a duplicate of / related to https://github.com/hashicorp/terraform/issues/21529 which was closed because it couldn't be reproduced.
I'm hitting the same message but for a different module, and I can reliably reproduce this with -target but cannot reproduce it without. Hope that helps.
I've hit this issue also. It seems to occur when you have previously done a destroy on one module and then do an apply -target on another.
Doing a no-op apply on the destroyed module seems to allow you to move past this error.
i.e. do an apply targeting the module from the error, like this: terraform apply -target module.ap_southeast_1_vpc_2
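A sketch of that workaround sequence, using the module names from this issue:
terraform apply -target module.ap_southeast_1_vpc_2   # no-op apply on the already-destroyed module
terraform apply -target module.vpc_2                  # the originally intended targeted apply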
After recently updating to 0.12.3, this issue is still happening on my end
I'm seeing this happening on almost every terraform apply run which destroys a resource. Every time it happens I need to manually remove the lingering resources from the state file for the error to go away. The lingering resources contain an empty instances array and appear several times throughout the state file.
edit: running 0.12.3
Also getting this issue when removing a module with terraform:
tf plan -refresh -out plan -var-file=inputs.tfvars -target=module.manager -destroy
tf apply plan
error:
Error: leftover module module.manager in state that should have been removed; this is a bug in Terraform and should be reported
There was no reference to any of the module's resources in tf state list, but every subsequent apply command would give me the same error.
Based on @gabrielqs' comment I pulled the state from the remote GCS bucket:
tf state pull > pulled-tfstate.json
I then found 7 references to my module module.manager, all with empty instances, e.g.:
{
  "module": "module.manager",
  "mode": "managed",
  "type": "google_project_iam_custom_role",
  "name": "app_bucket",
  "each": "list",
  "provider": "provider.google",
  "instances": []
}
I manually deleted all the JSON objects referencing the deleted module (and also increased the serial field by 1), then pushed the state file back to the bucket:
tf state push pulled-tfstate.json
Have just run a plan and apply and that has removed the error.
Seems like terraform is not fully removing state when using destroy plans?
❯ tf version
Terraform v0.12.3
+ provider.google v2.9.1
+ provider.null v2.1.2
+ provider.random v2.1.2
+ provider.template v2.1.2
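If you need to repeat that cleanup often, it can be scripted. The following is only a sketch of the manual process above (the jq filter is my own assumption, not anything validated here): it drops every module.manager resource with an empty instances array and bumps the serial.
terraform state pull > pulled-tfstate.json
jq '.serial += 1 | .resources |= map(select((.module == "module.manager" and (.instances | length) == 0) | not))' pulled-tfstate.json > cleaned-tfstate.json
terraform state push cleaned-tfstate.json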
Thanks for the detailed explanation @hawksight, that's exactly the process I have been following
Terraform v0.12.3
+ provider.aws v2.18.0
+ provider.cloudflare v1.16.1
+ provider.external v1.2.0
+ provider.template v2.1.2
I'm not sure if this will help with reproduction, but I'm getting into a mess. https://github.com/hashicorp/terraform/issues/21346 is about trouble moving resources into a module that does not yet exist in 0.12. Following the best hack-around there (https://github.com/hashicorp/terraform/issues/21346#issuecomment-501277350), I solved that problem, but I seem to be left with Error: leftover module module.security_groups_classic in state that should have been removed. I am following the instructions in that comment applied to our own code, and the result is consistent: I'm running this pattern across 9 AWS regions * 3 envs with the same code, and each one ends up with the leftover module.
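For context, the kind of state move involved looks roughly like this; the resource address here is purely illustrative, and the exact steps are the ones in the linked comment:
terraform state mv aws_security_group.classic module.security_groups_classic.aws_security_group.classic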
I think I've come up with a minimal reproduction case for this (using terraform v0.12.5). With the following config:
mod1/mod1.tf:
resource "null_resource" "module_1_resource" {}
mod2/mod2.tf:
resource "null_resource" "module_2_resource" {}
main.tf:
module "mod1" {
source = "./mod1"
}
module "mod2" {
source = "./mod2"
}
1. Run terraform apply - should be fine.
2. Remove the module "mod2" block in main.tf.
3. Run terraform apply -target module.mod1, which results in the error: Error: leftover module module.mod2 in state that should have been removed; this is a bug in Terraform and should be reported
I'd expect terraform to ignore mod2 in the statefile and leave it untouched - this is the behaviour with Terraform 0.11.
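As a shell transcript, the repro above is roughly:
terraform init
terraform apply                        # creates both module_1_resource and module_2_resource
# delete the module "mod2" block from main.tf, then:
terraform apply -target module.mod1
# Error: leftover module module.mod2 in state that should have been removed; this is a bug in Terraform and should be reported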
Yeah, the reproduction case by @alext matches what I'm seeing in my case too: with TF 0.12.8 I'm doing a terraform apply -target=module.foo when the state contains a resource added by a colleague in his own branch (so my branch would try to remove it without the -target). If the state is "clean", it doesn't happen.
Fixed by https://github.com/hashicorp/terraform/pull/22811 and will be released with 0.12.11
What if the code has been removed as well, but we'd like the state to remain as is in a targeted run?
I'm trying to share the terraform state between a poly-repo setup of terraform projects, where each project sets up its own terraform modules and runs a targeted plan/apply. It works nicely other than the error/warning about leftover modules, which is technically correct, but in a poly-repo world is almost inevitable.
[if this use-case is considered different from the original reported one I can create a new issue]
@axelthimm If you do have a current issue, absolutely open another one [in particular, because closed issues are eventually locked]! But testing out what I _think_ you're asking for, it seems like 0.12.12 (current release) does what you're asking -- if you run with -target but module code has been deleted, the state for that module will remain untouched, at least from my experiments.
The current warning for -target remains true though, and we would not recommend a workflow that depends on it, and you will continue to see it:
The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.
So we'd also be interested in learning more about your workflow (if you can codify anything into a specific issue), such that you could do it _without_ -target.
The use case is having split the modules into several repositories, but still needing to go through a common terraform state as the target infrastructure is the same. E.g. say an Azure setup with a separate repo for proxies, another for nexus, another for gitlab-runners etc. They all depend on some common resources which are reflected in base modules always present (via git submodules) and of course a common tfstate file or a state backend.
Indeed the run does work on 0.12.12, but (!) it is not a warning, it is a failure. So all our IaC is blocked at the moment unless we generally ignore return codes. So my plea would be to make it a warning.
On the topic of not supporting this workflow - of course I have read this, but I'm trying to fit terraform into a non-monorepo design. Maybe I'm using the wrong methodology, but keeping all these tiny projects (tiny from a terraform PoV) separated with their own state introduces too many data sources, which then also need concurrent maintenance should the master resource require changes.
Instead, by having access to the master resource in the terraform state, one can pretend to be in a monorepo and write proper terraform code (which then also works when merging all projects together, which is a method we also use).
So there are use cases which benefit from a common terraform state, but which can no longer see the module source directly.
I use -target just so that terraform does not delete states that are not described in the current project. Within such a project, -target actually addresses the whole project. But I do not want project X to wipe project Y's resources just because they are sharing the state.
I hope I made the use case somewhat clear.
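A rough sketch of the per-project workflow described above (the backend-config file name and module name are hypothetical examples, not from this thread):
# in the repo that owns the proxies, sharing the common state backend:
terraform init -backend-config=shared-backend.hcl
terraform plan -target=module.proxies -out plan
terraform apply plan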
Indeed the run does work on 0.12.12, but (!) it is not a warning, it is a failure. So all our IaC is blocked at the moment unless we generally ignore return codes. So my plea would be to make it a warning.
Considering this, would you mind opening the comment above in a new issue?
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.