Terraform version:
Terraform v0.11.7
+ provider.external v1.0.0
+ provider.google v1.13.0
+ provider.null v1.0.0
+ provider.random v1.3.1
Hello, I'm having trouble with a module; it does the following:
resource "google_project" "project" {
name = "${var.project_name}"
project_id = "${local.temp_project_id}"
org_id = "${var.organization_id}"
folder_id = "${var.folder_id}"
billing_account = "${var.billing_account}"
labels = "${var.project_labels}"
}
resource "null_resource" "get_repo" {
count = "${var.download_repo == true ? 1 : 0}"
provisioner "local-exec" {
command = "git clone --single-branch -b ${var.repo_b} ${var.repo_url}"
}
depends_on = ["google_project.project"]
}
resource "null_resource" "execute_command" {
provisioner "local-exec" {
command = "echo ${google_project.project.project_id}"
}
depends_on = ["null_resource.get_repo"]
}
data "external" "bucket_retrieval" {
program = ["bash", "${path.module}/scripts/get-project-buckets.sh", "${var.credentials_file_path}"]
depends_on = ["null_resource.execute_command"]
}
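The get-project-buckets.sh script itself isn't shown here; the external data source only requires that the program print a single flat JSON object of string values on stdout. A rough sketch of such a script, assuming the Cloud SDK is installed and the first argument is a service-account key file (this is an illustration, not the original script), might look like:

#!/usr/bin/env bash
# Sketch only -- not the original get-project-buckets.sh.
# Assumes gcloud/gsutil are installed and $1 is a service-account key file
# with access to the project's buckets.
set -euo pipefail

CREDENTIALS_FILE="$1"

# Authenticate with the supplied key (assumption about how credentials are used).
gcloud auth activate-service-account --key-file="$CREDENTIALS_FILE" >/dev/null 2>&1

# Join all bucket names into one space-separated string.
BUCKETS="$(gsutil ls 2>/dev/null | sed -e 's|^gs://||' -e 's|/$||' | tr '\n' ' ')"

# The external data source expects a flat JSON object of string values on stdout.
printf '{"buckets_list": "%s"}\n' "${BUCKETS:- }"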
output "project_id" {
value = "${google_project.project.project_id}"
description = "The project's id"
}
output "buckets_list" {
value = "${compact(split(" ", data.external.bucket_retrieval.result["buckets_list"]))}"
description = "The buckets list within the created project"
}
output "project_id" {
value = "${module.install-simple.project_id}"
}
output "buckets_list" {
value = "${module.install-simple.buckets_list}"
}
The terraform plan and terraform apply steps work well, but terraform destroy fails the first time. The output is:

[root@localhost simple]# terraform destroy -force -var-file=variables.tfvariables
random_id.random_project_id_suffix: Refreshing state... (ID: lf0)
google_project.project: Refreshing state... (ID: terraform-test-95fd)
null_resource.execute_command: Refreshing state... (ID: 8745349763456357012)
module.install-simple.null_resource.execute_command: Destroying... (ID: 8745349763456357012)
module.install-simple.null_resource.execute_command: Destruction complete after 0s
module.install-simple.google_project.project: Destroying... (ID: terraform-test-95fd)
module.install-simple.google_project.project: Destruction complete after 3s
module.install-simple.random_id.random_project_id_suffix: Destroying... (ID: lf0)
module.install-simple.random_id.random_project_id_suffix: Destruction complete after 0s
Error: Error applying plan:
1 error(s) occurred:
* module.install-simple.output.buckets_list: Resource 'data.external.bucket_retrieval' does not have attribute 'result' for variable 'data.external.bucket_retrieval.result'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
When I then run terraform destroy again, the output changes but the error remains:
Error: Error applying plan:
2 error(s) occurred:
* module.install-simple.output.project_id: variable "project" is nil, but no error was reported
* module.install-simple.output.buckets_list: variable "bucket_retrieval" is nil, but no error was reported
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
For reference, the get-project-buckets.sh script prints JSON like the following, the first form when the project has no buckets and the second when it does:

{
  "buckets_list": " "
}

{
  "buckets_list": "bucket1 bucket2 bucket3"
}
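Incidentally, the compact(split(...)) wrapping in the buckets_list output is what turns that space-separated string into a Terraform list. A minimal illustration (the local names here are made up for the example, not part of the original configuration):

locals {
  # split(" ", " ") yields ["", ""]; compact() then drops the empty strings -> []
  no_buckets   = "${compact(split(" ", " "))}"

  # A populated string splits into one element per bucket name.
  some_buckets = "${compact(split(" ", "bucket1 bucket2 bucket3"))}"
}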
Thanks!
It looks like depending on an external data source will cause errors during destroy. Could we make a simple test case to demonstrate this?
Sure. The module directory (./module) contains:
resource "null_resource" "execute_command" {
provisioner "local-exec" {
command = "echo hi"
}
}
data "external" "external" {
program = ["jq", "-n", "{\"buckets_list\" : \" \"}"]
depends_on = ["null_resource.execute_command"]
}
output "output" {
value = "${data.external.external.result["buckets_list"]}"
}
module "module" {
source = "module"
}
output "output_f_module" {
value = "${module.module.output}"
}
[root@localhost bug]# terraform destroy -force
null_resource.execute_command: Refreshing state... (ID: 2995243002782748572)
module.module.null_resource.execute_command: Destroying... (ID: 2995243002782748572)
module.module.null_resource.execute_command: Destruction complete after 0s
Error: Error applying plan:
1 error(s) occurred:
* module.module.output.output: Resource 'data.external.external' does not have attribute 'result' for variable 'data.external.external.result'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
[root@localhost bug]# terraform destroy -force
Error: Error applying plan:
1 error(s) occurred:
* module.module.output.output: variable "external" is nil, but no error was reported
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Hi @angelzamir,
Thanks for filing the issue. This is essentially the same as #17862, but I'll leave this open as a reminder that it applies to variables too.
Just encountered this as well; to add to the report:
Versions:
Terraform v0.11.7
+ provider.google v1.16.0
Errors:
* module.foobar.module.deployment-project.output.ci-service-account-email: variable "ci_service_account" is nil, but no error was reported
* module.foobar.output.folder: variable "folder" is nil, but no error was reported
* module.foobar.module.deployment-project.output.terraform-service-account-email: variable "terraform_service_account" is nil, but no error was reported
Running into this as well, when destroying an AWS S3 bucket (which was also created by Terraform).
Versions:
Terraform v0.11.7
- provider.aws v1.28.0
The bucket is created in a module, with SSE, a couple of locals for the bucket name and the SSE kind, and a bucket policy. Creation works as expected; destruction fails every time with the same error as above:
- module.x_bucket.output.arn: variable "b" is nil, but no error was reported
- module.y_bucket.output.arn: variable "b" is nil, but no error was reported
Even though destruction fails, the buckets are actually gone from AWS, so it does work as intended; it just errors out for no apparent reason.
This version is affected as well:
$ terraform -v
Terraform v0.11.8
+ provider.google v1.18.0
Still present in v0.11.10.
While export TF_WARN_OUTPUT_ERRORS=1 does suppress this bug, it'd be nice to have this issue addressed.
Without suppression
➜ K8SonAWS git:(master) ✗ terraform destroy --force
data.aws_availability_zones.azones: Refreshing state...
data.aws_region.region: Refreshing state...
data.aws_iam_policy_document.eks_wn_role_policy: Refreshing state...
data.aws_ami.eks_ami: Refreshing state...
data.aws_iam_policy_document.eks_mc_role_policy: Refreshing state...
Error: Error applying plan:
2 error(s) occurred:
* module.eks_cluster.output.certificate_authority_data: variable "ec" is nil, but no error was reported
* module.eks_cluster.output.endpoint: variable "ec" is nil, but no error was reported
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
➜ K8SonAWS git:(master) ✗ terraform --version
Terraform v0.11.10
+ provider.aws v1.45.0
+ provider.local v1.1.0
➜ K8SonAWS git:(master) ✗
With suppression
➜ K8SonAWS git:(master) ✗ export TF_WARN_OUTPUT_ERRORS=1
➜ K8SonAWS git:(master) ✗ terraform --version
Terraform v0.11.10
+ provider.aws v1.45.0
+ provider.local v1.1.0
➜ K8SonAWS git:(master) ✗ terraform destroy --force
data.aws_availability_zones.azones: Refreshing state...
data.aws_region.region: Refreshing state...
data.aws_ami.eks_ami: Refreshing state...
data.aws_iam_policy_document.eks_wn_role_policy: Refreshing state...
data.aws_iam_policy_document.eks_mc_role_policy: Refreshing state...
Destroy complete! Resources: 0 destroyed.
➜ K8SonAWS git:(master) ✗
Thanks, @aaomoware. TF_WARN_OUTPUT_ERRORS=1 gets rid of the warning and destroy is successful.
Yes, this bug exists in v0.11.11. The workaround of setting export TF_WARN_OUTPUT_ERRORS=1 helps.
This also happened in v0.11.14. I would not use exporting TF_WARN_OUTPUT_ERRORS as a proper fix. To be honest, the error message is also very unhelpful (an error that says "no error was reported"?!), and not being able to destroy an empty Terraform state is really bad:
ubuntu@ip-172-31-15-42:~/temp$ terraform state list
ubuntu@ip-172-31-15-42:~/temp$ terraform destroy -force
Error: Error applying plan:
1 error occurred:
* module.test.output.customer_sec_group: variable "customer_security_group" is nil, but no error was reported
I verified that once updated to 0.12, the examples here no longer present an error.
Since there is no further development for 0.11 releases, we can close this out.
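For anyone upgrading, 0.12 also drops the interpolation-only "${...}" quoting, so the problematic module output from the original report would be written roughly as follows (an illustrative translation, not taken from the reporter's upgraded configuration):

output "buckets_list" {
  # 0.12 expression syntax: no surrounding "${...}" needed.
  value       = compact(split(" ", data.external.bucket_retrieval.result["buckets_list"]))
  description = "The buckets list within the created project"
}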
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.