Hi,
I have some configuration inside a module, and some configuration outside the module that creates an instance of the module and uses its output map to create another resource. I ran into an issue: if I add another resource inside the module, along with a new entry in the output map referencing that new resource, then everything outside the module that references the output map gets recreated.
Terraform v0.11.1
main.tf
terraform {
  backend "s3" {}
}

variable "region" {
  description = "AWS region to host your network"
}

variable "statebucket" {
  description = "State Bucket"
}

provider "aws" {
  region = "${var.region}"
}

module "test" {
  source      = "module"
  statebucket = "${var.statebucket}"
  region      = "${var.region}"
}

resource "aws_s3_bucket" "test" {
  bucket        = "test-${module.test.outputs["region"]}-deployment"
  force_destroy = true
}
module/main.tf
variable "region" {
description = "AWS region to host your network"
}
variable "statebucket" {
description = "State Bucket"
}
resource "aws_s3_bucket" "test" {
bucket = "test-${var.region}-module-bucket"
force_destroy = true
}
/* added after the other 2 buckets get created along with bucket_id output*/
resource "aws_s3_bucket" "test_2" {
bucket = "test-${var.region}-module-bucket-2"
force_destroy = true
}
output "outputs" {
value = "${
map(
"region", "${var.region}",
"bucket_id", "${aws_s3_bucket.test_2.id}",
)
}"
}
Adding the configuration for the test_2 bucket resource and the bucket_id output in the module shouldn't have caused any other configuration changes. The plan should only show that it will add the new bucket.
Terraform wants to recreate the resource outside of the module that uses the module's output map even though the values that it uses from the map haven't changed.
Terraform will perform the following actions:

-/+ aws_s3_bucket.test (new resource required)
      id:                  "test-us-east-1-deployment" => <computed> (forces new resource)
      acceleration_status: "" => <computed>
      acl:                 "private" => "private"
      arn:                 "arn:aws:s3:::test-us-east-1-deployment" => <computed>
      bucket:              "test-us-east-1-deployment" => "test-${module.test.outputs[\"region\"]}-deployment" (forces new resource)
      bucket_domain_name:  "test-us-east-1-deployment.s3.amazonaws.com" => <computed>
      force_destroy:       "true" => "true"
      hosted_zone_id:      "ABCDEFGHIJKLMN" => <computed>
      region:              "us-east-1" => <computed>
      request_payer:       "BucketOwner" => <computed>
      versioning.#:        "1" => <computed>
      website_domain:      "" => <computed>
      website_endpoint:    "" => <computed>

  + module.test.aws_s3_bucket.test_2
      id:                  <computed>
      acceleration_status: <computed>
      acl:                 "private"
      arn:                 <computed>
      bucket:              "test-us-east-1-module-bucket-2"
      bucket_domain_name:  <computed>
      force_destroy:       "true"
      hosted_zone_id:      <computed>
      region:              <computed>
      request_payer:       <computed>
      versioning.#:        <computed>
      website_domain:      <computed>
      website_endpoint:    <computed>

Plan: 2 to add, 0 to change, 1 to destroy.
Hi @carlapcastro! Sorry for this frustrating behavior.
The root cause here is a limitation of how Terraform handles <computed> values. Specifically:
map(
  "region", "${var.region}",
  "bucket_id", "${aws_s3_bucket.test_2.id}",
)
As currently implemented, passing a <computed> value to a function skips evaluating the function and immediately returns <computed>. That's reasonable behavior for most functions, but it isn't right for map, because map just copies those values directly into the map it returns rather than doing any computation based on them.
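Roughly, the chain of evaluation at plan time then looks like this (an illustrative trace of the v0.11 behavior, not real Terraform output):

# aws_s3_bucket.test_2.id is <computed> because the bucket doesn't exist yet, so:
#   map("region", "us-east-1", "bucket_id", <computed>)  => <computed>  (the whole map, not just one key)
#   module.test.outputs["region"]                        => <computed>  (lookup into an unknown map)
#   "test-${module.test.outputs["region"]}-deployment"   => <computed>  (forces new resource)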
We're currently working on integrating an improved version of the configuration language interpreter that has a better syntax for maps that also addresses this limitation:
# (not yet implemented, and details may change before release)
output "outputs" {
  value = {
    region    = var.region
    bucket_id = aws_s3_bucket.test_2.id
  }
}
Since this will be a native part of the language, rather than just a function, we can ensure that any computed values remain isolated to the individual map elements they belong to. This new implementation will be coming in a future major release of Terraform.
In the meantime, we usually recommend keeping outputs "flat", using multiple simple outputs rather than complex values:
output "region" {
value = "${var.region}"
}
output "bucket_id" {
value = "${aws_s3_bucket.test_2.id}"
}
This keeps the two values distinct and prevents the special computed-value behavior from interfering. If you _do_ need to keep these combined into a single value, then another workaround is to use -target to ask Terraform to work on the new S3 bucket in isolation first:
$ terraform apply -target=module.test.aws_s3_bucket.test_2
Once the new bucket has been created, re-running Terraform normally (without -target) should then get the desired effect, because the bucket ID will already be known and the map function can therefore be executed as expected.
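That second step is just a normal, untargeted run:

$ terraform apply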
The work that will fix this is the current focus for the Terraform team. Sorry again for this weird behavior!
Hi @apparentlymart thanks so much for the response and I'm happy to see it will be fixed in the next major release!
Hi again, @carlapcastro! Sorry for the long silence.
I just confirmed that this is now working as expected in the v0.12.0-alpha1 prerelease build, by using a modified version of your configuration that uses null_resource instead of aws_s3_bucket just to allow me to get set up more quickly:
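For reference, here's a sketch of that modified configuration; the exact files weren't posted, so the names are inferred from the plan output below:

# main.tf (root module; stands in for the original aws_s3_bucket.test)
resource "null_resource" "test" {
  triggers = {
    bucket = "test-${module.test.outputs["region"]}-deployment"
  }
}

# module/main.tf (stands in for the original aws_s3_bucket.test)
resource "null_resource" "test" {
}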
The first apply created the initial set of objects, as before:
$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.test will be created
  + resource "null_resource" "test" {
      + id       = (known after apply)
      + triggers = {
          + "bucket" = "test-us-west-2-deployment"
        }
    }

  # module.test.null_resource.test will be created
  + resource "null_resource" "test" {
      + id = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

...
Then I added in the new resource in the child module and the new key in the output map, which I updated to the new syntax I mentioned before:
output "outputs" {
value = {
region = var.region
bucket_id = null_resource.test_2.id
}
}
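The corresponding resource added in the child module is just another empty null_resource (again a sketch, since it wasn't posted):

# module/main.tf (stands in for the original aws_s3_bucket.test_2)
resource "null_resource" "test_2" {
}

Running apply again then produces only the expected addition: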
$ terraform apply

module.test.null_resource.test: Refreshing state... [id=4673292035955842271]
null_resource.test: Refreshing state... [id=7306561915504961895]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test.null_resource.test_2 will be created
  + resource "null_resource" "test_2" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

...
This fix is in the master branch ready to be included in the forthcoming v0.12.0 final release, so I'm going to close this out. Thanks for reporting this, and thanks for your patience while we laid the groundwork to fix this.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.