Terraform v0.13.2
+ provider registry.terraform.io/hashicorp/aws v3.6.0
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.1.2
+ provider registry.terraform.io/hashicorp/tls v2.2.0
Outside the module:
module "some_app" {
source = "../../modules/ecs-app"
name = "${var.app_name}-${var.environment}-${var.some_app_name}"
container_name = var.some_app_name
family_name = module.ecs.task_definition_family_name
execution_role_arn = module.ecs.task_execution_role_arn
task_role_arn = module.ecs.task_role_arn
cw_log_group_name = module.ecs.cw_log_group_name
environment = [...]
secrets = [...]
tags = {
Managed = "Terraform"
Environment = var.environment
Project = var.app_name
}
}
Inside the module:
data "template_file" "container_definition" {
template = file("${path.module}/templates/container-definition.tmpl")
vars = {
log_group_name = var.cw_log_group_name
aws_region = data.aws_region.main.name
network_mode = var.network_mode
host_port = var.host_port
container_name = var.container_name
environment = jsonencode(var.environment)
secrets = jsonencode(var.secrets)
}
}
data "template_file" "taskdef" {
template = file("${path.module}/templates/taskdef.tmpl")
vars = {
cpu = var.cpu
memory = var.memory
execution_role_arn = var.execution_role_arn
task_role_arn = var.task_role_arn
family = var.family_name
network_mode = var.network_mode
launch_types = jsonencode(var.launch_types)
environment = jsonencode(var.environment)
secrets = jsonencode(var.secrets)
main_container = chomp(data.template_file.container_definition.rendered)
}
}
terraform plan/apply should not re-read the template_file data source when there is nothing to change, but instead the plan shows:
# module.some_app.data.template_file.taskdef will be read during apply
# (config refers to values not yet known)
<= data "template_file" "taskdef" {
+ id = "122c9d***12fb91"
+ rendered = <<~EOT
{
....
}
EOT
+ template = <<~EOT
{
....
}
EOT
+ vars = {
+ "main_container" = <<~EOT
{
....
}
EOT
}
}
terraform init
terraform plan
I recently upgraded Terraform to 0.13.2. Previously I used 0.12.28, but the template provider version was the same.
With the -refresh=false flag, the template_file data source is not read.
The template_file data source is not re-read during plan if it is outside a module.
I saw a similar issue (https://github.com/hashicorp/terraform/issues/26100), but there the data sources depend on count.
Hi @kostiukolex, thanks for reporting this.
I'm not able to reproduce the issue you're seeing. Here's the configuration I'm using:
main.tf:
module "hello-alisdair" {
source = "./hello"
name = "alisdair"
}
output "hello" {
value = module.hello-alisdair.greeting
}
hello/main.tf:
variable "name" {
type = string
}
data "template_file" "hello" {
template = file("${path.module}/hello.tpl")
vars = {
name = var.name
}
}
output "greeting" {
value = data.template_file.hello.rendered
}
hello/hello.tpl:
Hello, ${name}!
Applying the configuration creates the correct output:
$ terraform-0.13.3 apply -auto-approve
module.hello-alisdair.data.template_file.hello: Refreshing state...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
hello = Hello, alisdair!
Plan does not reread the template_file data:
$ terraform-0.13.3 plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
module.hello-alisdair.data.template_file.hello: Refreshing state... [id=d0c1be373e8b11ba8f69ded55954fcfba70daae3c8f59512267e590ca4d9b546]
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Can you adjust this simple configuration to cause the problem to reappear, so that we can see what is causing the issue?
Hi @alisdair, thanks for the answer.
I found out why this happens on my side.
I have a module which creates an ECS task definition from variables that are outputs of other Terraform modules (a DB endpoint, for example).
main.tf:
module "test" {
source = "./test"
container_name = "test"
environment = [
{
name = "ENV",
value = var.environment
},
{
name = "RDS_PORT",
value = module.db.this_db_instance_port
}
]
}
rds.tf:
resource "random_password" "db" {
length = 16
special = false
}
resource "random_string" "db_fsi" {
length = 8
number = false
special = false
upper = false
keepers = {
snapshot_identifier = var.rds_snapshot_identifier
}
}
module "db" {
source = "terraform-aws-modules/rds/aws"
version = "~> 2.0"
identifier = "${var.app_name}-${var.environment}"
license_model = "license-included"
engine = var.rds_engine
engine_version = var.rds_engine_version
major_engine_version = var.rds_major_engine_version
instance_class = var.rds_instance_class
allocated_storage = var.rds_allocated_storage
max_allocated_storage = var.rds_max_allocated_storage
vpc_security_group_ids = [module.db_sg.this_security_group_id]
subnet_ids = module.vpc.database_subnets
username = var.db_user
password = random_password.db.result
port = var.db_port
maintenance_window = var.rds_maintenance_window
backup_window = var.rds_backup_window
backup_retention_period = var.backup_retention_period
final_snapshot_identifier = "${var.app_name}-${var.environment}-${random_string.db_fsi.result}"
snapshot_identifier = var.rds_snapshot_identifier
enabled_cloudwatch_logs_exports = ["error"]
performance_insights_enabled = var.performance_insights_enabled
publicly_accessible = var.rds_publicly_accessible
create_db_subnet_group = true
create_db_option_group = true
family = var.rds_family
options = [
{
option_name = "SQLSERVER_BACKUP_RESTORE"
option_settings = [
{
name = "IAM_ROLE_ARN"
value = aws_iam_role.rds.arn
}
]
}
]
tags = {
Managed = "Terraform"
Environment = var.environment
Project = var.app_name
}
}
test/main.tf:
data "template_file" "container_definition" {
template = file("${path.module}/templates/container-definition.tmpl")
vars = {
container_name = var.container_name
environment = jsonencode(var.environment)
}
}
output "container" {
value = data.template_file.container_definition.rendered
}
test/templates/container-definition.tmpl:
{
"name": "${container_name}",
"environment" : ${environment}
}
$ terraform plan
...
# module.test.data.template_file.container_definition will be read during apply
# (config refers to values not yet known)
<= data "template_file" "container_definition" {
+ id = "81a66a44772c6bd892d55bb03500e07bc1634719b5a775f36026a03b4cc2890c"
+ rendered = jsonencode(
{
+ environment = [
+ {
+ name = "ENV"
+ value = "development"
},
+ {
+ name = "RDS_PORT"
+ value = "1433"
},
]
+ name = "test"
}
)
+ template = <<~EOT
{
"name": "${container_name}",
"environment" : ${environment}
}
EOT
+ vars = {
+ "container_name" = "test"
+ "environment" = jsonencode(
[
+ {
+ name = "ENV"
+ value = "development"
},
+ {
+ name = "RDS_PORT"
+ value = "1433"
},
]
)
}
}
$ terraform apply -auto-approve
...
module.test.data.template_file.container_definition: Reading...
module.test.data.template_file.container_definition: Read complete after 0s [id=81a66a44772c6bd892d55bb03500e07bc1634719b5a775f36026a03b4cc2890c]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
test = {
"name": "test",
"environment" : [{"name":"ENV","value":"development"},{"name":"RDS_PORT","value":"1433"}]
}
Besides that, I use the aws_acm_certificate data source, whose id changes on every plan/apply in Terraform 0.13. This causes the template file to be re-read in some modules:
# data.aws_acm_certificate.main will be read during apply
# (config refers to values not yet known)
<= data "aws_acm_certificate" "main" {
arn = "arn:aws:acm:us-east-1:***:certificate/***"
domain = "example.dev"
~ id = "2020-09-21 10:02:54.865358 +0000 UTC" -> "2020-09-21 10:03:53.8825448 +0000 UTC"
most_recent = false
statuses = [
"ISSUED",
]
tags = {
"Name" = "example.dev"
}
}
While I still can't reproduce this with the incomplete configuration, I think I understand what's happening now. Terraform is detecting possible changes in the template_file data source's configuration because it depends on a module output; if any input to the data source might change, it is read again. Does this make sense in your case?
For a similar issue, which we believe will be addressed as part of the 0.14 release, see #26316.
I also want to point out that the template provider is deprecated: the same functionality has been available since Terraform 0.12 via the built-in templatefile function. Migrating to that function might help here.
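As a minimal sketch of that migration, reusing the container-definition template shown earlier in this thread (the local name is illustrative):

```hcl
# Before: deprecated template provider
# data "template_file" "container_definition" { ... }

# After: the built-in templatefile function, evaluated inline with no
# data source in the graph, so there is nothing to defer or re-read.
locals {
  container_definition = templatefile(
    "${path.module}/templates/container-definition.tmpl",
    {
      container_name = var.container_name
      environment    = jsonencode(var.environment)
    }
  )
}

output "container" {
  value = local.container_definition
}
```

References such as data.template_file.container_definition.rendered then become local.container_definition.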
Hi @alisdair,
I rewrote my modules to use the templatefile function instead of the template_file data source, and it helps! Thanks a lot.
But I still don't understand why this happens, because 0.12 did not re-read the template resources.
We made some changes as part of 0.13 regarding refresh during plan, which results in more data sources being refreshed than in 0.12. This would normally be a non-issue, but for some data sources which always result in changes on read, it has caused some understandable confusion.
As I alluded to earlier, refresh and plan are being worked on as part of 0.14, which should improve this for many situations.
Since it seems like you're unblocked with the templatefile function, and I'm not able to reproduce this bug with other data sources, I'm going to close this issue for now. If I've misunderstood or you find a minimal reproduction, please do reply and it can always be reopened. Thanks again for reporting!
@alisdair I am not using the template data source, just AWS provider data sources, but I'm getting a very similar issue after upgrading to 0.13. It seems the issue occurs when a data source returns a transient id field.
Input inside module definition:
data "aws_availability_zones" "all_azs" {
}
Output:
# module.vpc_prodca.data.aws_availability_zones.all_azs will be read during apply
# (config refers to values not yet known)
<= data "aws_availability_zones" "all_azs" {
group_names = [
"ca-central-1",
]
~ id = "2020-09-29 21:19:14.2121098 +0000 UTC" -> "2020-09-29 21:19:22.9993714 +0000 UTC"
names = [
"ca-central-1a",
"ca-central-1b",
"ca-central-1d",
]
zone_ids = [
"cac1-az1",
"cac1-az2",
"cac1-az4",
]
}
If I'm doing something suboptimal, or there's a better way to get the list of AZs that doesn't cause the issue, I'm happy to use it.
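One possible workaround (a suggestion, not from this thread): if the set of usable AZs is stable for the account, look them up once and pin the names as an input variable, so no data source with a transient id sits in the module's dependency graph:

```hcl
# Hypothetical workaround: pin the AZ names instead of querying them on
# every plan. The default list here is illustrative (ca-central-1).
variable "azs" {
  description = "Availability zone names, looked up once and then pinned"
  type        = list(string)
  default     = ["ca-central-1a", "ca-central-1b", "ca-central-1d"]
}
```

The trade-off is that the list must be updated by hand if the region ever gains or loses zones.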
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.