Terraform: Changed resource does not trigger changes in depended module in same run

Created on 21 Nov 2017 · 7 Comments · Source: hashicorp/terraform

Terraform Version

Terraform v0.11.0

Terraform Configuration Files

component/main.tf

terraform {}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "test" {
  bucket = "terraform-issue-example"
  acl    = "private"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "test" {
  bucket  = "${aws_s3_bucket.test.bucket}"
  key     = "test.txt"
  content = "test2"
}

module "assets_bucket" {
  source     = "../module/"
  s3_version = "${aws_s3_bucket_object.test.version_id}"
}

module/main.tf

variable "s3_version" {
  type = "string"
}

data "template_file" "test" {
  template = "${file("${path.module}/template.txt")}"

  vars {
    s3_version = "${var.s3_version}"
  }
}

resource "local_file" "test_result" {
  content  = "${data.template_file.test.rendered}"
  filename = "${path.module}/test_result.txt"
}

module/template.txt

s3_version "${s3_version}" 

Expected Behavior

Terraform should first change the aws_s3_bucket_object and then change the module that writes the new aws_s3_bucket_object's version to the example file. In the dependency graph, the module should depend on aws_s3_bucket_object and be re-evaluated once it has changed. I should only need to run terraform apply once to get those changes applied.

Actual Behavior

I have to run Terraform twice.
First run:

  ~ aws_s3_bucket_object.test
      content: "test1" => "test2"

Second run:

-/+ module.assets_bucket.local_file.test_result (new resource required)

Steps to Reproduce

  1. terraform init
  2. terraform apply
  3. Change the content of aws_s3_bucket_object, e.g. content = "XXX"
  4. terraform apply
  5. terraform apply
Labels: bug, core, v0.11

All 7 comments

Hi @stephanlindauer,

Thanks for filing this with the full reproduction case.
The issue here isn't related to the module or variables at all; it has to do with a data source depending on the computed output of a managed resource. The following reduced config would exhibit the same behavior, requiring an extra apply to change the output:

resource "aws_s3_bucket_object" "test" {
  bucket  = "bucket_name"
  key     = "test.txt"
  content = "test1"
}

data "template_file" "test" {
  template = "${file("${path.module}/template.txt")}"
  vars {
    s3_version = "${aws_s3_bucket_object.test.version_id}"
  }
}

output "out" {
  value = "${data.template_file.test.rendered}"
}

Hi @jbardin,
Thanks for your response! Okay, makes sense.
I also wouldn't expect a data source that is retrieving external data to be re-evaluated. However, in this case the data is already there locally (sooo close :wink:), so I guess I expected it to be refreshed. I now understand how this is something that would not work with the way data sources are built.
How can I work around this, though? depends_on doesn't really help here and I cannot think of any other workaround ATM.

Actually, depends_on sort of helps: it will get you the correct output in one try, but then you're stuck with a perpetual diff when running plan again, so it's not much better overall. Interestingly, even a managed resource like null_resource isn't triggered by the computed version_id, so there may be something else amiss too.
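For reference, a minimal sketch of that depends_on variant, based on the reduced config above (the explicit dependency is the only addition):

data "template_file" "test" {
  template = "${file("${path.module}/template.txt")}"

  vars {
    s3_version = "${aws_s3_bucket_object.test.version_id}"
  }

  # Assumed addition: defers the read until the object is updated, which gets
  # the correct output in one apply but also re-reads it on every later plan,
  # hence the perpetual diff mentioned above.
  depends_on = ["aws_s3_bucket_object.test"]
}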

I'm not sure there is a good workaround at the moment. Depending on your exact use case, you may be able to put something together with a null_resource and a local-exec provisioner. Maybe something like:

resource "null_resource" "result" {
  triggers {
    version = "${aws_s3_bucket_object.test.version_id}"
    content = "${aws_s3_bucket_object.test.content}"
  }

  provisioner "local-exec" {
    command = "echo s3_version ${aws_s3_bucket_object.test.version_id} > test_result.txt"
  }
}
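The idea with this sketch is that the content trigger forces the null_resource to be replaced whenever the object's content changes (per the note above, the computed version_id alone doesn't seem to trigger it), and the local-exec provisioner then rewrites test_result.txt during the same apply.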

This is a really good example for the data points I'm collecting to try and tackle various data source issues in the near future. The data source thinks it has the data available when it's refreshed, but it can't know that it has changed until after the diff is run, which is a later step.

Thanks!

The quirky behavior here would be addressed by the lifecycle change proposed in #17034.

This seems like a pretty major bug in Terraform. We just had a failed production release because of this.
So much for "reproducible infrastructure"...

Can we get some traction on this?

Hello! :robot:

This issue seems to be covering the same problem or request as #17034, so we're going to close it just to consolidate the discussion over there. Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
