Terraform-provider-aws: Terraform detects change when there is no change due to template_file

Created on 19 Jun 2019 · 5 comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @thtran101 as hashicorp/terraform#21789. It was migrated here as a result of the provider split. The original body of the issue is below._


I use Terraform to manage a serverless architecture on AWS. After migrating from v0.11.x to Terraform v0.12.2, I've noticed "false positive" diffs when running plan/apply: a change is reported, but it is not actually applied when the plan is approved. The problem revolves around the use of template_file resources. It seems there is a difference in how (or when) template files are rendered and evaluated against current state.

The following are my TF specs.

Terraform v0.12.2

  • provider.aws v2.15.0
  • provider.null v2.1.2
  • provider.template v2.1.2

I've put together as concise an example as possible for reproducing the behavior. In the example below the template file is used for a resource policy, but I have the same problem occurring with state machine (Step Functions) definitions that use template files.

resource "aws_lambda_function" "test" {
  function_name = "test-delete-me"

  filename = "code-deployments/test.zip"
  handler  = "index.handler"
  runtime  = "nodejs10.x"

  // use any existing IAM role compatible w/ lambda to reproduce error
  role = aws_iam_role.lambda_basic_execution.arn

  publish = false
  timeout = 5

  environment {

    variables = {
      a_lambda_var = "x"
    }

  }

}

resource "aws_iam_role" "test_role" {
  name = "test-delete-me-role"

assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

}

data "template_file" "test_policy" {
  /*
    Use any policy file, doesn't need to actually consume the variable below
  */
  template = file("policies/test_policy.tpl")

  vars = {
    my_var = aws_lambda_function.test.arn
  }
}

resource "aws_iam_role_policy" "test_role_policy" {
  name = "test-policy"
  role = aws_iam_role.test_role.id

  policy = data.template_file.test_policy.rendered

}

In the above configuration there are:

  • a Lambda function, which can use any existing IAM role (neither the function itself nor the role matters)
  • a test IAM role
  • a template file for an IAM policy that is defined with a variable (attached below for convenience; the actual content doesn't matter)
  • an inline policy to be attached to the test IAM role

When the infrastructure has been deployed and is in a steady state with no diffs detected, deploy an update to the Lambda by toggling a_lambda_var to another value such as "y".
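Concretely, the toggle is just a one-line edit to the environment block shown above:

```
environment {
  variables = {
    a_lambda_var = "y" # was "x"
  }
}
```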

Expected Behavior:
Only one change is detected by terraform plan/apply: the update to the Lambda function.

Actual Behavior:
Two changes are detected/predicted, in the following order:
a) aws_iam_role_policy.test_role_policy will change, with its single statement being dropped
b) the Lambda function changes due to the variable value change

Actual Approved Plan Behavior:
Only one modification is made (to the Lambda function), which contradicts the plan.

I didn't experience this problem in Terraform v0.11.x or earlier versions, and I've used this config for over six months with countless deployments. This bug may be related to open issue #21545.

test_policy.txt
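(The attachment isn't reproduced here. Since the report says any valid policy file works, a minimal hypothetical test_policy.tpl that consumes the variable might look like this:)

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "${my_var}"
    }
  ]
}
```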

Let me know if you need me to attach a test lambda package, but absolutely any package will allow you to reproduce the problem.
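As a side note (not part of the original report): Terraform 0.12 also ships the built-in templatefile() function, which renders a template without going through the template provider's data source. Rewriting the inline policy with it is one way to check whether the data source itself triggers the spurious diff; a minimal sketch:

```
resource "aws_iam_role_policy" "test_role_policy" {
  name = "test-policy"
  role = aws_iam_role.test_role.id

  # Render the same template inline instead of via data.template_file.
  policy = templatefile("policies/test_policy.tpl", {
    my_var = aws_lambda_function.test.arn
  })
}
```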

Labels: bug, service/iam, service/lambda

All 5 comments

Just wanted to provide additional info.

I have refined my test cases and used the following to get the expected results:

  • Terraform v0.11.14
  • AWS Provider v2.17.0
  • Template v2.1.2

These are the latest provider versions together with the latest non-0.12.x release of Terraform.

As soon as I upgrade to TF v0.12.3, run init -upgrade, and then run plan, I see additional projected changes for aws_iam_role_policy even though nothing will actually change.

Attachments:

  • Main Terraform file with redacted role
  • Plan generated when toggling the Lambda var, TF v0.11.14
  • Plan generated when toggling the Lambda var, TF v0.12.3

Terraform will indicate there is a change or potential change to aws_iam_role_policy. This happens every time the Lambda environment variable value is toggled and apply is rerun, so it's not just an artifact of the first run with v0.12.3; it happens every time.

I know from previous comments that terraform plan and execution aren't guaranteed to be equivalent, but this wasn't the previous behavior, and the more noise created during the plan, the more difficult it is to evaluate a plan and determine whether it's acceptable to implement/commit.

I just ran across something similar with ELB listeners (the aws_elb resource). I've got a dynamic listener block, and no matter how I feed it the list of listeners, it shows a remove and an add. If this is a no-op it won't matter, but I'm not sure I want to run this against an active production load balancer and find out the hard way that it causes a hiccup while re-creating the listener!

What's strange is that I have other load balancers in the same state, using the same resource block, that aren't showing changes.

```
  - listener {
      - instance_port     = 25043 -> null
      - instance_protocol = "http" -> null
      - lb_port           = 25043 -> null
      - lb_protocol       = "http" -> null
    }
  + listener {
      + instance_port     = 25043
      + instance_protocol = "http"
      + lb_port           = 25043
      + lb_protocol       = "http"
    }
```
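(The dynamic block itself isn't shown in the comment; this is a sketch of the shape being described, assuming the listeners are fed in as a list of objects. The variable and resource names here are hypothetical.)

```
variable "listeners" {
  type = list(object({
    instance_port     = number
    instance_protocol = string
    lb_port           = number
    lb_protocol       = string
  }))
}

resource "aws_elb" "example" {
  name               = "example"
  availability_zones = ["us-east-1a"]

  # One listener block is generated per entry in var.listeners.
  dynamic "listener" {
    for_each = var.listeners
    content {
      instance_port     = listener.value.instance_port
      instance_protocol = listener.value.instance_protocol
      lb_port           = listener.value.lb_port
      lb_protocol       = listener.value.lb_protocol
    }
  }
}
```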

This bug kind of defies the whole notion of "plan": for an infrastructure of only some 8 servers, on every change everything shows as changed because of those -> null line endings. It's totally impossible to spot what's actually being changed.

Anybody have any ideas how to fix this, please? 🙏

I'm having the same issue with AWS delivery streams. I don't change the variables but this happens on every apply.

Terraform version:

```
> terraform version
Terraform v0.12.28
+ provider.aws v2.48.0
```

```
processors {
  type = "Lambda"

    parameters {
      parameter_name  = "LambdaArn"
      parameter_value = "..."
    }
  + parameters {
      + parameter_name  = "BufferSizeInMBs"
      + parameter_value = "3"
    }
  + parameters {
      + parameter_name  = "BufferIntervalInSeconds"
      + parameter_value = "60"
    }
}
```

The Terraform config looks like this:

```
parameters {
  parameter_name  = "BufferSizeInMBs"
  parameter_value = var.buffer-size
}
parameters {
  parameter_name  = "BufferIntervalInSeconds"
  parameter_value = var.interval-seconds
}
```
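One detail worth checking in a case like this (an assumption on my part, not something confirmed in the thread): parameter_value is a string attribute, so if the variables are numbers they get converted on every plan. Making the conversion explicit removes one source of ambiguity:

```
parameters {
  parameter_name  = "BufferSizeInMBs"
  parameter_value = tostring(var.buffer-size) # explicit string conversion
}
parameters {
  parameter_name  = "BufferIntervalInSeconds"
  parameter_value = tostring(var.interval-seconds)
}
```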

I got something similar with a template for AWS Batch:

Nothing has changed, yet the environment vars get swapped around at random:

```
{
    ~ name  = "AWS_SIGNATURE_VERSION" -> "PGDATABASE"
    ~ value = "v4" -> "des"
},
{
    ~ name  = "AWS_REGION" -> "LIQUIBASE_CONTEXT"
    ~ value = "eu-central-1" -> "non-legacy"
},
{
    + name  = "AWS_SIGNATURE_VERSION"
    + value = "v4"
},
```

Terraform 0.12.29
AWS Provider 3.5

Edit: Well, I found my issue.
For anyone who cares: one of the env vars for the template was an empty string. In my opinion I should be able to pass in empty vars; if that is not allowed, then give an error rather than the random change I have been seeing for the last three days.
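For anyone hitting the same thing, a minimal sketch of guarding against it, assuming the environment variables are assembled as a list of name/value objects before being handed to the template (the names here are hypothetical):

```
locals {
  # Drop entries whose value is an empty string so they never
  # reach the rendered container definition.
  batch_environment = [
    for env in var.batch_environment : env
    if env.value != ""
  ]
}
```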
