Terraform: template_file resource unknown variable accessed

Created on 8 Mar 2017 · 32 Comments · Source: hashicorp/terraform

Part of a larger Terraform script is failing when rendering a simple template.

Terraform Version

Terraform v0.8.8

Affected Resource(s)

  • template_file

Terraform Configuration Files

variable "name" {
}

variable "environment" {
}

variable "account_id" {
}

data "template_file" "policy" {
  template = "${file("${path.module}/policy.json")}"

  vars = {
    bucket_name = "${var.name}-${var.environment}-logs"
    account_id  = "${var.account_id}"
  }
}

resource "aws_s3_bucket" "logs" {
  bucket = "${var.name}-${var.environment}-logs"

  tags {
    Name        = "${var.name}-${var.environment}-logs"
    Environment = "${var.environment}"
    Stack       = "${var.name}"
  }

  policy = "${data.template_file.policy.rendered}"
}

output "id" {
  value = "${aws_s3_bucket.logs.id}"
}

policy.json

{
  "Id": "log-bucket-policy",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${bucket_name}"
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${account_id}:root",
        "Service": "cloudtrail.amazonaws.com"
        },
      "Resource": "arn:aws:s3:::${bucket_name}/*",
      "Sid": "log-bucket-policy"
    }
  ],
  "Version": "2012-10-17"
}

Debug Output

https://www.dropbox.com/s/iwt5h3xxf8w1ot6/debug.log.gpg?dl=0

Expected Behavior

Template resource should render

Actual Behavior

Error message:

Error applying plan:

1 error(s) occurred:

* data.template_file.policy: failed to render : 11:35: unknown variable accessed: bucket_name

Steps to Reproduce

  1. terraform apply
Labels: bug, provider/template, regression

All 32 comments

Looks like your var is bucket_name, not bucket_named

@mitchellh sorry, that was a typo; I renamed the variable to see if that made any difference. Unfortunately the problem still exists. I've attached the debug log.

K will take a look! Thanks

I have the same issue.

Terraform version 0.8.7 && 0.8.8

JSON file (task-definitions/consul-snapshot.json)

[
   {
      "memory":32,
      "portMappings":[
         {
            "hostPort":0,
            "containerPort":8080,
            "protocol":"tcp"
         }
      ],
      "essential":true,
      "mountPoints":[

      ],
      "name":"${service_name}",
      "environment":[
         {
            "name":"SERVICE_NAME",
            "value":"${service_name}"
         },
         {
            "name":"CONSUL_ACL_TOKEN",
            "value":"${consul_acl_token}"
         },
         {
            "name":"AWS_ACCESS_KEY_ID",
            "value":"${aws_access_key_id}"
         },
         {
            "name":"AWS_SECRET_ACCESS_KEY",
            "value":"${aws_secret_access_key}"
         },
         {
            "name":"AWS_REGION",
            "value":"${aws_region}"
         },
         {
            "name":"CONSUL_S3_BUCKET_NAME",
            "value":"${consul_s3_bucket_name}"
         },
         {
            "name":"CONSUL_SNAP_INTERVAL",
            "value":"${consul_snap_interval}"
         }
      ],
      "image":"${registry_docker_image}",
      "dockerLabels":{
         "image":"${registry_docker_image}",
         "tag":"${docker_image_tag}"
      },
      "logConfiguration":{
         "logDriver":"json-file",
         "options":{
            "max-size":"100m",
            "max-file":"2"
         }
      },
      "cpu":0
   }
]

Terraform configuration

data "template_file" "consul_snapshot_definition" {
  template = "${file("${path.root}/task-definitions/consul-snapshot.json")}"

  vars {
    service_name          = "consul-snapshot"
    registry_docker_image = "${replace(module.consul-snapshot-repository.url, "https://", "")}:${var.consul_snapshot_version}"
    docker_image_tag      = "${var.consul_snapshot_version}"
    consul_acl_token      = "${var.consul_snapshot_acl_token}"
    aws_access_key_id     = "${var.consul_snapshot_access_key_id}"
    aws_secret_access_key = "${var.consul_snapshot_secret_access_key}"
    aws_region            = "${var.aws_region}"
    consul_s3_bucket_name = "${module.consul.bucket_name}"
    consul_snap_interval  = "1h"
  }
}

err: data.template_file.consul_snapshot_definition: failed to render : 15:17: unknown variable accessed: service_name

Same issue, with a variable name that I've quintuple checked

I'm having the same issue on Terraform v0.8.6.

I'm running into this with 0.9.2... and the template I've been using has existed in this form for quite some time, in plans dating back to 0.7.2 for me.

As I remove references to variables I know I'm passing, the error just changes to the next variable to be replaced. So it would appear the variables aren't being passed in correctly.

Across the several files this plan spans, I have a grand total of 9 data "template_file" resources; the ones that are failing are the ones with a count greater than 1.
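
For context, the counted pattern described above looks roughly like this (a minimal sketch in 0.8/0.9 syntax; the variable and file names are hypothetical, not taken from the failing plan):

data "template_file" "policy" {
  # One rendered copy per bucket; count.index selects the matching name.
  count    = "${length(var.bucket_names)}"
  template = "${file("${path.module}/policy.json")}"

  vars {
    bucket_name = "${element(var.bucket_names, count.index)}"
    account_id  = "${var.account_id}"
  }
}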

To get past this bug, I had to rename the template resource i.e.:

data "template_file" "policy"{....} to data "template_file" "policy-someotheranme"{...}

I changed to uploading the template in a file content block instead of using the rendered-template syntax.

Just to mention: I have now discovered that the error message was caused by the templated file producing an invalid IAM policy. When running terraform apply, AWS was rejecting the policy assignment; however, Terraform was reporting this somewhat strangely as a missing-variable error (which it clearly wasn't).

Fixing the invalid policy in the template resolved both the issue and the error.
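
One way to catch this class of problem earlier (a small debugging sketch, reusing the data source from the original report) is to expose the rendered document as an output and inspect it with terraform output before AWS ever sees it:

output "rendered_policy" {
  # The fully rendered IAM policy document, for eyeballing/validation.
  value = "${data.template_file.policy.rendered}"
}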

I'm now seeing this on 0.9.4 with a template that has been working fine with the 0.9 series so far.

I just ran into this issue, and I think I have a case similar to the invalid-policy one described above. Although in my case, it's because I'm trying to reference a module output that I just added, which for some reason isn't getting pushed to the state. But if I rename or remove that module output, I get an error right away saying the module doesn't have such an output. Not sure what's going on...

Edit: Aha! Just as I posted this, I figured out my problem. The output variable wasn't accessing the right attribute of a resource that I was trying to export, so it wasn't defined properly. The error kind of makes sense, but it feels like it should have been raised further up in the context. Maybe others have similar issues? Although if the same template file was working in earlier versions, perhaps it is a bug/regression for you guys.

Incidentally, the way I figured out that this was the issue was by pasting in a raw value for the passed-in var rather than referencing the resource attribute I meant it to be. When that worked, I realized that the problem was with the module output.

Turns out our template variable error was the result of a 404 in a terraform_remote_state data source.

Ahh... that sounds like a good clue, @jharley. Thanks!

Are you able to share which backend you were using when you saw that? If possible I'd like to try to reproduce your result as part of debugging this.

Of course: happy to help however we can!

S3 backend (no locking or anything, FWIW).
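
For anyone else chasing this, the shape of the data source in question is roughly the following (a sketch only; bucket, key, and region are hypothetical). A wrong bucket or key here produces the 404 that then surfaces downstream as the misleading template error:

data "terraform_remote_state" "shared" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"   # hypothetical
    key    = "shared/terraform.tfstate"  # hypothetical; a typo here 404s
    region = "us-east-1"                 # hypothetical
  }
}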

Having the same issue with 0.9.4 and 0.9.6 with a consul backend.

same issue on 0.9.6 with legacy 0.8.x state

I'm not sure if my issue is related, but I found myself having a similar problem. I eventually figured out what was wrong; perhaps this could help others.

definition.json:

[
  {
    "name": "service_name",
    "image": "${image}",
    "cpu": ${cpu},
    "memory": ${memory},
    "essential": true,
    "links": [],
    "portMappings": [
      { "containerPort": 8080, "protocol": "tcp" }
    ],
    "entryPoint": [],
    "environment": [
        { "name": "REDIS_HOST", "value": "${redis-host}" },
        { "name": "REDIS_PORT", "value": "${redis-port}" },
        ...
    ],
    ...
  }
]

main.tf:

data "template_file" "definition" {
  template = "${file("path/to/definition.json")}"

  vars {
    environment = "${var.environment}"

    image  = "${var.image}"
    cpu    = "${var.cpu}"
    memory = "${var.memory}"

    redis-host = "${var.redis-host}"
    redis-port = "${var.redis-port}"
    ...
  }
}

redis-host and redis-port are passed into this module from the output of an elasticache module. When I ran terraform apply, it failed with:

data.template_file.definition: failed to render : 4:17: unknown variable accessed: image

However, image was available. It turns out the issue was that I had this statement inside the elasticache resource body:

count = "${var.environment != "production" ? 1 : 0}"

Meaning the variables that were actually missing were redis-host and redis-port, rather than image.
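
A workaround that was common at the time for outputs of conditionally created (counted) resources is to pad the splat list so the output is always defined. A sketch only, with a hypothetical resource name and attribute:

output "redis_host" {
  # concat() with an empty-string fallback keeps this output defined even when count = 0.
  value = "${element(concat(aws_elasticache_replication_group.redis.*.primary_endpoint_address, list("")), 0)}"
}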

I may have experienced the above; however, amid the myriad of changes I went back and implemented fresh cuts of my template_file invocations, and the problem has gone away for me. It may very well have been a misidentification of an attribute.

Ok, I figured out what my problem was and - just like a couple of other cases - the main problem is that the error message is misleading. It was reporting the wrong variable name.

The problem was that one of the template variables was taking its value from a different module's output. That module output was using the lookup function to find a value in a map variable whose values are lists; lookup did not support non-string map values at the time, so it was failing somewhere along the way.

FWIW, I still think there's a bug here in that the error reporting needs to be fixed. This would have been a much simpler problem to resolve.

@gsaslis would you mind sharing the config you had which illustrates the bad error message? I'd like to see about fixing it, but I'm not sure I exactly understand what went wrong there... the part about using lookup with maps is what I'm curious about, since that seems different than the other examples shared here.

Absolutely:

So in a module I've called naming, I had:

variable "frontend_fqdn_aliases" {
  type = "map"

  default = {
    development = ["localhost"]
    test = ["host1"]
    staging = ["redacted1", "redacted2", "redacted3"]
    preprod = ["redacted4", "redacted5", "redacted6"]
    production = ["redacted7", "redacted8", "redacted9"]
  }

}

output "frontend_fqdn_aliases" {
  value = "${lookup(var.frontend_fqdn_aliases, var.environment, "No way this should happen")}"
}

Then, when using this module, I had:

data "template_file" "staging" {
    template = "${file("${path.module}/../templates/chef_environments/staging.tpl.json")}"

    vars {
        ... 
             frontend_fqdn_aliases = "${module.naming.frontend_fqdn_aliases}"
        ... 
    }
}

resource "null_resource" "local" {
  triggers {
    template = "${data.template_file.staging.rendered}"
  }

  provisioner "local-exec" {
    command = "echo '${data.template_file.staging.rendered}' > ../../chef-repo/environments/staging.json"
  }
}

And here is the relevant segment of the staging.tpl.json:

...
      "frontend": {
        "server_name": "${frontend_fqdn}",
        "server_aliases": "${frontend_fqdn_aliases}"
      }
...
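
Since lookup() in those releases only handled maps of plain string values, one possible workaround (a sketch only; the values below are placeholders) is to keep the aliases as delimited strings in the map and split them wherever a real list is needed:

variable "frontend_fqdn_aliases" {
  type = "map"

  default = {
    development = "localhost"
    staging     = "host1,host2"   # placeholder, comma-delimited
  }
}

output "frontend_fqdn_aliases" {
  value = "${lookup(var.frontend_fqdn_aliases, var.environment)}"
}

The consuming side can then call split(",", module.naming.frontend_fqdn_aliases) where an actual list is required.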

I got this with Terraform v0.8 but not with v0.9.8. I suspect there was a bug and it got fixed at some point between these two releases.

More details for the curious:

Specifically, my error was:

Error running plan: 1 error(s) occurred:

* 1:3: unknown variable accessed: var.aws_region in:

${var.aws_region}

For me, this occurs whether I:

  • Provide aws_region as a variable with -var, or
  • Set aws_region in kubernetes-cluster.tf to a hardcoded value, or
  • Provide aws_region as a variable in terraform.tfvars.

If I set the aws_region in this module's variables.tf (shown below) to a default value (say, us-east-1), it does not yield this error. I also get this issue even after changing the name of this identically-named variable at the root of the folder.

This is what I have:

kubernetes_controller.tf

module "kubernetes-cluster" {
  source = "./modules/kubernetes-cluster"
  aws_region = "${var.aws_region}"
  kubernetes_controller_instance_size = "${var.kubernetes_controller_instance_size}"
}

variables.tf

variable "aws_region" {
  description = "The AWS region onto which this infrastructure will be deployed."
}

variable "kubernetes_controller_instance_size" {
  description = "The size for our Kubernetes controller."
  default = "t2.micro"
}

modules/kubernetes-cluster/variables.tf

variable "aws_region" {
  description = "The region onto which this cluster will be deployed."
}

variable "kubernetes_controller_instance_size" {
  description = "The size of your Kubernetes controller."
}

modules/kubernetes-cluster/main.tf

provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_instance" "kubernetes_controller" {
  ami = "${data.aws_ami.kubernetes_instances.id}"
  instance_type = "${var.kubernetes_controller_instance_size}"
}
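
For reference, the workaround described above (giving the module variable a default) would look like this; the region value is only illustrative:

variable "aws_region" {
  description = "The region onto which this cluster will be deployed."
  default     = "us-east-1"
}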

What fixed the issue for me was the following.
My error was alerting on a userdata script I was using.
This is a bash script that referenced variables created by the same script.

For example:
CHEF_CLIENT_VERSION="12.20.3"

was referenced as:
curl --silent --show-error --retry 3 --location https://omnitruck.chef.io/install.sh | bash -s -- -v "${CHEF_CLIENT_VERSION}"

After I removed the curly braces and changed the reference to:
curl --silent --show-error --retry 3 --location https://omnitruck.chef.io/install.sh | bash -s -- -v "$CHEF_CLIENT_VERSION"

it worked.
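
An alternative to dropping the braces, if you want to keep the ${VAR} form in the shell script, is to escape the interpolation so the template renderer leaves it alone: in template_file templates, $${...} renders as a literal ${...}. A sketch of the same line with the escape:

curl --silent --show-error --retry 3 --location https://omnitruck.chef.io/install.sh | bash -s -- -v "$${CHEF_CLIENT_VERSION}"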

FYI I had this same issue, and it was because I had this:

output "app_repository" { value = "${aws_ecr_repository.app.url}" }

which was outputting a null value, as the attribute is repository_url and not url.

This error is happening for a reason; it's just that the wording is very misleading.
If you have this issue and have double-checked the variable names, check that the values are actually populated!
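
For reference, the corrected output from the comment above would be:

output "app_repository" { value = "${aws_ecr_repository.app.repository_url}" }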

😄 👌

This is horrible :-(

I fixed my issue: I was looking up a value in a map that was not there. OK, mistakes happen, but the fundamental issue at heart here is that Terraform is so bloody hard to debug. Is there not a way to easily see the inputs and outputs of modules?
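
One low-tech way to get that visibility (a sketch, reusing an output name that appears earlier in this thread purely as an example) is to re-export the module output at the root, so it shows up in terraform output and in plan diffs:

output "debug_consul_bucket_name" {
  value = "${module.consul.bucket_name}"
}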

I ran into this issue. It was from a module's output being used as a variable in another module that's rendering a template. The problem is that the output was never generated (it's not in the tfstate) because the resource it references has count = 0.

I'm on terraform v0.10.8

I am seeing this problem as well, and haven't found a root cause. I agree the error is completely misleading because I get the error even when using variables that are used successfully elsewhere. I suspect it has something to do with how it's appearing in the template.

v0.11.7

The solution from @nathaniela solved my problem. 👍

I was facing this error when trying to render a user_data template.
I just removed the curly braces, and it is working now.

This issue has been automatically migrated to terraform-providers/terraform-provider-template#41 because it looks like an issue with that provider. If you believe this is _not_ an issue with the provider, please reply to terraform-providers/terraform-provider-template#41.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
