Terraform: Refresh does not sync with new s3 remote state until next apply

Created on 27 Sep 2016 · 10 Comments · Source: hashicorp/terraform

If you update your Terraform configuration to use remote state, terraform refresh will not sync with the remote state until you run terraform apply again, even if you manually upload a locally created state file to the S3 bucket key. terraform plan will plan to create all of the resources anew.

Terraform Version

0.7.4

Affected Resource(s)

  • remote S3 state

Possibly an issue with how Terraform manages remote state at its core.

Terraform Configuration Files

variable "aws_region" {}
variable "aws_profile" {}

provider "aws" {
    region = "${var.aws_region}"
    profile = "${var.aws_profile}"
}

# Add this block after creating the resources initially
variable "state" {}
data "terraform_remote_state" "helper" {
    backend = "s3"
    config {
        bucket = "${var.state}"
        key = "helper.tfstate"
        region = "${var.aws_region}"
        profile = "${var.aws_profile}"
    }
}

resource "aws_iam_role" "helper" {
    name = "helper"
    assume_role_policy = <<CONFIG
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
CONFIG
}
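
For reference, a terraform_remote_state data source only exposes values that the referenced state records as outputs. A minimal sketch of how the data source above would typically be consumed, assuming helper.tfstate defines a hypothetical output named role_arn:

# Hypothetical: "role_arn" must be declared as an output in the
# configuration that produced helper.tfstate.
output "helper_role_arn" {
    value = "${data.terraform_remote_state.helper.role_arn}"
}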

Debug Output

This is the output of terraform plan after you have applied initially _without_ a remote backend.
https://gist.github.com/ajcrites/a42d53bb7716ac796e6dc8bf523dab72

Expected Behavior

Terraform should be able to refresh from remote state that you have uploaded manually.

Actual Behavior

Terraform does not refresh from remote state until it is updated with apply.

Steps to Reproduce

  1. Comment out or remove the remote state from the configuration above
  2. terraform apply -state=helper.tfstate
  3. Create a bucket for remote state. Upload helper.tfstate to the bucket
  4. terraform plan

Notice that terraform does not refresh from the manually updated remote state; it will plan to recreate the resources. terraform plan -state=helper.tfstate works as expected.

If you terraform apply again with the remote state definition, it will begin to work properly.
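
For comparison, on Terraform 0.7 the local working copy is normally associated with an S3 state via terraform remote config (the command referenced in the comments below); a sketch, where the bucket name and region are placeholders:

terraform remote config \
    -backend=s3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=helper.tfstate" \
    -backend-config="region=us-east-1"

After this, subsequent plan and refresh runs sync state with the S3 copy instead of the local terraform.tfstate.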

backend/s3 bug

All 10 comments

I think this just may be a misunderstanding of how to use remote state. Now if I use terraform remote config and set all of the same keys as in the example above, terraform plan/refresh constantly say

Remote state cache has no remote info

and I can proceed no further.

I was under the impression based on the documentation that you could use data in the config to set the remote state, but this doesn't seem to be the case. The fact that terraform remote config is causing this error for me now is probably unrelated.

It looks like the data resource for s3 remote state is not working here either. No matter how I do it, it never actually gets the data from the remote state and always uses the local terraform.tfstate to plan and apply. The documentation should be augmented with better instructions: do I need to run terraform remote config regardless? If so, what do I gain with the data way of doing it?

I'm experiencing something similar when using Terraform to provision an ECS cluster and service. The service pulls in the cluster remote state via a data resource. I've updated an IAM role in the cluster which caused a new role to be created. I can see the correct data in the ECS cluster remote state file on S3. However when running terraform plan in the ECS service it always uses the remote state data resource from the service remote state file on S3 and never actually grabs the latest cluster remote state file from S3. I had to manually recreate the deleted role to get the plan to succeed and then delete after apply.

I got the same problem: I ran terraform apply to create the S3 bucket and it crashed, so I had to manually delete the S3 bucket.

This is still an issue for me. I find myself having to comment out a ton of data resources when depending on new outputs from a referenced state file, simply so I can run the refresh command. It seems to impact data resources the most. Do these happen to run before the actual remote state refresh?

I lay out my modules in a hierarchy and use remote state to wire up lower level modules to higher level outputs.

This bug bites me when I change the contents of an output (say, rename a key in a map) that a lower level module uses. After renaming and running terraform apply (on the highest level module, which contains the output), I can see the renamed key in the tfstate in S3. However, when I try to run terraform apply on a lower level module, it won't find the renamed key: the configuration expects the new name, but the module's cached copy of the data source still has the old one. I'll get this error:

$ terraform apply

Error: Error asking for user input: 1 error(s) occurred:

  • local.vpc_cidr: local.vpc_cidr: key "idc1-us-east-2-aws_cidr" does not exist in map data.terraform_remote_state.global_vars.my_prv_cidr_map in:

${data.terraform_remote_state.global_vars.my_prv_cidr_map["idc1-us-east-2-aws_cidr"]}

My workaround was to change the name of the remote state data resource from:
data "terraform_remote_state" "global_vars" {
...
}

to

data "terraform_remote_state" "global_vars_v2" {
...
}

and update all references such as ${data.terraform_remote_state.global_vars.my_prv_cidr_map...} to use global_vars_v2 instead.

Then when I do terraform apply it will pull state data for the "new" data source and the problem disappears.

This seemed less error prone than modifying the remote state data in S3 directly, though I think that would also be an effective workaround.
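
For reference, a minimal sketch of the wiring described above (the CIDR value is made up; the other names come from the error message):

# Top-level module: declares the map whose key was renamed.
output "my_prv_cidr_map" {
    value = {
        "idc1-us-east-2-aws_cidr" = "10.0.0.0/16"    # hypothetical value
    }
}

# Lower-level module: reads the map through the remote state data source.
locals {
    vpc_cidr = "${data.terraform_remote_state.global_vars.my_prv_cidr_map["idc1-us-east-2-aws_cidr"]}"
}

The lookup fails whenever the key present in the module's cached copy of data.terraform_remote_state.global_vars lags behind what was just applied upstream.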

I am seeing this issue when running terraform plan -refresh=false; it will not pick up changes in remote state. Is that expected?

I see something like this as well. I share one S3 bucket among three tfstates, and I have it fully set up using one environment/tfstate. But when I run refresh and plan on the 2nd state, it tells me it needs to add acl: private and force_destroy: false to the bucket, even though the configuration used is exactly the same.

Hello! :robot:

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in _this_ issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
