Terraform: Local and remote state conflict on no changes

Created on 3 Dec 2015 · 16 Comments · Source: hashicorp/terraform

Hi,

I have stumbled upon an issue with remote state. We use remote state for different TF deployments. It usually works, but in one simple case it breaks. Steps to reproduce:

  1. terraform apply -var env=test
  2. terraform remote config -backend=s3 -backend-config="bucket=test-state" -backend-config="key=bucket.tfstate"
  3. Go to a different system/container and check out terraform files
  4. terraform remote config -backend=s3 -backend-config="bucket=test-state" -backend-config="key=bucket.tfstate"

This results in

Remote configuration updated
Error while performing the initial pull. The error message is shown
below. Note that remote state was properly configured, so you don't
need to reconfigure. You can now use `push` and `pull` directly.

Unknown refresh result: Local and remote state conflict, manual resolution required

The local .terraform/terraform.tfstate file is

{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "test-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {}
        }
    ]
}

The remote bucket.tfstate file is

{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "test-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {
                "bucket": "test-state",
                "environment": "test",
                "region": "eu-west-1"
            },
            "resources": {
                "aws_s3_bucket.terraform-state-s3": {
                    "type": "aws_s3_bucket",
                    "primary": {
                        "id": "test-state",
                        "attributes": {
                            "acl": "private",
                            "arn": "arn:aws:s3:::test-state",
                            "bucket": "test-state",
                            "cors_rule.#": "0",
                            "force_destroy": "false",
                            "hosted_zone_id": "Z1BKCTXD74EZPE",
                            "id": "test-state",
                            "policy": "",
                            "region": "eu-west-1",
                            "tags.#": "0",
                            "versioning.#": "1",
                            "versioning.69840937.enabled": "true",
                            "website.#": "0"
                        }
                    }
                }
            }
        }
    ]
}

The terraform files:
bucket.tf

provider "aws" {
  region = "${var.region}"
}

resource "aws_s3_bucket" "terraform-state-s3" {
  bucket = "${var.env}-state"
  acl = "private"
  force_destroy = "false"
  versioning {
    enabled = true
  }
}

outputs.tf

output "environment" {
  value = "${var.env}"
}

output "region" {
  value = "${var.region}"
}

output "bucket" {
  value = "${aws_s3_bucket.terraform-state-s3.id}"
}

variables.tf

variable "env" {
  description = "Environment name"
}

variable "region" {
  description = "AWS region"
  default     = "eu-west-1"
}

I am not sure what TF is trying to merge. There is no local state before fetching the remote. What's even more curious, I can fetch remote state from an unrelated TF build fine (a build which has completely different terraform files). I suspect the issue is somewhere in the remote bucket.tfstate file itself...

Labels: bug, core

All 16 comments

If you don't have local state before fetching remote, then what do you mean by "The local .terraform/terraform.tfstate file is"? Try changing the serial value in the local state file (the serial increases each time you modify state).
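
For example, a minimal sketch of that edit using jq (jq is my assumption here; any JSON-aware editor works):

# Bump the local serial so it no longer collides with the remote copy
# (sketch, assuming jq is installed)
jq '.serial = 1' .terraform/terraform.tfstate > state.tmp && mv state.tmp .terraform/terraform.tfstate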

Hi,

when I do terraform remote config -backend=s3 -backend-config="bucket=test-state" -backend-config="key=bucket.tfstate" in step 4, it creates that .terraform/terraform.tfstate, even though the command fails.

I also don't understand how I can get a merging error when I am starting from scratch and there's nothing to merge against...

Try adding -pull=false so it doesn't fetch the latest state from S3, as that one has the conflicting serial. https://terraform.io/docs/commands/remote-config.html
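
That is, step 4 with the flag added would look something like this (the bucket and key values are the ones from the original report):

terraform remote config \
  -backend=s3 \
  -backend-config="bucket=test-state" \
  -backend-config="key=bucket.tfstate" \
  -pull=false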

Well, that works... except it doesn't pull the remote state. It just creates that local .terraform/terraform.tfstate file (which is the same as in the previous steps). When I try to pull, I get the same error as described in the first post.

@mtekel I'll assume that you have fixed this by now, but I found that I can seed the local state file with just the remote information and an empty modules array. In your case, something like this:

{
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "test-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": [
    ]
}

Note that I removed both the version and serial as well. Just the remote info and empty modules array.
Run the terraform pull after that and it should just grab the remote file.
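
Putting the whole workaround together as shell commands, roughly (a sketch; the bucket and key are the ones from this issue):

# Replace the conflicting local state with a stub that only points at the remote
rm -f .terraform/terraform.tfstate
cat > .terraform/terraform.tfstate <<'EOF'
{
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "test-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": []
}
EOF
# Now fetch the real state from S3
terraform pull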

No, this still doesn't fix the issue. I have updated to 0.6.9. I still get

Initialized blank state with remote state enabled!
Error while performing the initial pull. The error message is shown
below. Note that remote state was properly configured, so you don't
need to reconfigure. You can now use `push` and `pull` directly.

Unknown refresh result: Local and remote state conflict, manual resolution required

The terraform.tfstate file contains this after the attempted pull:

{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "eee-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {}
        }
    ]
}

The remote tfstate looks like this:

{
    "version": 1,
    "serial": 0,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "eee-state",
            "key": "bucket.tfstate"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {
                "bucket": "eee-state",
                "environment": "eee",
                "region": "eu-west-1"
            },
            "resources": {
                "aws_s3_bucket.terraform-state-s3": {
                    "type": "aws_s3_bucket",
                    "primary": {
                        "id": "eee-state",
                        "attributes": {
                            "acl": "private",
                            "arn": "arn:aws:s3:::eee-state",
                            "bucket": "eee-state",
                            "cors_rule.#": "0",
                            "force_destroy": "true",
                            "hosted_zone_id": "Z1BKCTXD74EZPE",
                            "id": "eee-state",
                            "policy": "",
                            "region": "eu-west-1",
                            "tags.#": "0",
                            "versioning.#": "1",
                            "versioning.69840937.enabled": "true",
                            "website.#": "0"
                        }
                    }
                }
            }
        }
    ]
}

FWIW I'm also experiencing this exact problem. Even if the state is in sync (file checksums are the same) it throws an error unless the serial is changed or removed (and that fixes it for exactly one terraform command).
I _think_ it might have to do with the remote configuration only being used partially or something, because not all the commands seem to be working with respect to remote storage - for example, terraform remote config -disable says that it's not configured when it plainly is (I just configured it and the command returned success).

Also seeing this -- it just started today. I've been working around it by manipulating the serial number for a long time, but now the same error is happening all the time for no apparent reason.

Failed to read state: Unknown refresh result: Local and remote state conflict, manual resolution required

$ rm .terraform/terraform.tfstate
$ make remote
terraform remote config \
-backend=s3 \
-backend-config="region=us-east-1" \
-backend-config="bucket=genfare-awsops-resources" \
-backend-config="key=develop/terraform/src/test/us-east-1/terraform.tfstate" \
-backup=-
Initialized blank state with remote state enabled!
Remote state configured and pulled.

$ make refresh
terraform refresh
Failed to load state: Unknown refresh result: Local and remote state conflict, manual resolution required
Makefile:32: recipe for target 'refresh' failed
make: *** [refresh] Error 1

Also ran into this

I have a fix in #7320 which allows refreshing a local state with no resources.

I had the exact same problem as described in this issue: no existing local state, trying to configure remote and pull resulted in a state conflict. However, it only occurred in one of the three projects where I had started using Terraform.

Comparing the remote state files revealed a difference between the two working projects and the one which was not working: while the working state files had a serial above 0, the state file I was not able to pull had a serial of 0. I manually changed this to 1 (on the remote side) and from there on it started to work.
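
A sketch of that remote-side edit, assuming the AWS CLI and jq are available (the bucket and key are the ones from this issue):

aws s3 cp s3://test-state/bucket.tfstate bucket.tfstate          # download the remote state
jq '.serial = 1' bucket.tfstate > bucket.tfstate.bumped          # raise the serial above 0
aws s3 cp bucket.tfstate.bumped s3://test-state/bucket.tfstate   # upload the bumped copy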

I just had this issue and it seems to be due to the fact that my first S3 push had serial=0 and initializing locally from scratch also uses serial 0, so Terraform thinks they are the same version but with conflicting contents.

To prove it, I went into my local .terraform/terraform.tfstate, changed the serial from 0 to -1, and then I was able to move forward.

I guess until this is fixed, I will need to do two pushes from the same local copy to bump the serial number up.
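
The local-side variant of the same idea, as a sketch (again assuming jq): set the local serial below the remote's 0 so the remote copy looks newer.

# Make the remote serial (0) win over the local state (sketch, assuming jq)
jq '.serial = -1' .terraform/terraform.tfstate > state.tmp && mv state.tmp .terraform/terraform.tfstate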

Still waiting on a review of #7320.

+1

Same problem as highlighted. terraform apply, push remote state, delete all local state files, try to re-add remote to pull and get the merge issue.

_edit: terraform 0.7.4_

_edit: This appears to be happening when doing a terraform apply locally and adding a remote afterwards. Everything is successfully pushed/created; however, trying to pull the remote from a clean local always fails._

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
