terraform remote config overwrites existing state file if local exists - s3 - (v0.6.16)

Created on 31 May 2016 · 8 comments · Source: hashicorp/terraform

Terraform Version

0.6.16

Affected Resource(s)

The remote config command:

terraform remote config \
  -pull=false \
  -backend=s3 \
  -backend-config="bucket=sample_tf_bucket" \
  -backend-config="region=us-east-1" \
  -backend-config="key=dev/security_groups.tfstate" \
  -backend-config="access_key=ABCD..." \
  -backend-config="secret_key=AbCd..."

Expected Behavior

My expectation was that it would either raise an error such as "State files mismatched", or maybe "Local state exists, unable to sync": something other than overwriting what's in S3 with what's local.

Actual Behavior

When I run the command above, for a template that's already been configured, I get the simple output:

Remote configuration updated
Remote state configured and pulled.

The version history on the S3 object shows that it has indeed been overwritten. This can cause accidental loss of state data.

NOTE: The state file is also overwritten when I run a "plan".

Labels: bug, core

All 8 comments

Hi @craigmonson! Sorry for the delay in responding to this. This doesn't look too good, and I agree the behaviour is confusing. I'll investigate this, but want to loop in @phinze or @mitchellh to discuss what the intended outcomes are here.

I faced the same issue, where the remote tfstate file was overwritten.

Terraform Version

0.6.16

Affected Resource(s)

The remote config command:

$ terraform destroy
$ terraform remote config \
  -backend=s3 \
  -backend-config="bucket=<bucket_name>" \
  -backend-config="key=terraform/tfstates/<filename>.tfstate" \
  -backend-config="region=us-west-2"

Expected Behavior

My expectation was that it would either raise an error ("State files mismatched") or print a clear message about the overwrite, and that it would never overwrite the remote file unless I ran terraform remote push.

Actual Behavior

When I run the remote config command above, for a template that's already been configured, I get the simple output:

Remote configuration updated
Remote state configured and pulled.

The version history on the S3 object shows that it has indeed been overwritten. This caused accidental loss of my state data.

I noticed this issue happens when the local file has a higher serial number compared to the remote tfstate file.

Feels like this should get some more attention, as it can potentially put you in a very nasty conflict state.

Yeah, this caused a lot of confusion for us, since we have builds that work on different environments / tfvars files. It's not nice, for sure. Wish I had time to investigate more... :(

We have a similar issue where we have multiple environments. We use S3 for our remote state.

We run a terraform plan and apply in one environment, then switch the remote config to point at the other environment. When we run plan again we get errors, and, more importantly, the remote state for the first environment has been overwritten with data from the second.

Our current workaround, when we need to work on another environment, is to delete the .terraform directory, change the remote config, and run terraform get; then we are good.

Same issue here when switching between environments (v0.7.13), and it is apparently not planned to be fixed.

We worked around it by deleting the existing ".terraform" folder, with its tfstate files, before configuring another environment.

See the link below for reference:
https://www.bountysource.com/issues/30091898-state-getting-uploaded-to-s3-when-it-shouldn-t-be

Hi @craigmonson, and everyone else who commented here! Thanks for opening this issue and sorry we've let it languish here for so long.

The architecture for remote state was completely redesigned for 0.9, and in the process we put in place several mechanisms to prevent these sorts of collisions:

  • State files now have a "lineage" as well as a "serial". Lineage is a unique value assigned when a new, empty state is created and never changed afterwards, so it allows us to recognize the difference between a different version of the same state and a different state entirely.
  • With the backend setup that's now folded into the terraform init command, there's first-class support for migrating states from one backend to another with explicit confirmation, and with checks to ensure that one state isn't overwritten by another of differing lineage (see the sketch after this list).

A common cause of this problem (which I hit myself several times, so I can empathize!) is deploying the same configuration to multiple environments, with a separate state file for each. It used to require a lot of care to switch between them using terraform remote config without accidentally clobbering one environment's state with another. In addition to the safeguards above added in 0.9, the new state environments feature gives Terraform a first-class notion of multiple states for a single configuration. It is gradually rolling out across all of the remote backends; at the time of writing it is present in the consul and s3 backends, with more to come.

All of the above changes should have now indirectly addressed the concern this issue was raising, so I'm going to close it. Thanks to everyone for your patience!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
