Terraform: S3 remote backend in terraform.tfstate should not be common for all workspaces

Created on 6 Apr 2019 · 7 comments · Source: hashicorp/terraform

Hi,

First off, apologies if I've not followed a known pattern for multi-account configuration. If so, please can you point me to examples that I can use as references.

I've configured my Terraform code to use different backend.tfvars files for different AWS accounts. Unfortunately, Terraform persists the remote backend bucket and dynamodb_table information in a local .terraform/terraform.tfstate file as a single, shared data set for all workspaces.

Therefore, when switching between the backend.tfvars files for the different accounts, Terraform tries to migrate the state files to the second account.

  • terraform init -backend-config=./backend-prod.tfvars -> backend info saved in terraform.tfstate
  • terraform init -backend-config=./backend-dev.tfvars -> backend mismatch -> s3 migration

The multi-account AWS architecture docs state that workspaces are isolated.
Therefore, this .terraform/terraform.tfstate file should be defined at the workspace level, or have per-workspace separation within the JSON:

{
    "version": 3,
    "serial": 1,
    "lineage": "b98b7964-4198-df2f-c4e1-57e069hmm9b0",
    "workspaces": [
        {
            "workspace": "prod",
            "backend": {
                "type": "s3",
                "config": {
                    "bucket": "redacted_prod_bucket",
                    "dynamodb_table": "something_else_redacted_prod",
                    "encrypt": true,
                    "key": "this_too",
                    "profile": "yup_redacted_prod",
                    "region": "eu-west-1"
                },
                "hash": 839014315025159908048
            }
        },
        {
            "workspace": "dev",
            "backend": {
                "type": "s3",
                "config": {
                    "bucket": "redacted_dev_bucket",
                    "dynamodb_table": "something_else_redacted_dev",
                    "encrypt": true,
                    "key": "this_too",
                    "profile": "yup_redacted_dev",
                    "region": "eu-west-1"
                },
                "hash": 839014315025158808048
            }
        }
    ]
}

Terraform Version

Terraform v0.11.11
+ provider.aws v1.60.0

Terraform Configuration Files

The terraform.tfstate file can be found at {path_to_root_module}/.terraform/terraform.tfstate. Its contents are similar to:

{
    "version": 3,
    "serial": 1,
    "lineage": "b98b7964-4198-df2f-c4e1-57e069hmm9b0",
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "redacted",
            "dynamodb_table": "something_else_redacted",
            "encrypt": true,
            "key": "this_too",
            "profile": "yup_redacted",
            "region": "eu-west-1"
        },
        "hash": 839014315025157708048
    }
}

Expected Behavior

terraform workspace new prod
terraform init -backend-config=./backend-prod.tfvars

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

##########

terraform workspace new dev
terraform init -backend-config=./backend-dev.tfvars

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Actual Behavior

terraform workspace new prod
terraform init -backend-config=./backend-prod.tfvars

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

##########

terraform workspace new dev
terraform init -backend-config=./backend-dev.tfvars

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.

Steps to Reproduce

  • Ensure a clean environment with no .terraform directory in the module root path
  • Configure two backend.tfvars files
  • Run terraform init -backend-config=./backend-prod.tfvars
  • You will see a successful initialization
  • Run terraform init -backend-config=./backend-dev.tfvars
  • You will see the "migrate all workspaces to s3?" prompt
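For reference, the two backend files in this setup would look roughly like the following; all bucket, table, and profile names here are placeholders, not the reporter's actual values:

```hcl
# backend-prod.tfvars -- partial configuration for the "s3" backend
# (hypothetical names)
bucket         = "my-prod-state-bucket"
dynamodb_table = "my-prod-lock-table"
key            = "terraform.tfstate"
profile        = "prod"
region         = "eu-west-1"
encrypt        = true

# backend-dev.tfvars -- same keys, pointing at the dev account:
# bucket         = "my-dev-state-bucket"
# dynamodb_table = "my-dev-lock-table"
# profile        = "dev"
```

Running `terraform init -backend-config=./backend-prod.tfvars` merges one of these files into the `backend "s3" {}` block, and the merged result is what gets cached (and hashed) in .terraform/terraform.tfstate.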

Additional Context

I am configuring my terraform code for multiple AWS accounts, hence the multiple backend configurations.

My config:

  • backend.tfvars contain the account information (profile for credentials = AWS account)
  • I have multiple vpcs, each vpc = an environment
  • workspaces are used to allow common code for multiple environments

EDITS:

  • added the workspace creation before running terraform init


All 7 comments

Unfortunately, we have a very similar problem. We use Consul as the backend and want to use one workspace per stage. Furthermore, we don't want to manage a whole stage in only one state. Therefore we have separate "stacks" (something like solr, webapp, proxy, ...) for each stage, each stack with its own Consul backend configured (separate backend paths).

Switching from one backend to another isn't possible without Terraform wanting to migrate the state, even though different backends are configured per stack.

In my eyes, Terraform should not only manage backends per workspace within .terraform/terraform.tfstate; it should manage a distinct state for each backend defined. And there might be more backends than workspaces, as in our scenario described above.
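A sketch of that layout, with hypothetical stack names: each stack gets its own backend-config file pointing at a distinct path in the same Consul server, so the backends differ only in `path`:

```hcl
# backend-solr.tfvars (hypothetical) -- partial config for the "consul" backend
address = "consul.example.com:8500"
path    = "terraform/staging/solr"

# backend-webapp.tfvars (hypothetical) -- same server, different path:
# address = "consul.example.com:8500"
# path    = "terraform/staging/webapp"
```

Because only one backend configuration is cached per working directory, switching between these files triggers the same migration prompt described in the original report.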

It sounds like you are describing a situation where workspaces would help:

terraform workspace new prod
terraform init -backend-config=./backend-prod.tfvars

[ ...do stuff...]

terraform workspace new dev
terraform init -backend-config=./backend-dev.tfvars

@mildwonkey, when running the commands you suggested this is what happens:

terraform init -backend-config=./backend-prod.tfvars

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

##########

terraform workspace new dev
terraform init -backend-config=./backend-dev.tfvars

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.

Do you want to migrate all workspaces to "s3"?
  Both the existing "s3" backend and the newly configured "s3" backend
  support workspaces. When migrating between backends, Terraform will copy
  all workspaces (with the same names). THIS WILL OVERWRITE any conflicting
  states in the destination.

I've updated my original post to make the workspace reference more explicit.

The reason, from what I understand:
I am also using workspaces; the problem is that workspaces are being linked to a single account.
The docs around multi-account architecture suggest workspaces are account-independent.
This is not the case, as a local .terraform/terraform.tfstate is saved to track the workspace state file location:

{
    "version": 3,
    "serial": 1,
    "lineage": "5ced3fdf-a040-c1be-bla7-8e2d9b16f889",
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "redacted_prod_bucket",
            "dynamodb_table": "something_else_redacted_prod",
            "encrypt": true,
            "key": "this_too",
            "profile": "yup_redacted_prod",
            "region": "eu-west-1"
        },
        "hash": 839014315025159908048
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}

When the backend configuration changes, rather than discarding and recreating these cached references, Terraform tries to migrate. Migration makes sense in certain situations (such as moving from local to remote state files), but in this case the caching and migration behaviour is locking workspaces to a single account.

Hi @abdul-baki-slalom,

The terraform init command has an option -reconfigure that allows you to switch between backends without any migration. It causes Terraform to ignore altogether any existing backend configuration and just initialize the new one.

It's redundant to use both workspaces and multiple backend configurations at the same time, because each distinct backend has its own set of workspaces. The multi-account AWS architecture guide shows how to use multiple workspaces in a _single_ backend (that is, in the same S3 bucket in the same AWS account) while using assume_role to manage resources in other accounts. If you don't want to keep all of your environment states in the same AWS bucket and AWS account then that guide is not applicable to you.

Instead, you can ignore the workspaces feature entirely and just switch backends:

terraform init -reconfigure -backend-config=./backend-prod.tfvars
terraform init -reconfigure -backend-config=./backend-dev.tfvars

You can just use the default workspace in each case, because these backends are (assuming you've configured them properly) already distinct and thus don't need workspaces in order to store multiple states in them. Multiple workspaces are needed only if you wish to keep all of your states in the same backend, which is what the "Multi-account AWS Architecture" guide is about.

There's more information on this in the documentation section When to use Multiple Workspaces.
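The single-backend pattern that guide describes could look roughly like this; every name below (bucket, table, role ARN, account ID) is a placeholder, not something from the guide or this thread. The state for all workspaces lives in one central account, and the AWS provider assumes a role in the target account:

```hcl
# Sketch of the multi-account guide's approach -- one shared S3 backend,
# with cross-account access handled by assume_role (all names hypothetical):
terraform {
  backend "s3" {
    bucket         = "central-terraform-state"
    key            = "network/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    region         = "eu-west-1"
    encrypt        = true
  }
}

provider "aws" {
  region = "eu-west-1"

  assume_role {
    # In practice this would be selected per environment, e.g. from a
    # map variable keyed on terraform.workspace.
    role_arn = "arn:aws:iam::111111111111:role/Terraform"
  }
}
```

Here workspaces separate environment states within the single backend, while IAM roles separate the accounts the resources are created in.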

@apparentlymart,
Thanks for the post, I think the -reconfigure is the missing piece, I'll test it and let you know how I get on

Hello again!

We didn't hear back from you, so I'm going to close this in the hope that a previous response gave you the information you needed. If not, please do feel free to re-open this and leave another comment with the information my human friends requested above. Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
