terraform env command doesn't work with different AWS account ID that needs 2 IAM roles

Created on 17 Apr 2017 · 12 comments · Source: hashicorp/terraform

Summary

Configuring terraform env with two AWS account IDs (one staging account, one production account) does not work.

Terraform Version

0.9.3

Affected Resource(s)

terraform env

Expected Behavior

terraform env should be usable across two AWS accounts

Actual Behavior

terraform env does not work across two separate AWS accounts

Steps to Reproduce

  1. The current default environment runs on staging.
  2. Create a new, empty production environment:
     terraform env new prod
  3. Switch to the production env.
  4. Run STS assume-role to assume the role for the production environment (see the sketch after this list).
  5. I want to pull the remote state, therefore I run:

```
terraform init \
    -backend-config="bucket=prod-bucket" \
    -backend-config="key=tfstate/prod.tfstate" \
    -backend-config="region=ap-northeast-1"
```

NOTE: the S3 configuration is already in a separate file, called backend.tf:

```
terraform {
  backend "s3" {
    encrypt = "true"
  }
}
```

  6. Run `terraform plan`, but Terraform shows that it wants to recreate existing resources.
  7. I want to switch back to `default`: access denied.
  8. Run the STS assume-role command to switch back to the staging account; `terraform env switch default` works.
  9. I want to delete the `production` env, so I run `terraform env delete production`: access denied.
  10. Run the STS assume-role command to switch back to the prod account and run `terraform env delete production`: access denied.
  11. Now I'm stuck with an unusable `production` Terraform environment, because it seems like every time I want to delete it, Terraform wants to check both environments, and my STS assume-role has access tokens for only one environment at a time.
### Important Factoids
Previously, in Terraform 0.8, I used the `remote` command to switch environments:
1. Run STS assume-role to switch to the `production` environment role.
2. Run the following commands:

```
terraform remote config -disable

# rename any backup tfstate
mv ${WORKSPACE}/tf/*.tfstate ${WORKSPACE}/tf/.terraform/sento.tfstate.backup.$(date "+%Y-%m-%d.%H:%M:%S")

# pull and resync tfstate from the new env in S3
terraform remote config \
  -state="${TFSTATE}" \
  -backend=S3 \
  -backend-config="bucket=${S3_BUCKET_NAME}" \
  -backend-config="key=tfstate/${TFSTATE}" \
  -backend-config="region=${REGION}" \
  -backend-config="encrypt=true"
```

I would expect the same to work with the new env and init commands, but apparently it's not that simple anymore.

Labels: backend/s3, enhancement

All 12 comments

Hi @jrlonan,

This is working as expected right now, though I think we can handle a similar workflow with some manual intervention. I'll work on an example with the new backend system.

I think we will want to at least have a documented way to access different environments with different credentials, even if it still requires the user to properly set up the S3 bucket policies.

@jbardin
Thank you! Looking forward to the enhancement! :)

By the way, is there any way to fix the terraform production env, apart from adding cross-account bucket permissions to the IAM role?

@jrlonan,

Yes, besides being able to write to the corresponding state file, the credentials you're operating with will need to be able to list all keys in the bucket (or at least keys prefixed with env:/; I don't want to specify the implementation details, but that may be required for a multi-user policy).
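
For illustration, a sketch of such a policy as a Terraform aws_iam_policy_document; the bucket name and exact action set are assumptions of mine, not a vetted multi-user policy:

```
# Assumption: bucket name and actions are illustrative only; this mirrors the
# requirements described above rather than an official recommendation.
data "aws_iam_policy_document" "state_access" {
  # allow listing the state bucket, at least for keys under the env:/ prefix
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::staging-bucket"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"
      values   = ["env:/*"]
    }
  }

  # allow reading and writing the corresponding state file itself
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::staging-bucket/env:/prod/*"]
  }
}
```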

You can't use environments if you're going to reconfigure the backend between switching your logical "environments" locally. The point of the env command is to switch to a named state file, so changing the bucket and key defeats that purpose.

When you ran terraform env new prod, that created an environment named "prod" and the associated state file in the backend you already had configured. When you changed the configuration to a "prod" bucket and state file, you reconfigured the backend to use those new settings, possibly leaving behind the old state files (you didn't specify what the migration output was during the init command). The terraform init command doesn't pull anything; the state is always stored and accessed remotely.
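
To illustrate where terraform env new prod put that state, here is a hypothetical listing of the originally configured bucket; the bucket and key names are assumptions, only the env:/ prefix convention comes from this discussion:

```
# Abbreviated output of: aws s3 ls s3://staging-bucket --recursive
tfstate/terraform.tfstate             # the "default" environment's state
env:/prod/tfstate/terraform.tfstate   # created by `terraform env new prod`
```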

@jbardin, for the sake of this use case, a workaround could be to handle two backends:

  • avoid using env
  • have, for example, a symlinked file backend.tf pointing to the backend defined for the current env, e.g. backend.stg
  • switching env would then be (a sketch follows the reply below):

    • do assume-role

    • symlink to your env, e.g. backend.tf -> backend.prd

Do you see any drawbacks with the mentioned approach?

@dav009,

Off the top of my head, I think that would work as long as you don't migrate the states when switching. It might be good to just remove .terraform/terraform.tfstate when switching envs before running init again.

I don't like the idea of having to recommend users touch the .terraform/ files, since how terraform uses them may change over time.

I mentioned it in an unrelated issue, but I was thinking about a flag like terraform init -reconfigure to allow a user to change the init configuration without checking the previously stored config.
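
Putting the two suggestions together, a minimal sketch of that switching flow, assuming hypothetical per-environment files backend.stg and backend.prd (and noting the caveat above about touching .terraform/):

```
#!/bin/sh
# switch-env.sh: hypothetical helper, all file names are assumptions
set -eu
ENV="$1"    # e.g. "stg" or "prd"; run assume-role for the matching account first

# point backend.tf at the per-environment backend definition
ln -sf "backend.${ENV}" backend.tf

# drop the locally cached backend config so init starts clean
rm -f .terraform/terraform.tfstate

# re-initialize against the newly selected backend
terraform init
```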

I'm doing this as follows:

I have an AWS organisation with multiple accounts. I always authenticate with the AWS credentials of the root account. That account can AssumeRole into the sub-accounts to provision them.

So, given:

```
variable "account_ids" {
  description = "Terraform state environments mapped to the target account ID"
  type        = "map"

  default = {
    "dev"     = "111111111111"
    "staging" = "222222222222"
    "prod"    = "333333333333"
  }
}

provider "aws" {
  allowed_account_ids = ["${lookup(var.account_ids, terraform.env)}"]
  region              = "${var.region}"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(var.account_ids, terraform.env)}:role/OrganizationAccountAccessRole"
  }
}
```

...

then

```
terraform env select dev
terraform plan
```

This will plan/apply into the correct AWS account by assuming the correct role, and will nicely fail if you have forgotten to run terraform env select or if the selected env does not exist in account_ids.

Since the release of workspaces for the new backend, has this been addressed? Please provide documentation on the correct way to use workspaces with separate AWS accounts, or the workaround, as I can't seem to find it anywhere. Thank you.

Are there any plans to support this? An ideal solution would allow the usage of multiple accounts and allow the state to be stored in different buckets depending on the account. That way you can keep your dev/stg/prd states in your dev/stg/prd accounts.

Did anyone find a recommended way to do this with workspaces?

I don't think this works back as far as Terraform 0.9, but recent versions have direct support for assuming roles: you can set role_arn in the backend configuration to specify which role you wish to use.
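
For example, a minimal sketch of an S3 backend block using role_arn; the bucket, key, region, and role name here are placeholders of mine:

```
terraform {
  backend "s3" {
    bucket  = "central-terraform-state"    # assumption: a centralized state bucket
    key     = "tfstate/terraform.tfstate"  # assumption: placeholder key
    region  = "ap-northeast-1"
    encrypt = true

    # the backend assumes this role for state operations, independently of
    # any provider-level assume_role configuration
    role_arn = "arn:aws:iam::111111111111:role/TerraformStateAccess"  # hypothetical role
  }
}
```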

The S3 backend docs have a guide on setting up cross-account access, which shows one way to do this using AssumeRole for the AWS _provider_ combined with a centralized S3 bucket. This is the setup that works best with Terraform's workspace workflow.

Workspaces are generally not the best way to separate different environments. They work better for creating _temporary_ separate deployments for development/testing purposes. To fully isolate your environments, it's better to instead have a separate root configuration for each and use modules to define the common elements. The various separate root modules then give you a proper place to keep the various differences between environments, such as different backend configuration.
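
As a sketch of that layout, here is one root configuration per environment calling a shared module; the directory names and module interface are assumptions:

```
# envs/prod/main.tf: one root module per environment, each with its own backend
terraform {
  backend "s3" {
    bucket  = "prod-bucket"
    key     = "tfstate/prod.tfstate"
    region  = "ap-northeast-1"
    encrypt = true
  }
}

# the common elements live in a shared module reused by every environment
module "infrastructure" {
  source      = "../../modules/infrastructure"
  environment = "prod"
}
```

The staging root would mirror this file with its own bucket, key, and credentials.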

@apparentlymart Thank you for this. I find this concept very interesting and will go read about modules.

The headline seems relevant insofar as it says:

Modules are used to create reusable components in Terraform as well as for basic code organization.

I presume in my case the "reuse" part would be what I'm after. For example, it might not be best practice, but I could create a single module called "infrastructure" which I can "reuse" in each environment.

To make sure I understand correctly, though: are modules more suitable than workspaces even when the infrastructure being managed is 100% identical? In my specific case it would seem that workspaces fit my needs, apart from being able to "tie" a workspace to a particular AWS account.

I touched on this here, and with you being a contributor I would love your input on that Stack Overflow question.

Hi @davidgoate! I wrote a more lengthy comment on a similar topic over in #18632 recently. I think that answers the questions you asked here.
