Terraform: Terraform S3 Backend does not recognize multiple AWS credentials

Created on 31 Aug 2018 · 10 comments · Source: hashicorp/terraform

I've been trying to store Terraform state in an S3 bucket in a non-default AWS account. When initializing the Terraform S3 backend I get an access-denied error. I enabled debugging and found that the S3 backend was using the default account from my shared AWS credentials file. The backend should really honor the profile I'm defining in my aws provider block.

terraform_version 0.11.8

provider "aws" {
  region  = "us-east-1"
  profile = "non-default aws account"
}

terraform {
  backend "s3" {
    bucket = "TF-S3-Bucket"
    key    = "folder/statefile"
  }
}

terraform init
Initializing modules...

  • module.mine

Initializing the backend...
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: 37C678457C37B5FD, host id: +zQJP6lg11NEvpMPkqNNy3AAgb8rxOs+G2Jf+RpT405CUABwEkeN2xi4Se0t2v1H8E7OPjLSCFk=

~/.aws/credentials
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxxxx

[non-default aws account]
aws_access_key_id = xxxx
aws_secret_access_key = xxxxxx

Relates to https://github.com/hashicorp/terraform/issues/13589

Label: backend/s3

Most helpful comment

The backend will not use the provider configuration. If you are using profiles or roles in your backend, you must put profile = and/or role_arn = directly into the backend {...} block. And yes, this means you cannot use any interpolation in there. HashiCorp recommends you use Backend Partial Configuration, which runs at init time only.

So good luck sharing remote state in an environment where AWS permissioning profiles/roles may vary (i.e., multi-account CI/CD with state stored in a consistent account). I guess HashiCorp wants to sell you Terraform Enterprise for this.

All 10 comments

I'm seeing this too. The only workaround I've found is in this comment

EDIT: @KevinKirkpatrick I just installed boto into my general Python environment and that fixed the profile issue for me. So the profile feature depends on Boto to function.

aws.provider.profile will be ignored if AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY is set.
Issuing unset AWS_ACCESS_KEY_ID; unset AWS_SECRET_ACCESS_KEY worked for me!

The backend will not use the provider configuration. If you are using profiles or roles in your backend, you must put profile = and/or role_arn = directly into the backend {...} block. And yes, this means you cannot use any interpolation in there. HashiCorp recommends you use Backend Partial Configuration, which runs at init time only.

So good luck sharing remote state in an environment where AWS permissioning profiles/roles may vary (i.e., multi-account CI/CD with state stored in a consistent account). I guess HashiCorp wants to sell you Terraform Enterprise for this.
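In practice, "put the profile directly into the backend block" looks like the following sketch, reusing the bucket, key, and profile names from the original report (the values must be literals, since backend blocks allow no interpolation):

```hcl
terraform {
  backend "s3" {
    bucket  = "TF-S3-Bucket"
    key     = "folder/statefile"
    region  = "us-east-1"
    profile = "non-default aws account" # literal only; no variables or interpolation here
  }
}
```

After changing any backend setting, `terraform init` must be re-run for it to take effect.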

As a workaround, I use this:

for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_ACCOUNT_ID AWS_DEFAULT_REGION AWS_REGION; do
  if [ -n "${!var}" ] ; then
    echo "$var is set, unsetting" # to ${!var}"
    eval "unset $var"
  fi
done

Right before terraform plan
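A variant of the loop above that avoids mutating the current shell is to strip the variables only from the child process's environment with `env -u` (the `run_without_aws_keys` wrapper name is hypothetical, and `env -u` is a GNU/BSD extension rather than strict POSIX):

```shell
# Run a command with the static AWS credential variables removed from its
# environment, so the AWS SDK falls back to the shared credentials file.
run_without_aws_keys() {
  env -u AWS_ACCESS_KEY_ID -u AWS_SECRET_ACCESS_KEY -u AWS_SESSION_TOKEN "$@"
}

# The variables stay set in the parent shell; only the child sees them unset.
AWS_ACCESS_KEY_ID=dummy run_without_aws_keys sh -c 'echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-unset}"'
```

Used as `run_without_aws_keys terraform plan`, this leaves your interactive session's credentials untouched.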

@useafterfree Still not working for me. I have to set either AWS_PROFILE or AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

$ set | grep AWS
$ ~/bin/terraform plan
Failed to load backend: 
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
    Please see https://terraform.io/docs/providers/aws/index.html for more information on
    providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".
$ export AWS_PROFILE=terraform
$ ~/bin/terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

I'm using the latest Terraform release,

$ ~/bin/terraform version
Terraform v0.11.10
+ provider.aws v1.50.0

boto libs are current,

$ pip list | grep boto
boto3           1.9.59 
botocore        1.12.59

The profile attribute is being ignored.

$ cat *.tf
terraform {
  backend "s3" {}
}

provider "aws" {
  version = "~> 1.50"
  region  = "us-east-1"
  profile = "terraform"
}
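With an empty backend "s3" {} block like the one above, the remaining settings can be supplied at init time via partial configuration. A sketch using the real `terraform init -backend-config` flags, with placeholder bucket/key values:

```
terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=folder/statefile" \
  -backend-config="region=us-east-1" \
  -backend-config="profile=terraform"
```

The same key=value pairs can also be collected in a file and passed as `-backend-config=path/to/backend.hcl`.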

The issue is still happening with terraform 0.12.6.

If the AWS_PROFILE, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY env vars are set, Terraform fails to init multiple backends.

Using multiple profiles with the AWS CLI works fine:

$ cat ~/.aws/credentials
[default]
region=eu-west-2

[ops]
aws_access_key_id=xxx
aws_secret_access_key=xxx

[dev]
aws_access_key_id=xxx
aws_secret_access_key=xxx

----------------------------

$ aws s3 ls --profile ops
2019-07-09 10:38:26 terraform-ops-state-xxx

$ aws s3 ls --profile dev
2019-06-12 10:32:55 terraform-dev-state-xxx

However, if I add an additional S3 backend, Terraform fails to authenticate.

$ cat  backends.tf
provider "aws" {
  alias = "dev"

  region  = "eu-west-2"
  version = "~> 2.21"
  profile = "dev"
}

data "aws_caller_identity" "dev_peer" {
  provider = "aws.dev"
}

data "terraform_remote_state" "dev_base_networking" {
  backend = "s3"
  config = {
    bucket  = format("terraform-%s-state-xxx", "dev")
    key     = "base-networking/terraform.tfstate"
    region  = "eu-west-2"
    profile = "dev"
  }
}

----------------------------

$ terraform plan
...
data.aws_caller_identity.dev_peer: Refreshing state...

Error: Error loading state error
  on backend.tf line 16, in data "terraform_remote_state" "dev_base_networking":
  16:   backend = "s3"

error loading the remote state: AccessDenied: Access Denied

The only way to make it work is to unset both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Still true for me on v0.12.21. :-(
Workaround is to run
AWS_PROFILE= terraform apply

This is still happening on Terraform 0.12.25; the only way to make it work is to set AWS_PROFILE=profile_name before calling terraform.

Multiple fixes for credential ordering, automatically using the AWS shared configuration file if present, and profile configuration handling of the S3 Backend have been merged and will release with version 0.13.0-beta2 of Terraform.

I'm going to lock this issue because it has been closed for _30 days_. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
