Terraform: Terraform S3/Backend ignoring profile parameter

Created on 17 Jul 2019 · 11 comments · Source: hashicorp/terraform

Hi there

We are facing issues with Terraform in a Cross-Account Setup. Our Terraform version is shown below:

terraform version
Terraform v0.12.4

Terraform does not use the profile = xxx parameter provided in the backend configuration, so we get an access-denied error when running terraform plan or terraform apply.

terraform {
  required_version = ">= 0.12"

  required_providers {
    aws = ">= 2.18.0"
  }

  backend "s3" {
    encrypt        = true
    bucket         = "xxxxxxx"
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "xxxxxxx"
    profile        = "xxxxxxx"
  }
}

Debug Output

Terraform just uses the credentials provided via the AWS environment variables, but those are not valid for the S3 bucket and backend, for which we configured different credentials in the backend configuration shown above.

2019/07/17 10:59:31 [INFO] AWS Auth provider used: "EnvProvider"

Crash Output

Leading to this error:

Error: Error loading state: AccessDenied: Access Denied
        status code: 403

Expected Behavior

Terraform should use the configured backend profile for every backend-related API call, and the environment/provider credentials for every resource- or data-related API call.
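The expected split can be sketched in HCL like this (bucket, table, and profile names are placeholders, not values from this issue):

```hcl
terraform {
  backend "s3" {
    encrypt        = true
    bucket         = "state-bucket"
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "state-locks"
    # Expected: used for all state and lock API calls
    profile        = "state-account"
  }
}

# Expected: the provider's credentials (here a different profile) are used
# for every resource and data-source API call.
provider "aws" {
  region  = "eu-central-1"
  profile = "workload-account"
}
```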

Actual Behavior

Terraform uses only the environment credentials for every API call, regardless of whether the call relates to the backend or to resource creation.

Steps to Reproduce

1. awsume aws-profile-xxx
2. terraform init, then plan or apply

Terraform will always use the assumed environment credentials, even though a profile is configured for the backend.

Labels: backend/s3, bug, v0.12

Most helpful comment

This is affecting me as well.

All 11 comments

This is related to the aws-sdk-go-base wrapper, which does not pass the profile when creating the session. See: https://github.com/hashicorp/aws-sdk-go-base/issues/19

This is affecting me as well.

I am also trying to use a profile for the backend configuration and terraform is not honoring it.

I was able to work around this by making sure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables were not set.

This affects me as well: Terraform always picks the default profile, so I am unable to access the backend.
I don't have the AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment variables set either.

According to this old thread:
https://github.com/hashicorp/terraform/issues/18402
I worked around this by using:
AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=profile_name terraform init

According to this old thread (#18402), I worked around this by using:
AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=profile_name terraform init

But this only works if the backend is in the same account as the resources.
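For cross-account setups, another possible workaround (a sketch on my part using Terraform's documented partial backend configuration, not something confirmed in this thread) is to pass the backend profile at init time and keep a different profile for the provider:

```shell
# Hypothetical profile names. The backend profile is supplied via partial
# backend configuration instead of an environment variable.
AWS_SDK_LOAD_CONFIG=1 terraform init -backend-config="profile=state-account"

# Resource and data-source calls can then use a different profile.
AWS_PROFILE=workload-account terraform plan
```
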

According to this old thread (#18402), I worked around this by using:
AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=profile_name terraform init

I couldn't get this working at all with my config. My backend looks like this:

terraform {
  backend "s3" {
    profile = "profile_with_role_arn"
    region = "<region>"
    bucket = "<bucket name>"
    dynamodb_table = "<dynamodb table>"
    encrypt = "true"
    kms_key_id = "<key arn>"
    key = "<key>"
  }
}

My AWS config file:

[profile profile_with_role_arn]
role_arn = <role arn>
source_profile = profile_with_role_arn
output = json
region = <region>

My AWS credentials file:

[profile_with_role_arn]
aws_access_key_id = XXXXXXXX
aws_secret_access_key = YYYYYYYYY
aws_session_token = ZZZZZZZZZZ

Running terraform init always gives me:

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
    status code: 403, request id: ABC123, host id: ...

I have tried various combinations of export AWS_SDK_LOAD_CONFIG=1 and export AWS_PROFILE=profile_with_role_arn, but I always get the same error. Adding profile = "profile_with_role_arn" to the backend config fixes the issue, but this is not a workable solution for me right now.
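For comparison, the conventional layout for a role-assuming profile keeps the static credentials in a separately named base profile, with source_profile pointing at that name (all names and the role ARN below are hypothetical):

```ini
# ~/.aws/config
[profile base]
region = eu-central-1
output = json

# "with-role" assumes a role using the static credentials of "base";
# the role ARN here is a placeholder.
[profile with-role]
role_arn = arn:aws:iam::123456789012:role/ExampleRole
source_profile = base

# ~/.aws/credentials
[base]
aws_access_key_id = XXXXXXXX
aws_secret_access_key = YYYYYYYYY
```

The SDK first loads the static credentials named by source_profile and then calls AssumeRole with them to obtain the role's temporary credentials.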


Anyone found a proper solution to this problem yet?

I'm storing my backend state in a client's "master" account, which I can access via a role_arn. When trying to provision their UAT environment, I'm now unable to do so.

This is the setup:

[primary]
aws_access_key_id=
aws_secret_access_key=

[client-primary]
source_profile = primary
role_arn = arn:aws:iam::blah:role/OrganizationAccountAccessRole

[client-uat]
source_profile = primary
role_arn = arn:aws:iam::blah:role/OrganizationAccountAccessRole

When provisioning resources on client-uat, I need the backend to be on client-primary. As it stands, if we're forced to use environment variables, this is not possible.
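The desired cross-account split, using the profile names from the setup above (bucket name and region are placeholders), would look like:

```hcl
terraform {
  backend "s3" {
    bucket  = "client-state-bucket"   # hypothetical bucket in the master account
    key     = "uat/terraform.tfstate"
    region  = "eu-central-1"
    profile = "client-primary"        # state lives in the master account
  }
}

provider "aws" {
  region  = "eu-central-1"
  profile = "client-uat"              # resources go to the UAT account
}
```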

Multiple fixes for credential ordering, automatic use of the AWS shared configuration file when present, and the S3 backend's profile handling have been merged and will be released in version 0.13.0-beta2 of Terraform.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
