Terraform: S3 Remote State and Credentials in non-default profiles

Created on 12 Apr 2017 · 54 Comments · Source: hashicorp/terraform

Terraform Version

0.9.2

Affected Resource(s)

terraform s3 remote state backend

Terraform Configuration Files

~/.aws/credentials (this is the entire file, no other entries):

[non-default-profile]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>

main.tf

provider "aws" {
    region = "us-west-2"
    profile = "non-default-profile"
}

terraform {
  backend "s3" {
    bucket = "some-s3-bucket"
    key    = "backend/test"
    region = "us-west-2"
    profile = "non-default-profile"
  }
}

Debug Output

NA

Panic Output

NA

Expected Behavior


The expectation is that I should be able to use a non-default profile credential to complete the backend init (terraform init) call.

Actual Behavior


The command fails with the following error:

Error inspecting state in "s3": AccessDenied: Access Denied
        status code: 403, request id: 7286696E38465508

Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.

If I simply move the same credentials under the default profile in the standard shared credentials file, it works without issue.

Steps to Reproduce

terraform init

Important Factoids

None.

References

None that I was able to find for 0.9.2.

backend/s3 bug

Most helpful comment

OK, so after deleting state (the local .terraform folder and the state file in S3) and re-running terraform init, it worked with custom credentials and no default credentials set. Not sure how someone with this issue in production would handle it; obviously deleting state isn't an option.

All 54 comments

Hi @jbruett,

Unfortunately, the backend had its own code to configure the AWS client, taken from the old S3 remote state, which it looks like didn't properly configure the profile.

The S3 backend in the next Terraform release will share the configuration code with the AWS provider, which solves this issue.

@jbardin, can you confirm whether that's 0.9.3 or 0.9.4?

@jbruett, it should actually be released in 0.9.3. I tested it myself on master before the release today. If it's still not working for you we can reopen this and continue investigating.

@jbardin still seeing the same error with the same workaround.

@jbruett, thanks for the update. Unfortunately 0.9.3 fixed the issue for me, so I'm not sure how to reproduce it yet. The fact that you're getting AccessDenied indicates you are loading credentials of some sort, just not the ones you expect.

With 0.9.3 you will be able to see the AWS debug messages if you run TF_LOG=trace terraform init. You should be able to see what the Auth Provider is and which account credentials it ended up using, which may help us narrow down what's going on.
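For readers following along, a minimal sketch of capturing that trace output and filtering it for auth-related lines (the grep pattern and log file name are only illustrative, not part of Terraform):

````sh
# Capture the full trace log and keep just the auth-related lines for a quick look.
# The grep filter is a convenience; inspect the full init-trace.log if in doubt.
TF_LOG=trace terraform init 2>&1 | tee init-trace.log | grep -i "auth"
````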

Just updated from 0.9.2 to 0.9.3 and tested. Still seeing the same behavior.

As soon as I change one of my AWS profiles to default and then comment out both profile lines in the provider and backend, it works fine.

Error loading previously configured backend: 
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

~/.aws/credentials

[staging]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>
[production]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>

main.tf

provider "aws" {
  region    = "${var.aws_region}"
  profile   = "staging"
}
terraform {
  backend "s3" {
    bucket  = "some-bucket-name"
    key     = "staging.tfstate"
    region  = "us-west-2"
    profile = "staging"
  }
}

@jbardin I don't have AWS_PROFILE env var set, and no default profile configured in the shared creds file.

@jbruett, yes, but you _are_ trying to access the bucket with credentials of some sort. Can you look at the debug output and see what they are, so we may be able to determine where they are being loaded from? This may still be related to what @SeanLeftBelow is seeing, but I can't be sure.

@SeanLeftBelow, if you have changed the credentials required to access the state, and you know that the state file itself is unchanged in the same location, you need to select "no" at the prompt to copy the state. I'll look into the wording there and see if we can make it clearer that one may want to choose "no" if there is no change to the state data or location.

@jbardin I don't get that far on the terraform init.
The error I posted above is the one I get when both of the profiles in ~/.aws/credentials are set to custom (non-default) names.

OK, so after deleting state (the local .terraform folder and the state file in S3) and re-running terraform init, it worked with custom credentials and no default credentials set. Not sure how someone with this issue in production would handle it; obviously deleting state isn't an option.

Thanks for the feedback @SeanLeftBelow and @jbruett,

It seems both of these are manifestations of the same underlying issue. If the credentials that were stored from initialization are no longer valid for any reason (including just changing the credentials file), you may not be able to init again because the stored backend will always fail.

If the credentials exist but are incorrect, you can bypass the failures by avoiding the state migration. If the credentials no longer exist, the aws client will fail to initialize early on in the process.

I think we may want to add something like a -reconfigure flag for the case where a backend config changes but we don't want Terraform to attempt any sort of state migration, or even load the saved configuration at all.

Hello,

I am having this issue as well when trying to use the "s3" backend.

My Terraform version is 0.9.3.

Using TF_LOG=trace terraform init I get:

2017/04/24 17:41:53 [DEBUG] New state was assigned lineage "e4bfa1b9-2cd7-4653-95d7-86592c8d9132"
2017/04/24 17:41:53 [INFO] Building AWS region structure
2017/04/24 17:41:53 [INFO] Building AWS auth structure
2017/04/24 17:41:53 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id

Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error
then run this command again.

I am experiencing the same issue whilst trying to update from 0.8.7 -> 0.9.6.
Since this is a production environment, deleting the state file is not an option.

I have specified the profile name to be used in a similar fashion to how @SeanLeftBelow specifies above. The trace log shows that it completely ignores the profile parameter, resulting in "Access Denied".

I realized that if I define the AWS credentials as environment variables, I can use the S3 backend.

Give it a try and see if it works for you.
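If you want to try this workaround, here is a minimal sketch of the environment-variable approach (key values and region are placeholders):

````sh
# Export static credentials so the S3 backend picks them up directly,
# bypassing profile lookup in ~/.aws/credentials. Values are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret"
export AWS_DEFAULT_REGION="us-west-2"

terraform init
````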

FWIW, I got here searching for the same error - in my case it was because I created the S3 bucket in a different region (the one in my ~/.aws/config) than the one specified in the backend config. :man_facepalming:

I'm currently having this issue with 0.9.8. I'm circumventing by using direnv to automatically set AWS_PROFILE variables for specific directories.

FYI: Another reason you may get the No valid credential sources found for AWS Provider error is if the AWS credential profile name you specified in your .tf or in your backend doesn't exactly match something in square brackets in ~/.aws/credentials.

FYI: Another reason you may get the No valid credential sources found for AWS Provider error is if the AWS credential profile name you specified in your .tf or in your backend doesn't exactly match something in square brackets in ~/.aws/credentials.

Bingo! For some reason, I didn't even have a .aws directory. For anyone who reached here searching for a solution: go ahead and create a ~/.aws/credentials file and put your credentials in this format:

[default]
aws_access_key_id =
aws_secret_access_key =

Bingo! For some reason, I didn't even have a .aws directory. For anyone who reached here searching for a solution: go ahead and create a ~/.aws/credentials file and put your credentials in this format:

[default]
aws_access_key_id =
aws_secret_access_key =

Right, the issue here is that when you use a named profile (aka not default), it's not being used by TF. I have triple-checked and verified that the names match up between my .tf file and the profile name in my ~/.aws/credentials file, and it still is not working correctly.

As a workaround I'm using direnv and setting my AWS_PROFILE environment variable to the profile I want on a per-directory basis.
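For anyone wanting to copy the direnv approach, a minimal .envrc sketch (the profile name is an example) would be:

````sh
# .envrc in the Terraform project directory; run `direnv allow` once to activate.
# direnv exports this variable automatically whenever you cd into the directory.
export AWS_PROFILE=non-default-profile
````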

Just noting this is related to #5839, right?

@combinatorist:

That issue is regarding accessing the remote state resource, which is now a data source. There are some comments about it working with different terraform remote configurations, but the remote backend config has completely changed since then too. This issue is primarily about access to the S3 remote state backend.

There may be hints there though, as I'm suspecting some behavior of the AWS SDK itself here.
We haven't been able to replicate setting a named profile and having Terraform not read it from the credentials file, and I routinely use only non-default profiles for testing myself.

@jbardin, TBH, these tickets seem really similar to the extent that I understand terraform and your comment, but if you think the difference should be clear to others, then I will catch up eventually.

In case anyone else is stuck, here's the extent of my understanding: both tickets mention (AWS) S3 remote state in their title. Both discuss the mysterious role of the default (AWS) profile in getting credentials for the remote state (as opposed to the profile specified to deploy the actual infrastructure). I picked up a hint that remote state as a data source is different and possibly(?), partially(?) replaces the terraform backend. Are these different functionalities or just different syntaxes / paradigms to accomplish the same thing, i.e. to back up terraform state remotely?

And maybe to clarify this discussion, here are 3 different syntaxes that I easily get confused by:

Example 1: "terraform (remote state) _backend_", from this ticket

terraform {
  backend "s3" {
    bucket = "some-s3-bucket"
    key    = "backend/test"
    region = "us-west-2"
    profile = "non-default-profile"
  }
}

Example 2: i.e. "remote state _resource_", from another ticket:

resource "terraform_remote_state" "ops" {
    backend = "s3"
    config {
        bucket = "eu3-terraform-ops"
        key = "terraform.tfstate"
        region = "${var.region}"
    }
}

Example 3: i.e. "remote state _data source_", from the docs:

data "terraform_remote_state" "vpc" {
  backend = "atlas"
  config {
    name = "hashicorp/vpc-prod"
  }
}

I get confused because I've only used option 1, and since terraform _won't_ build the backend for me, it seemed like it was effectively a data source. I haven't used option 2, but since I don't see how a terraform project _could_ build its own remote state / backend, I just assume it was also just a data source. So, best I can understand, option 3 just calls it what it's always been. They may be implemented in different versions in different ways (i.e. different bugs), but they serve the same use case, and I should always use option 3 (assuming its peculiar bugs don't affect my application / I can trust those bugs will be fixed soon).

Am I close?

@combinatorist, I agree this can be confusing because of the rapid evolution of Terraform over the past year or so.

The term "remote state" can be used in two different ways, and are very different concepts. You can configure a backend to store your state "remotely" (option 1, documented here), or you can use another remote state as a data source to reference its resources (option 3).

The resource "terraform_remote_state" configuration was deprecated quite a while ago in favor of defining it as a data source (option 3), but they are used conceptually in the same way.

That older issue also has many references to "remote state" in terraform remote config, which is no longer used in terraform, and has been completely superseded by the "backend" configuration.

Ok, thanks for clarifying, @jbardin - that helps a lot!

A key difference I just picked up from @ehumphrey-axial is that the "remote state backend" (option 1) lets you sync your current state remotely, so if you change your infrastructure, it will _write_ your changes into that remote state. However, the "remote state data source" (option 3) lets you pull in information about another terraform project. In this case, it can read from that project's remote state, but it won't ever write to it, because it's not trying to deploy or manage infrastructure for that other project.

It was confusing because neither of them creates the s3 bucket* that holds your remote state, so they seemed like the same thing. But, option 1 _does write_ into that bucket while option 2 can only _read_ from that bucket.

*I'm selfishly talking about the S3 backend, but I'm assuming similar behavior applies to other backends.


Finally, I noticed in your last comment that there's another thing, which you called:

"remote state" in terraform remote config

which we could call option 0. If I understand correctly, then, option 1 replaced option 0 for managing the current project's remote state, and option 3 replaced option 2 for reading another project's (necessarily remote) state.

So this seems 100% reproducible.

I have a ~/.aws/credentials like so :

[default]
key=test1
secret=secret1

[newpro]
key=test2
secret=secret2

I have no env vars set and start with a .tf like so :

provider "aws" {
  region     = "us-west-1"
  profile    = "newpro"
}

Any resources added here, per the debug output, formulate requests with a key of test2.

As soon as I add the backend to my .tf file, it changes to use the test1 key:

terraform {
  required_version = ">= 0.10.0"
  backend "s3" {
    region          = "us-west-1"
    bucket          = "my-tfstate"
    key             = "providers/aws/terraform.tfstate"
    encrypt         = "true"
    dynamodb_table  = "my-tflock"
    acl               = "bucket-owner-full-control"
    profile         = "newpro"
    access_key      = "test2"
    secret_key      = "secret2"
  }
}

I added profile, access_key, and secret_key to the backend block with no change. It would be great if it used the same config as the provider by default, but that's an enhancement I suppose; for now I'm just trying to get this default behavior sorted.

Hi @dchesterman,

Thanks, but unfortunately I still can't reproduce that from a clean setup. The default profile in my tests is just never used, even if I revoke various credentials.

Is it possible to get some information about your setup?

  • What command is failing, and what is the specific error?
  • Are these profiles referencing standard, vanilla aws users?
  • Can you check the log output to see what credentials the SDK is attempting to use, or possibly find more info on why auth is failing?
  • Does the behavior change if you use shared_credentials_file to point to a different credentials file? (A sketch follows this comment.)

Thanks again!
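Regarding the shared_credentials_file question above, a hedged sketch of a backend block pointing at an explicit credentials file (bucket, profile, and path are illustrative) could look like this:

````hcl
terraform {
  backend "s3" {
    bucket  = "my-tfstate"
    key     = "providers/aws/terraform.tfstate"
    region  = "us-west-1"
    profile = "newpro"

    # Point the backend at an explicit credentials file to rule out
    # lookup-path problems; the path below is just an example.
    shared_credentials_file = "/home/me/.aws/credentials-alt"
  }
}
````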

I started having this issue as well when I upgraded from an earlier Terraform to 0.10.0, and I was able to get it to work by editing the existing "terraform.tfstate" file and then running terraform init again.

Error I was getting:

Error loading previously configured backend:
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

My Solution:

What was happening for me was the terraform.tfstate was located in a subdirectory: .terraform. Running init wanted to pull it down a directory (putting it in ./terraform.tfstate, the default location, vs. .terraform/terraform.tfstate, where it had been initially configured).

It couldn't use the s3 backend already in the tf files because terraform was seeing that code as the "new" s3 backend config. And since the AWS credential profile wasn't properly configured in the .terraform/terraform.tfstate, it couldn't authenticate. Which is why it would work if you changed the profile to default: The old backend state used the default profile to authenticate, because the profile field was missing from the old backend state.

Potential fixes:

Here's two potential fixes for those who are stuck (or to help determine the best steps for how to deal with it in the next Terraform version):

  1. Add the profile manually to the existing tfstate file (see example below)
  2. Point to your existing statefile with a flag or config change if your terraform.tfstate is not in the default location.

Example tfstate file with added profile:

````json
{
[...]
"backend": {
"type": "s3",
"config": {
"bucket": "my-terraform",
"dynamodb_table": "terraform_state_lock",
"encrypt": true,
"key": "statefile",
/* Add this Line */
"profile": "an_aws_profile",
"region": "us-west-2",
"shared_credentials_file": "$HOME/.aws/credentials"
},
"hash": 0123456789
},
[...]
}

````

Hi @ThatGerber,

Thanks for the feedback here!

It is becoming clearer that keeping the name of Terraform's private data file as terraform.tfstate within .terraform/ was a mistake (they share the same data structures, so the default name was used). This file is not related to the actual state of the managed resources, and Terraform should never move it to ./terraform.tfstate or use it as your "state". The usual disclaimer about editing internal bits applies: the usage of that file isn't guaranteed to remain consistent across releases, and editing it directly may produce unexpected results.

As for this use case, I think it's covered by init -reconfigure, which will initialize a new backend configuration, ignoring the saved config.
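As a concrete illustration of that flag (run from the working directory containing the configuration):

````sh
# Ignore the backend settings cached in .terraform/terraform.tfstate and
# configure the backend purely from the current *.tf files.
terraform init -reconfigure
````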

I encountered (and solved) this issue with 0.10.2. I was attempting to migrate old remote state to new remote state, using terraform init.

In my .terraform/terraform.tfstate file, which was created with 0.8.8, I had both remote and backend keys, both configured for S3.

However, the remote config did _not_ have profile (and role) set - which ultimately meant Terraform 0.10.2 was not assuming my non-default profile when trying to read the _old_ state.

I just had the same issue with 0.10.2 and my AWS credentials file containing a default profile, as well as an additional one.

As already reported, it looks like when declaring the remote backend, Terraform ignores the "profile" directive and only picks up the default profile.

I was able to prove this - and fix my problem as well - simply by naming my additional profile "default" and giving the previously-default one another name. I was lucky, as in my case it was the right thing to do anyway :D

Worth noting that you can sidestep this bug by calling terraform with a profile name in the AWS_PROFILE environment variable.

ie:

AWS_PROFILE=none-default-profile terraform init works but terraform init does not

I have the following in my backend config:

terraform {
    backend "s3" {
        bucket = "bucket"
        key    = "tfstate/production"
        region = "eu-west-1"
        profile = "none-default-profile"
    }
}

@TimJDFletcher Unfortunately your solution doesn't work for me (I'm using 0.10.7). My config is identical to yours, yet when I run it in a clean environment I receive:

root@user:/app# AWS_PROFILE=default ./terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
    status code: 403, request id: 0CC6149E9728592F, host id: NspHQGKR2o7Po+nKn6T7RF3esMGI3/OAKY0YNXvnop55aZ9HDKTAifTGZsx96zdUghRMN6IdBWw=

This same error occurs whether or not I change or hardcode any of the profile or config options (access_key, secret_key, profile, etc.), including the solutions posted in #5839. None of the solutions others posted work for me.

Does anyone have any successful solutions? Am I perhaps missing a permission setting on AWS?

I don't have a default aws profile defined at all.

My S3 access is cross-account access, with the following policy applied at the bucket level:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567890:user/terraform"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567890:user/terraform"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/production*"
            ]
        }
    ]
}

Unfortunately this issue still exists with Terraform 0.10.7 + S3 backend storage.
Nothing other than the default profile can be used.

As I explained, I do not have a default profile defined at all, and using an environment variable I can at least tell the backend which profile to use, thus:

AWS_PROFILE=none-default-profile terraform init works but terraform init does not, because I do not have a default profile set up at all.

I managed to make this work by adding s3:ListObjects and s3:ListBucket to my instance profile policy

@fcisneros would you be so kind as to guide me through your process, please? I'd like to simulate it :)

Hi @anugnes I think that the policy I posted above is the minimum you can apply to a Terraform remote state bucket. The policy restricts terraform to a single label (basically a directory) within a single bucket.

I'm not sure if you can remove the delete-object permission and just allow terraform to overwrite the state files, relying on S3 object versioning instead.

@anugnes I think this is related to this issue, so the policy shown by @TimJDFletcher (similar to the one I ended up configuring) should do the trick

I've also been flailing with the aws provider using non-default profiles. FWIW, here's my use case:

I have multiple AWS accounts and want to deploy my Terraform resources to each account, storing their state in an S3 bucket in the given account. I'm attempting to do this with the following setup:

  • A Terraform Workspace for each AWS account
  • A profile for each AWS account in ~/.aws/credentials
  • I use a wrapper script to format the provider and backend with the appropriate aws profile and bucket.

Is there a better way to achieve this? I'm continuously stuck in weird limbo states when attempting to create/switch workspaces and notice Terraform is always falling back to the default profile.
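One possible alternative to templating the backend block is Terraform's partial backend configuration, passing the per-account values at init time; a sketch (bucket and profile names are hypothetical) might be:

````sh
# Leave bucket/profile out of the backend "s3" block in *.tf and pass them
# per account at init time; re-run init (or init -reconfigure) when switching.
terraform init \
  -backend-config="bucket=staging-tfstate" \
  -backend-config="profile=staging" \
  -backend-config="region=us-west-2"
````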

I'd try removing or renaming your default account. To retain the functionality of just typing aws xxx yyy, set the env var AWS_PROFILE.

I just found my way to this issue when I came to the realization that I need to prefix all of my terraform commands with AWS_PROFILE=staging. I would much rather rely on the tf configuration to do this than for me to remember to type this, or export it. Major thanks to whoever fixes this very important feature.

Can confirm that with Terraform v0.11.1 I still get the same issue. My current workaround is, as others already said, to set the environment variable before using Terraform: export AWS_PROFILE=myCustomProfile. Hopefully someone fixes this soon so we can use the profile configuration in .tf files rather than an environment variable.

This should be fixable. I will investigate and see where I get to.

@jbardin

The S3 backend in the next Terraform release will be sharing the configuration code with the aws provider which solves this issue.

Does it actually work? Just started a new project, and it seems to always look at the default credentials from ~/.aws/credentials during terraform init, despite having the aws provider configured. I've looked at the output of TF_LOG=trace to confirm this.

I even hardcoded my secrets for the aws provider to make sure it's not an issue related to interpolation of variables.

Only when I explicitly use -backend-config during terraform init is the local .terraform dir created and the credentials stored within terraform.tfstate.

I'm using ver 0.10.6.

This is still present on 0.11.1 and AWS provider 1.7. If I specify config with -backend-config, it works, otherwise it breaks. The on-screen prompts to fill in the config do not work.

@slykar, I haven't had any reports that it doesn't work. You say "despite having the aws provider configured", but the provider configuration is completely separate from the backend configuration. Are you certain that you have the backend configured correctly too?

To those who have added that this is still not working for them, I sympathize that there may still be an undiagnosed issue, but the example configurations are known to work with various user profiles, and numerous production use cases are also working without issue.

What we really need here is a _complete_ and _verifiable_ example showing that the incorrect user profile is being used for init. Once we have a way to reproduce the issue it will be much easier to fix the root cause.

@jbardin I have everything set up and it does not work, both provider and backend:

provider "aws" {
  region  = "us-east-1"
  profile = "dev"
}

terraform {
  backend "s3" {
    bucket  = "xyz"
    key     = "terraform.tfstate"
    profile = "dev"
    region  = "us-east-1"
  }
}

And when I run terraform init I still get:

Error loading previously configured backend: 
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

Terraform 0.11.2 and the newest AWS provider, 1.7. As you can see, the error message even talks about missing AWS Provider credentials, even though it is the backend, as you mentioned.

Additionally, if you want to parametrize the profile, you can't really use string interpolation with variables in the backend configuration, which is also inconvenient:

Error loading backend config: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.
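A common way around the no-interpolation restriction is partial configuration: leave the profile out of the backend block and supply it from a small per-environment file at init time. A sketch, with an illustrative file name and contents:

````sh
# dev.backend.tfvars (illustrative name) contains plain key = value pairs, e.g.:
#   profile = "dev"
#   bucket  = "xyz"
# Pass the file to init instead of interpolating variables in the backend block.
terraform init -backend-config=dev.backend.tfvars
````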

@bowczarek, The error starts with Error loading previously configured backend:, which means that the stored configuration doesn't contain the aws credentials, probably because you were previously using the environment variables. The -reconfigure option mentioned above was added for this case, where you need to set the backend configuration while ignoring the previous config.

I'm getting the same thing with brand-new state, on a first-time init. It breaks with the interactive prompts but not when using -backend-config. However, init seems to work if the state is already there.

@jbardin thanks! It's actually working now. This option is documented in the Backend Initialization section rather than under General Options, which is probably why I didn't notice it or forgot about it when I read the docs some time ago.

Same problem. The profile option in the backend configuration is ignored; only setting the AWS_PROFILE env var helps.

Hi everyone,

As I've stated above, what we need here is a complete, reproducible example that demonstrates the stated issue. Any examples I have seen have been resolved as other issues, but I do accept that there may be edge cases that have yet to be covered.

This issue is getting unwieldy, and has too many diversions for someone trying to read through. I'm going to lock this for now, but keep it open to hopefully make it easier to find. If anyone comes across this with the same issue, please feel free to open a new one along with the configuration to reproduce it.

Hello! :robot:

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in _this_ issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!
