Terraform: Error loading state: AccessDenied: Access Denied (AWS S3 backend)

Created on 6 Sep 2018 · 60 comments · Source: hashicorp/terraform

Terraform Version

Terraform v0.11.8

Terraform Configuration Files

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  region                  = "${var.base["region"]}"
}

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "backend/state"
    region = "us-east-1"
    encrypt = true
    shared_credentials_file = "~/.aws/credentials"
  }
}

Debug Output

Crash Output

oerp@oerp:~/src/timefordev-ias$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
    status code: 403, request id: somerequestid, host id: someid

Also tried with profile. Same thing.

And when I try this:

oerp@oerp:~/src/timefordev-ias$ terraform workspace list
AccessDenied: Access Denied
    status code: 403, request id: aaaa, host id: bbb

Expected Behavior

Actual Behavior

Steps to Reproduce

Additional Context


The user that is trying to access S3 has these policies attached:

AmazonEC2FullAccess
AmazonS3FullAccess

I also tried adding AdministratorAccess, but it did not change anything.
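
For reference, a minimal sketch of the S3 permissions the s3 backend itself needs (bucket and key taken from the configuration above; adjust them to your own names, and add DynamoDB permissions if you use state locking):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/backend/state"
    }
  ]
}
```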

References


https://github.com/hashicorp/terraform/issues/13589

backend/s3

Most helpful comment

Got this same error and I want to provide the workaround that worked for me
"I deleted the .terraform folder and ran the init again- and it worked" ..... Chukwunonso Agbo

All 60 comments

I am encountering this same issue, except I am using the default profile without a shared credentials file.

deibp001@WW-AM04086471 ~/repos/wdw-cast-dots-s3 (develop) $ terraform init -backend-config="bucket=terraform-wdpr-sandbox-${MY_TF_ENV}" -backend-config="key=wdw-cast/us-east-1/dots/s3/${MY_TF_ENV}.tfstate"

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.

Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: DBF975E76F0816AD, host id: OExcuKc+cWTmECVC6X1I3ct6cJTFbDda4dkw9XdX1BrbC1LU9pVlKfQNuOc5CYqltYRCvaeK8Jg=

Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.

Does anyone know a workaround here? I read that it somehow works with environment variables, but I don't get how to use them so it would work.

@oerp-odoo if you have aws credentials set as environment variables, those will override whatever is set in your terraform configuration (including the credentials file) - is that what you meant?

Another common confusion I've seen is when the AWS credentials used for the backend (the s3 bucket) are not the same credentials used for the AWS provider.
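
One way to rule that out is to point the backend at the same named profile the provider uses; the s3 backend accepts profile and shared_credentials_file settings of its own. A minimal sketch, with "myprofile" as a placeholder profile name:

```
provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "myprofile" # same named profile as the backend below
}

terraform {
  backend "s3" {
    bucket                  = "mybucket"
    key                     = "backend/state"
    region                  = "us-east-1"
    encrypt                 = true
    shared_credentials_file = "~/.aws/credentials"
    # the backend resolves its credentials separately from the provider block
    profile                 = "myprofile"
  }
}
```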

What I want is simply to use S3 for the backend. I'm using the same credentials for the
aws provider and the backend. I get access for the provider, but not for the backend,
so I don't get why S3 denies access if it's the same credentials.

Regarding environment variables, I probably don't want that, but I'm looking for a workaround so it would at least let me use remote state somehow.

My situation is similar. My AWS environment uses SAML authentication with assumed roles. For CLI we get a token, and set environment variables. This has been working as recently as yesterday in account A. However, when I switched to account B, Terraform is no longer able to connect to the remote S3 state. This is using an administrator (i.e. full permissions) role.

TF_LOG=DEBUG terraform init -backend-config=env/backend-prod.tfvars
2018/09/20 10:26:55 [INFO] Terraform version: 0.11.8
2018/09/20 10:26:55 [INFO] Go runtime version: go1.10.3
2018/09/20 10:26:55 [INFO] CLI args: []string{"/usr/local/Cellar/terraform/0.11.8/bin/terraform", "init", "-backend-config=env/backend-prod.tfvars"}
...
2018/09/20 10:26:55 [INFO] CLI command args: []string{"init", "-backend-config=env/backend-prod.tfvars"}
2018/09/20 10:26:55 [DEBUG] command: loading backend config file: /Users/cgwong/workspace/terraform/superset-reporting-service
Initializing modules...
- module.asg
2018/09/20 10:26:55 [DEBUG] found local version "2.8.0" for module terraform-aws-modules/autoscaling/aws
2018/09/20 10:26:55 [DEBUG] matched "terraform-aws-modules/autoscaling/aws" version 2.8.0 for
- module.rds
2018/09/20 10:26:55 [DEBUG] found local version "1.21.0" for module terraform-aws-modules/rds/aws
2018/09/20 10:26:55 [DEBUG] matched "terraform-aws-modules/rds/aws" version 1.21.0 for
- module.rds.db_subnet_group
- module.rds.db_parameter_group
- module.rds.db_option_group
- module.rds.db_instance

2018/09/20 10:26:55 [DEBUG] command: adding extra backend config from CLI
Initializing the backend...
2018/09/20 10:26:55 [WARN] command: backend config change! saved: 9345827190033900985, new: 1347110252068987742
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
2018/09/20 10:26:55 [INFO] Building AWS region structure
2018/09/20 10:26:55 [INFO] Building AWS auth structure
has changed. Terraform will now check for existing state in the backends.


2018/09/20 10:26:55 [INFO] Setting AWS metadata API timeout to 100ms
2018/09/20 10:26:56 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2018/09/20 10:26:56 [INFO] AWS Auth provider used: "EnvProvider"
2018/09/20 10:26:56 [INFO] Initializing DeviceFarm SDK connection
...
2018/09/20 10:27:01 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BCCC2ADD33902C31</RequestId><HostId>pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=</HostId></Error>
2018/09/20 10:27:01 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
    status code: 403, request id: BCCC2ADD33902C31, host id: pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=
2018/09/20 10:27:01 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting states in the "s3" backend:
    AccessDenied: Access Denied
    status code: 403, request id: BCCC2ADD33902C31, host id: pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=

Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.

Really strange since the role has full access, is working in account A, and was working in account B when last I checked. I am still digging into this and will be using an EC2 instance as a test/workaround.

Using an EC2 instance (admin access) fails more silently, though the error is different:

2018/09/20 16:24:34 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>loc360-superset/prod/terraform.tfstate</Key><RequestId>3EDB0ACE5E60CAED</RequestId><HostId>9URnCiboGfgOwd44LwqU3ZcDItf2ZZS3vctjdaVSIval2lUSHHLbBiTvXy0hXqoEM9FnAvNhrCA=</HostId></Error>
2018/09/20 16:24:34 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/GetObject failed, not retrying, error NoSuchKey: The specified key does not exist.

Even after I create the folder/key it gives the same error, though again, unless I enable DEBUG mode it says it was successful.

I started getting the same issue. I have the provider set up in the main script as below:
provider "aws" {} But I guess tf won't be referring to the main script while running init.

I am calling terraform by exporting the current profile as the AWS_PROFILE value and subsequently running
terraform init

It is working well for one user but not for another. I changed the profile of the other user to have admin-level access to both the dynamodb table and the S3 bucket. Still no luck.

Disabled encrypt in the terraform config, still no luck. Was wondering if the state file gets encrypted specific to the profile user.

@oerp-odoo

  • try running aws sts get-caller-identity and aws sts get-caller-identity --profile=desiredProfile and check which profile is being used for each call.
  • Check the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN; if present, set them to empty or unset them.
  • Check that the env variable AWS_PROFILE is configured correctly.

With the above points addressed you could simply remove the shared credentials file from both main.tf and terraform.tf. Just provide the profile in the provider block, hardcoded or like ${var.profile} (you need to declare the variable profile in variables.tfvars and set the env var TF_VAR_profile with the desired profile name).

See if that works.
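
A rough shell sketch of those checks (the profile name is a placeholder):

```
# Which identity does the default credential chain resolve to?
aws sts get-caller-identity
# And which identity does the named profile resolve to?
aws sts get-caller-identity --profile=desiredProfile

# Clear any stray static credentials that would override the profile
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Check (and, if needed, set) the profile Terraform should pick up
echo "$AWS_PROFILE"
export AWS_PROFILE=desiredProfile
```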

EDIT: Follow-up

My issue proved to have nothing to do with terraform. The instance profile lacked several key S3 permissions in its IAM policy. Not sure if terraform even gets useful info back from AWS on those errors to produce meaningful error states. Regardless, if you're seeing this, try looking over your S3
policies to make sure you can ListBucket, GetObject, PutObject, and DeleteObject.


Has anyone had additional luck with this issue? I have a scenario where my s3 backend receives the following error

Failed to save state: failed to upload state: AccessDenied: Access Denied
status code: 403, request id: REDACTED, host id: REDACTED

Error: Failed to persist state to backend.

my provider:

provider "aws" {
    region = "${var.aws_region}"
}

terraform {
    backend "s3" {
    bucket         = "REDACTED"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "REDACTED"
    encrypt        = true
  }
}

My init and plan run just fine. I've seen this work when I'm using AWS Access key / Secret Key, but in this case my worker node is using an assumed role. I can run any aws cli commands from the command-line just fine. Perhaps Terraform has to be specially configured when running from a worker node that infers its permissions from an instance profile?
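
In case it helps, the s3 backend can also be told to assume a role itself via role_arn, since it builds its own credential chain separately from the provider. A hedged sketch (the role ARN is a placeholder):

```
terraform {
  backend "s3" {
    bucket         = "REDACTED"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "REDACTED"
    encrypt        = true
    # placeholder role; the backend assumes it using the instance profile credentials
    role_arn       = "arn:aws:iam::123456789012:role/terraform-state-access"
  }
}
```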

I have a similar issue, but only on an AWS EC2 instance. I use a shared credentials file.

ubuntu@ip-172-17-2-175:~/test$ terraform init -reconfigure

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
    status code: 403, request id: 5CCE9150B36AA433, host id: vkguLMArsd3MdiP4JKx1AUnFddaceg+v1UfAacFpJbjzRZ9hM7oTD6iu2QRpoppajhbTdHGfRFM=

I tested on different cloud providers and it works well. I think this issue only happens on AWS EC2 instances.

Terraform conf. file
```
terraform {
  backend "s3" {
    bucket                  = "bucket_name"
    key                     = ".../terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = true
    profile                 = "default"
    dynamodb_table          = "terraform-table"
    shared_credentials_file = "$HOME/.aws/credentials"
  }
}
```
Terraform version: 0.11.10

@mildwonkey

To fix this issue (Error loading state: AccessDenied: Access Denied) you need to attach the required policies to the role for the EC2 instance; in my case, S3 and DynamoDB.

In my case, I was using the wrong AWS credentials, which makes sense :)

A good start is to run terraform init with DEBUG set,
TF_LOG=DEBUG terraform init

If you see the 403 comes from a ListObjects action over a bucket in a cross-account operation, then you may just need to get the ACLs sorted in the destination bucket (for the tfstate). So go to that bucket, Permissions, ACLs, and under Other Accounts or similar, add the Canonical ID of the calling account (the one configured in your .aws conf). Then a terraform init (cross-account, remember) won't error.
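
A rough CLI sketch for finding the canonical ID and inspecting the bucket's grants (bucket name is a placeholder):

```
# Canonical user ID of the account the calling credentials belong to
aws s3api list-buckets --query Owner.ID --output text

# Inspect the existing ACL grants on the state bucket (run as the bucket-owner account)
aws s3api get-bucket-acl --bucket my-tfstate-bucket
```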

In my case, it was caused by Terraform always picking up the [default] block credentials from the ~/.aws/credentials file. Even trying to export AWS_DEFAULT_PROFILE=your-profile-name will not work.

I have a similar issue when I run terraform init -backend-config="profile=myProfileAws". I receive the following error:

Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: 752DEFCBA5D4DB53

P.S.: I used that profile to create my S3 bucket.

@fadavidos what does it output when you run it like
TF_LOG=DEBUG terraform init -backend-config="profile=myProfileAws"

@jgarciabgl Thanks for the quick reply!

That command gives me the following:


2019/03/21 06:06:44 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Thu, 21 Mar 2019 11:06:43 GMT
Server: AmazonS3
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=
X-Amz-Request-Id: A6FCDB6D99F1D56C


2019/03/21 06:06:44 [DEBUG] [aws-sdk-go]
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>A6FCDB6D99F1D56C</RequestId><HostId>Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=</HostId></Error>
2019/03/21 06:06:44 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
status code: 403, request id: A6FCDB6D99F1D56C, host id: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=
2019/03/21 06:06:44 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: A6FCDB6D99F1D56C, host id: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=

My profile has the following configuration:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

I just found my mistake. The name of the bucket in the backend was different from the actual bucket name on AWS.

Check your default in .aws credentials. The default profile overrides all other profiles you might be trying to use for Terraform.

You can also add the profile config option to your backend if you have multiple profiles in your .aws credentials file.

backend.tf

terraform {
  backend "s3" {
    bucket = "surya-terraform-state-bucket"
    region = "us-west-2"
    key    = "terraform/tfstate"
  }
}

Error Details Below:

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied

Workaround

cat ~/.aws/credentials

export AWS_PROFILE=matching-credential-profile-name

terraform init works now! 🔥 🔥 🔥

I managed to work around this problem by replacing the [default] section of my ~/.aws/credentials, but I think this is still a terragrunt bug, since terraform init works with the same configuration.

Some extra detail:

  1. My procedure for setting up the ~/.aws/credentials file was simply to run aws configure and enter my login data.
  2. Once I did that, terraform 0.11.13 runs terraform init just fine without any aws provider block.
  3. Once I move the backend configuration into the terragrunt block of terraform.tfvars, terragrunt init fails with the 403 error.
  4. Then, copying my personal profile from the [username-redacted] section of the ~/.aws/credentials file into the [default] section of that file allows terragrunt init to run.

I'm not actually sure where aws configure sets what it and terraform appear to agree is the default profile. Maybe in the [profile username-redacted] section of ~/.aws/config? I don't have any AWS_* environment variables set. Regardless, terragrunt and terraform behave differently here and I think terraform 0.11.13 has it right.

I was getting this error:

Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403

Finally, I figured out that the state files were written with a different terraform version:

Terraform doesn't allow running any operations against a state
that was written by a future Terraform version. The state is
reporting it is written by Terraform '0.11.13'

Please run at least that version of Terraform to continue.

I updated the version and everything works well.

Just in case anyone was making a silly mistake like me: don't forget that being in the wrong directory can cause this error, lol. Remember to switch to the directory with the .tf files.

I experienced this today with Terraform v0.12.1 + provider.aws v2.14.0. I created a new S3 bucket, created an IAM policy to hold the ListBucket, GetObject, and PutObject permissions (with the appropriate resource ARNs), then attached that to my user.

That user's key/secret are in a named profile in my ~/.aws/credentials file. Here's my entire main.tf in a clean directory:

terraform {
    backend "s3" {
        region  = "us-east-1"
        bucket  = "BUCKET_NAME_HERE"
        key     = "KEY_NAME_HERE"
    }
    required_providers {
        aws     = ">= 2.14.0"
    }
}

provider "aws" {
    region                  = "us-east-1"
    shared_credentials_file = "CREDS_FILE_PATH_HERE"
    profile                 = "PROFILE_NAME_HERE"
}

When I run TF_LOG=DEBUG terraform init, the sts identity section of the output shows that it is using the creds from the default section of my credentials file. If I add -backend-config="profile=PROFILE_NAME_HERE" to that call, it _still_ uses the default profile creds.

Amending it to

AWS_PROFILE=PROFILE_NAME_HERE TF_LOG=DEBUG terraform init

finally got it to use the actual profile. Seems like a legit bug, given that both methods of passing the profile name failed (in the provider block's profile arg, and as a CLI argument), but overriding via env var worked.

So if you're experiencing this failure, and your bucket and IAM policy are correct, it appears the workaround is to either export AWS_PROFILE, or to rename the [default] section of your credentials file to something else.

Edit: I also tried adding shared_credentials_file and profile arguments to the terraform.backend block directly, just for kicks (I have no idea if those are even valid args in that block)--it didn't change anything.

From what I found I had to add the profile again in the back end as it was using the default profile along with the profile for the env.
provider "aws" {
region = "${var.region}"
profile = "myprofile"
version = "~> 1.50"
}
terraform {
backend "s3" {
bucket = "REDACTED"
key = "sysops/servername.tfstate"
region = "us-east-1"
encrypt = true
profile = "myprofile"
}
}
data "terraform_remote_state" "sysops" {
backend = "s3"

config {
bucket = "REDACTED"
key = "REDACTED"
region = "us-east-1"
profile = "myprofile"
}
}

I ran into this as well, and my problem was different then everything here. I'd recently built out a "dev" stack of configuration directories; VPC, security groups, etc. I then copied all those over to a "prod" stack, and proceeded to update the backend stanza with the prod bucket. BUT, I didn't delete the _.terraform_ directory from each resource directory before running _init_. So even though the backend was correct in my .tf file and my credentials were correct it was failing to _list_ the _dev_ bucket (where it thought there was an existing state file). Dumb mistake in the end, but the root cause didn't jump out at me.

I can confirm seeing the same problem as @nballenger, i.e. the profile config sometimes doesn't work for the following versions: Terraform v0.11.13 + provider.aws v2.20.0; only updating the env variable AWS_PROFILE seems to solve the issue.

Note: I'm saying sometimes, because running another test just now by initializing the profile through the profile config works fine, will add more details if I run into this problem again.

Something worth mentioning is that after configuring it successfully with the env variable, it seems the s3 backend configuration is persisted in .terraform/terraform.tfstate, so this may be a good way to confirm whether the backend is using the correct profile.
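
For example, something like this shows the backend settings Terraform cached at init time (assuming jq is available; the exact layout of that file can vary between Terraform versions):

```
# Print the cached backend type and configuration, including the resolved profile
jq '.backend' .terraform/terraform.tfstate
```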

It seems like a lot of issues are producing the same error, I'd be happy to contribute a fix if we can successfully determine a source for the issue where the code requires a change 😄

If you happen to have permissions errors when using terragrunt, don't be like me and make sure you empty out your terragrunt-cache folders if you're starting a new project and have previously used terragrunt.

I am also having this problem with using a shared credentials file. The only way I was able to get around it was to manually change my desired credentials profile keys into the [default].

I had the same issue but I figured out that there is a very easy fix for this.
Export your aws profile name to the environment variable and it will be picked up by terraform (even terragrunt)

So let's say your provider config is like this:

provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = ""
}

then open your terminal and run

export AWS_PROFILE=<profilename>

If you just have one aws profile configured and are still getting the issue, just put in 'default' in the above command i.e.

export AWS_PROFILE=default

Run terraform command and enjoy :)

Note that this will be for the shell, if you open up a new terminal window or terminate shell session, you would need to run the export command again

In case someone else experiences the same thing as I did-
For me, the issue was due to Force_MFA policy that forced MFA before performing any action.
Solved it by using aws-vault which prompts for MFA token.

In case anyone is facing this issue on the Windows platform,
try this:
1) Windows key + R, then type "." and press enter (without the double quotes).
2) Go to the .aws folder.
3) You will see 2 files:
a) config b) credentials

Open credentials in a notepad editor and replace the access key and secret key with the new ones.
4) Save and close.
5) Now do terraform init, plan and run.

Hi,

Is this inside an ECS container?

_TLDR;_
bucket.region needs to equal provider.region else error

I've run into this issue a few times now; it is few and far between. Usually when setting up new infrastructure for a brand new project.

The region that the bucket is in matters for this step:
terraform init -backend-config="bucket=my-bucket"

If the bucket region is us-east-2 _(ohio)_
and your AWS_DEFAULT_REGION=us-east-1...

Then that will produce this error
Failed to get existing workspaces: AccessDenied: Access Denied

I went ahead and deleted the bucket and recreated it in the proper region to get it working. _(takes some time for the name to get released)_

For me, the issue was due to Force_MFA policy that forced MFA before performing any action.

Same issue I'm having with the same Force_MFA policy. After reading the TF docs I noticed there was an AWS_SHARED_CREDENTIALS_FILE=$HOME/.aws/credentials variable you could set. I'm using aws-mfa to set up my MFA credentials in the .aws/credentials file. I ended up just creating a bash script that would run:

AWS_SHARED_CREDENTIALS_FILE=$HOME/.aws/credentials terraform $@

Seems to be working now with MFA. Probably circle back later and see if there's something else I should be doing instead.
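
For anyone wanting to package that up, a minimal wrapper sketch (note the quoted "$@" so arguments with spaces survive):

```
#!/usr/bin/env bash
# tf.sh - run terraform against the MFA-refreshed shared credentials file
set -euo pipefail
export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/credentials"
exec terraform "$@"
```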

This is what worked for me. I use Azure SAML, therefore the aws-azure-login (https://github.com/sportradar/aws-azure-login)

rm ~/.aws/credentials
aws-azure-login --profile profile-configured-in-aws-cli
export AWS_SDK_LOAD_CONFIG=1
export AWS_PROFILE=profile-configured-in-aws-cli

then, if you use an escalation role, do it in your provider:

provider "aws" {
  region = "us-east-1"
  version = "2.33.0

  assume_role = {
    role_arn     = "arn:aws:iam::XXXXXXXXXX:role/the-escalation-role"
    session_name = "SOMENAME"
  }
}

Also had MFA enabled for my IAM user. Reporting my solution:

An earlier comment mentioned using aws-vault to handle the MFA complexities.

Wary of additional tooling without context, I cross referenced this Gruntwork article -- gives a straightforward overview of provisioning creds with MFA token (worked like a charm) and at the end plugs aws-vault and aws-auth as ease-of-use tools.

Hope this is helpful 👍

Fixed by explicitly exporting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. For some reason, exporting the AWS profile didn't work.

For anyone still having issues, I had to include these actions as well as the ones mentioned earlier (ListBucket, GetObject, PutObject, DeleteObject): s3:GetBucketObjectLockConfiguration, s3:GetEncryptionConfiguration, s3:GetLifecycleConfiguration, s3:GetReplicationConfiguration
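
Put together, a sketch of a policy covering both sets of actions (the bucket ARN is a placeholder; trim it to what your setup actually needs):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketObjectLockConfiguration",
        "s3:GetEncryptionConfiguration",
        "s3:GetLifecycleConfiguration",
        "s3:GetReplicationConfiguration"
      ],
      "Resource": "arn:aws:s3:::my-tfstate-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-tfstate-bucket/*"
    }
  ]
}
```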

Got this same error and I want to provide the workaround that worked for me
"I deleted the .terraform folder and ran the init again- and it worked" ..... Chukwunonso Agbo

Sorry this is tripping you up; trying to handle multiple accounts in the backend can be confusing.

The access denied is because when you run init and change the backend config, terraform's default behavior is to migrate the state from the previous backend to the new backend. So your new configuration may be correct, but you probably don't have the credentials loaded to access the previous state.

This is what the -reconfigure flag was added to support, which ignores the previous configuration altogether.

I have a similar issue.

Description:

  • I have 2 AWS accounts.

I copied a module implementation from one repository to another (one repo per account). When you run terraform init, terraform is trying to get information from the old bucket (the other account).

Solution:

Remove the .terraform directory inside the module.

So today I learnt that s3 remote states resolve their credentials separately from the backend and provider...

So if you're passing a role around for auth, ensure that it's in your remote state block as well, or you'll get this error when the remote state resolves the credential chain.
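
A sketch of what that looks like for a terraform_remote_state data source in 0.11-style syntax (bucket, key and role ARN are placeholders):

```
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket   = "my-tfstate-bucket"
    key      = "network/terraform.tfstate"
    region   = "us-east-1"
    # the remote state data source resolves its own credential chain, so pass the role here too
    role_arn = "arn:aws:iam::123456789012:role/terraform-state-access"
  }
}
```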

Had the same issue; removed the .terraform directory and ran init again.

I ran into Error copying state from the previous "local" backend to the newly configured "s3" backend: failed to upload state: AccessDenied: Access Denied

For me, I have default encryption enabled on the backend bucket, but was required to specify -backend-config=encrypt=true to init when migrating from local state to S3.

After updating the aws provider to version 52 or later, I started getting this sort of error as well. After adding s3:GetBucketAcl it started working.

I just got this when changing environments / credentials.
As many have mentioned, the fix was to remove the .terraform directory and run terraform init again.

I just put profile in the config block like:

terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "path/to/my/key"
    region  = "us-east-1"
    profile = "your_profile"
  }
}

and everything works well.

In my case, I was trying to create a bucket for static website hosting; however, I had earlier manually changed the setting to block all public access. The message was access denied (it probably should have been another one) for the Administrator account. Changing it to the setting where buckets with public access can exist and trying terraform apply resolved everything.

As another poster mentioned above, the only solution that worked for me was to remove the .terraform directory and re-initialize.

Got this same error and I want to provide the workaround that worked for me
"I deleted the .terraform folder and ran the init again- and it worked" ..... Chukwunonso Agbo

This guy is a genius

If you are switching to S3 as the backend in the middle of the work, you may need to run terraform init -reconfigure instead of terraform init.

Most probably, the -reconfigure option is what you are looking for.

I'm getting this error:
Error: error configuring S3 Backend: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 0bda7445-fc52-43fa-a1e9-1a6040db165b

Here is how my backend is configured:

terraform {
  backend "s3" {
    bucket     = "mybucket"
    region     = "us-east-1"
    key        = "mybucket/terraform.tfstate"
    access_key = "mykey"
    secret_key = "secretkey"
  }
}

While the policy I created and attached to the user is

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

but still getting the error mentioned above
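
Since that error comes from sts:GetCallerIdentity rather than from S3, it usually means the key/secret pair itself is being rejected (expired, deleted, or mistyped), not that the policy is wrong. A quick check outside Terraform, reusing the placeholder values from the backend block above:

```
# Verify the exact key pair the backend is configured with is still valid
AWS_ACCESS_KEY_ID=mykey \
AWS_SECRET_ACCESS_KEY=secretkey \
AWS_SESSION_TOKEN= \
aws sts get-caller-identity
```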

resource "aws_s3_bucket" "terraform state" {
    bucket = "BUCKET_NAME"
    region=var.AWS_REGION
    versioning {
      enabled = true
    }
    lifecycle {
      prevent_destroy = true
    }
    tags = {
      Name = "terraform remote state"
    }      
}
terraform {
  backend "s3" {
    bucket = "BUCKET_NAME"
    region = var.AWS_REGION
    key = "terraform.tfstate"
    encrypt = true
  }
}

Check region. I made that mistake

Just an observation: this situation can happen when using the same repository with different AWS accounts. Then you can delete the .terraform directory and initialize again:

rm -rf .terraform
terraform init

I was so dumb as to forget to set the region in the backend configuration to the correct region.

I changed the profile in the ~/.aws/credentials file to "default",
corrected the aws provider and backend configuration to the same region, and that fixed the issue.

Might have some insight into why @bzamecnik's solution works. If you do not remove the .terraform folder, it tries to use the old backend configuration, even if the reason for the re-init is that you just changed the backend config. In my case, that meant it was trying to use the wrong bucket name. An rm of the entire .terraform folder and it works as expected.

I encountered this problem when switching between AWS accounts and AWS profiles while working in the same repo/workspace. You can delete the local .terraform folder and rerun terraform init to fix the issue. If your state is actually remote and not local this shouldn't be an issue.

The .terraform/terraform.tfstate file clearly showed that it was pointing to an S3 bucket in the wrong account, which the currently applied AWS credentials couldn't read from.

I was on TF 0.12.29, and it feels like a bug that TF fails and displays this error instead of an error saying that you can't run a version older than the latest TF version that has written to the remote state. Update your local Terraform version, clear local state, and try again:

$ cat .terraform-version
latest:^0.12
$ tfenv install 0.12.30
$ rm -rf .terraform
$ terraform init