The main AWS provider appears to support STS session tokens, but the docs for the S3 remote configuration simply state:
https://www.terraform.io/docs/commands/remote-config.html
"S3 - ... Supports and honors the standard AWS environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION. These can optionally be provided as parameters in the access_key, secret_key and region variables respectively..."
What about reading AWS_SECURITY_TOKEN?
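For context, temporary credentials from STS always come with a session token, and all three values have to be presented together. A quick illustration with the AWS CLI (role ARN, session name, and all credential values are placeholders):

```
$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/example \
    --role-session-name terraform
{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2016-11-18T12:00:00Z"
    },
    "AssumedRoleUser": { ... }
}
```

If the remote state client only reads the access key and secret key, the token never makes it into the request, which would explain the errors below.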
I am seeing errors:
```
Error reloading remote state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
	status code: 403, request id:
```
If I stop assuming a role before calling terraform then the reported 403 error goes away.
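That is consistent with S3 rejecting temporary access keys that arrive without their session token. You can reproduce the same error with the AWS CLI alone (bucket name and credential values are placeholders):

```
# Temporary credentials from an assumed role, with the token deliberately
# dropped (STS access key IDs start with "ASIA"; values are placeholders):
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
unset AWS_SESSION_TOKEN AWS_SECURITY_TOKEN

# S3 rejects the key pair outright with the same InvalidAccessKeyId / 403:
aws s3 ls s3://my-state-bucket/
```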
I actually got 404s when trying to set up a remote configuration in an S3 bucket from my PC, using an STS environment-variable setup that otherwise works fine for running terraform plan/apply (region: eu-west-1).
When I reran the command on a server running with the same role that I had assumed via STS, it worked. So I believe you're correct in assuming that there's some kind of funkiness involved with remote config and STS...
This is what I'm seeing:
```
$ terraform remote config -backend=S3 \
    -backend-config="bucket=<bucketname-that-exists>" \
    -backend-config="key=state/<keyname>" \
    -backend-config="region=eu-west-1"
Error while performing the initial pull. The error message is shown
below. Note that remote state was properly configured, so you don't
need to reconfigure. You can now use `push` and `pull` directly.

Error reloading remote state: 404NotFound: 404 Not Found
	status code: 404, request id: 26FAAAA4F60118D3
```
It seems like the way credentials are loaded in func getCreds in builtin/providers/aws/config.go might be better at handling STS (or environment variables set up with the contents of an STS call) than what's used for S3 remote state in terraform/state/remote/s3.go around line 65 (where SessionToken seems to be explicitly set to "") -- but I can't really figure out why it's better.
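If that reading is right, the asymmetry should show up from a single shell session. A sketch of what I mean (all credential values and the bucket/key are placeholders):

```
# Environment populated from an STS call (placeholders):
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

# The provider path (getCreds) honors the token, so this works:
terraform plan

# The remote state path drops the token, so this fails with 403/404:
terraform remote config -backend=S3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=state/example" \
    -backend-config="region=eu-west-1"
```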
I'm currently using S3 remote states with STS Assume Role credentials. I believe I had the same issues, so I ended up adding a Bucket Policy on the bucket containing the state, that allowed the role I was assuming in the other account to update the state files. This works for us.
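For reference, the policy we attached is roughly along these lines (account ID, role name, bucket, and key prefix are placeholders):

```
aws s3api put-bucket-policy --bucket my-state-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/terraform"},
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-state-bucket/state/*"
    },
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/terraform"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-state-bucket"
    }
  ]
}'
```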
I do not see the error you mentioned above. I pass these environment variables to terraform, provided by a script (roughly sketched after the list):
AWS_DEFAULT_REGION
AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
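The script populates them roughly like this (role ARN and session name are placeholders; assumes jq is installed):

```
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/terraform \
  --role-session-name terraform \
  --query Credentials --output json)

export AWS_DEFAULT_REGION=eu-west-1
export AWS_REGION="$AWS_DEFAULT_REGION"
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .SessionToken)
```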
I don't think that I will be allowed to set the bucket policy that you're able to.
With the same credentials, I can create aws_s3_bucket_object resources in the account that I'm "assuming" into without modifying bucket policies. And I can enable remote state when I'm on a server with the same instance role.
My environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_PROFILE
AWS_REGION
AWS_SECRET_ACCESS_KEY
AWS_SECURITY_TOKEN
AWS_SESSION_TOKEN
Once you get the token from STS, set the env var AWS_SESSION_TOKEN instead of AWS_SECURITY_TOKEN and you should be good to go
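If your tooling only sets the older variable name, copying it over is enough:

```
# AWS_SECURITY_TOKEN is the legacy name used by some tools; the Go SDK
# that Terraform uses reads AWS_SESSION_TOKEN:
export AWS_SESSION_TOKEN="$AWS_SECURITY_TOKEN"
```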
When using cross-account STS/IAM tokens, we ran into a problem where the proper role_arn was not being used. It's been fixed here: https://github.com/hashicorp/terraform/pull/10067
Closing since this is answered by @n-my. Other comments are different questions!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.