I just updated to Terraform v0.6.11 and am seeing a bug I haven't run across before. I'm trying to set up remote state storage using the terraform remote config command:
```
terraform remote config \
    -backend=s3 \
    -backend-config=bucket=my-s3-bucket \
    -backend-config=key=terraform.tfstate \
    -backend-config=encrypt=true \
    -backend-config=region=us-east-1
```
To provide AWS credentials, I set the TF_VAR_AWS_ACCESS_KEY_ID and TF_VAR_AWS_SECRET_ACCESS_KEY environment variables. These are the same variables I use in my actual Terraform templates, and they work fine for the plan and apply commands. Everything worked until the recent upgrade, when I started seeing this error:
```
Error while performing the initial pull. The error message is shown
below. Note that remote state was properly configured, so you don't
need to reconfigure. You can now use `push` and `pull` directly.

Error reloading remote state: AccessDenied: Access Denied
    status code: 403, request id:
```
If I completely unset the TF_VAR_AWS_ACCESS_KEY_ID and TF_VAR_AWS_SECRET_ACCESS_KEY environment variables, I get a different error, which tells me those variables _are_ being read:
```
Unable to determine AWS credentials. Set the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables.
(error was: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors)
```
Finally, if I set the exact same values in the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables, everything works fine.
Any ideas what's going on?
Hi @brikis98. Sorry to hear you're having trouble with variables.

The TF_VAR_* syntax is a way of setting the values of Terraform variables declared with variable blocks in the configuration. If you have blocks like the following, then what you were doing with TF_VAR_AWS_ACCESS_KEY_ID etc. ought to work:
```hcl
variable "AWS_ACCESS_KEY_ID" {}
variable "AWS_SECRET_ACCESS_KEY" {}

provider "aws" {
  region     = "us-east-1"
  access_key = "${var.AWS_ACCESS_KEY_ID}"
  secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
}
```
But as you've seen, it's not actually necessary to explicitly pass credentials to the AWS provider, because it has built-in support for the "standard" credential environment variables used by other AWS clients. It worked when you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables because those are read directly by the AWS provider whenever the access_key and secret_key arguments are not set in the provider "aws" block.
So if you don't have variable blocks for these then Terraform is working as intended. If you're planning to use environment variables anyway, then setting the standard AWS variables is easier and usually preferable to passing the credentials in via Terraform's own variables mechanism, since it'll then work the same way as any other tools you're using to interact with AWS.
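As a concrete sketch of the environment-variable approach (the credential values below are placeholders, not real keys):

```shell
# Placeholder credentials -- substitute your own real values.
# These standard variables are read by the aws provider, the S3 remote
# state backend, and other AWS tooling (e.g. the AWS CLI) alike.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"

# terraform remote config / plan / apply will now pick these up
# automatically, with no credential arguments in the configuration.
printenv AWS_ACCESS_KEY_ID
```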
Sorry, I just re-read and saw that you said that the variables do seem to be used, which suggests that you are using some construct like my example in my previous comment. Would you mind sharing the relevant part of your config (similar to my example), with any secrets removed, so we can see what's going on?
Our provider config looks like this:
```hcl
variable "AWS_ACCESS_KEY_ID" {}
variable "AWS_SECRET_ACCESS_KEY" {}

provider "aws" {
  access_key = "${var.AWS_ACCESS_KEY_ID}"
  secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
}
```
What I'm trying to figure out is why setting TF_VAR_AWS_ACCESS_KEY_ID and TF_VAR_AWS_SECRET_ACCESS_KEY seems to break the terraform remote config command, why it produces such an odd error message (403 AccessDenied for valid credentials?), and what changed in the last few releases to cause this behavior.
Oh, the subtlety I missed is that you were talking about terraform remote config in particular.
The remote config backends are not actually related to providers, so the S3 remote state backend does not interact with the aws provider and instead figures out the credentials itself.
The simplest way to make this work is to remove the two variable blocks and the access_key and secret_key attributes from your config. In that case, you would set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, and they would be used equally by both the AWS provider _and_ the remote state backend.
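With that change, the provider block reduces to something like the following (the region value here is taken from the remote config command above):

```hcl
# No credentials in the configuration: both the aws provider and the
# S3 remote state backend fall back to AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY from the environment.
provider "aws" {
  region = "us-east-1"
}
```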
I think the reason for the difference in error messages between your two scenarios is that unsetting the TF_VAR environment variables caused the AWS provider to effectively look like this:
```hcl
provider "aws" {
  access_key = ""
  secret_key = ""
}
```
When these attributes are empty, it is the same as if they were not set at all, so the provider attempted to figure out the credentials from environment variables. In your first case the error message was coming from the S3 remote state backend, while in the second case it was coming from provider instantiation.
Ah, gotcha, thank you. I removed the AWS_XXX variables from my templates, set my AWS credentials as environment variables (without the TF_VAR_ prefix), and everything works as expected.
Check to make sure your environment is using the correct AWS account. You may have multiple accounts in your ~/.aws/credentials file.
I think if you are running it remotely, you just need to export the credentials as environment variables in your terminal:

```
$ export AWS_ACCESS_KEY_ID=***
$ export AWS_SECRET_ACCESS_KEY=+++++++++++***
```
This solved it for me. I also think it is better to do it this way than putting your credentials in your .tf files.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.