When attempting to use the AWS provider for Terraform, I am unable to keep my credentials only in the credentials file; I must have them either in an environment variable or hard-coded into the configuration files.
According to the documentation, Terraform should be able to fall back to this file to find credentials for AWS.
I have verified that the credentials in my credentials file are valid.
Terraform v0.6.15
No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Expected Behavior
Without hard-coding credentials into the files or supplying an environment variable, I expect Terraform to read the credentials stored in the credentials file and use those to authenticate with AWS.
Terraform fails to run plan because it can't find the proper credentials.
Use the following test case:
provider "aws" {
region = "us-east-1"
shared_credentials_file="/Users/david/.aws/credentials"
}
resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags {
Name = "test"
Environment = "demo"
}
lifecycle {
create_before_destroy = true
}
}
terraform plan
You will get an error showing "No valid credential sources found".
Hi @davidfic! Thanks for opening an issue. Do you have a default profile set in your shared credentials file, and if so do you have the AWS_PROFILE
environment variable (or equivalent) set?
credentials file:
[default]
aws_access_key = *****
aws_secret_access_key = ******
config file:
[default]
region = us-west-2
output = json
I have no AWS related environment variables set.
I cleaned up everything, and started basically from scratch with the files and now Terraform is able to fallback to the credentials file and not need an environment variable. I'll chalk this up to user error although I'm not exactly sure where the error was :)
Closing
@davidfic I am experiencing a similar issue as you described above, where the credentials file is not being read. When you said you "cleaned up everything", what specifically did you do to start from scratch?
@dotariel One thing that always got me was the exact key names in the credentials file.
So in my default block of .aws/credentials I have:
[default]
aws_access_key_id = *
aws_secret_access_key = *
Double check that your default (or whatever profile you are using) is set up exactly like that.
In terms of cleaning up, I was just referring to blowing away my Terraform directory and pulling again from GitHub.
In the end it was as simple as verifying that communicating with AWS via Terraform works at all.
If fixing your credentials file doesn't help, post your provider block. Mine looks like this:
provider "aws" {
region = "${var.region}"
profile = "${var.profile}"
}
In my variables file I have those variables defined
variable "profile" {
description = "What AWS profile to use for deployment"
}
variable "region" {
description = "The AWS region to launch"
}
Make a simple test file and see if you can get the communication to work.
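For example, a minimal test configuration along these lines (a sketch only; the region and the profile name "default" are placeholders, and the aws_caller_identity data source just reads back the identity Terraform authenticates as, so a clean plan confirms the credential lookup works):
provider "aws" {
  region  = "us-east-1"   # adjust to your region
  profile = "default"     # the profile you expect Terraform to pick up
}

# Reads the account ID of the authenticated caller; it creates nothing,
# so a successful `terraform plan` here means credentials were found.
data "aws_caller_identity" "current" {}

output "account_id" {
  value = "${data.aws_caller_identity.current.account_id}"
}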
Not sure if this is what other people hit, but I ran plan using creds in my environment variables, and then attempted to apply with the credentials file and hit this problem. Running plan first, without the environment variables, fixed it.
In my case, I resolved the issue by adding the following lines to my .tfvars file.
aws_access_key = ""
aws_secret_key = ""
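Presumably this only helps if the provider block actually references those variables; a rough sketch of that wiring, assuming variable names matching the two lines above (empty values reportedly let the provider fall back to the shared credentials file, per this comment):
variable "aws_access_key" {
  default = ""
}

variable "aws_secret_key" {
  default = ""
}

provider "aws" {
  region     = "us-east-1"              # placeholder region
  access_key = "${var.aws_access_key}"  # empty => fall back to other credential sources
  secret_key = "${var.aws_secret_key}"
}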
FYI if you want Terraform to seek out the vars in the ~/.aws/credentials file you have to remove the variable declarations from your variables.tf file. That worked for me.
Specifying the profile fixed it for me. I had only been specifying the region in my Terraform code.
@prachikhadke can you please give an example of the command you run? I am a novice and trying to figure this out. TIA
@cdutta I used the following in my Terraform:
provider "aws" {
  region  = "${var.region}"
  profile = "${var.profile}"
}
I set the variables using environment variables from the ~/.aws/config and ~/.aws/credentials files.
@prachikhadke thanks
My working configuration is as follows:
credentials file:
[aws]
aws_access_key_id = ****
aws_secret_access_key = ********
As Terraform says, the default location is ~/.aws/credentials on Linux and OS X, or %USERPROFILE%\.aws\credentials for Windows users.
provider "aws" {
  region  = "${var.region}"
  profile = "aws"
}
Nothing else has to be set if you're using the default credentials location. Of course, the profile can be customized and specified in variables.tf.
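If you do parameterize the profile, a sketch of what that variables.tf entry might look like (the default "aws" simply matches the profile name used above; substitute whatever profile name you actually have):
variable "profile" {
  description = "Which profile from ~/.aws/credentials to use"
  default     = "aws"
}

provider "aws" {
  region  = "${var.region}"
  profile = "${var.profile}"
}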
Version: Terraform v0.11.1
As far as I can tell, none of the solutions suggested here work. I always have to specify the AWS_PROFILE env var.
MacOS
Terraform v0.11.2
+ provider.aws v1.6.0
Also ran into this.
Needed both AWS_PROFILE as an env var _and_ profile as a provider attribute to get this to work.
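In other words, something along these lines (a sketch; "myprofile" is a placeholder for whatever profile name exists in your ~/.aws/credentials):
# In the shell, before running Terraform:
#   export AWS_PROFILE=myprofile
provider "aws" {
  region  = "us-east-1"    # placeholder region
  profile = "myprofile"    # same profile name as AWS_PROFILE
}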
Started having this since switching to a custom backend.
Nothing was working to fix my problem, then I realized my TF version was way outdated.
Updating to the latest version of Terraform fixed this problem for me. From 0.10.x to 0.11.10.
I also encountered this issue, and it turned out that the INI file reader in Terraform is just a little "stricter" than aws-cli's reader.
I have many (dozens) of different creds in my credentials file, and in one of them, I had lazily "disabled" a token like this:
[jp01qasrv]
aws_access_key_id = ...
aws_secret_access_key = #...
Commenting out the section properly instead allowed terraform to find my [default] credentials and use them:
#[jp01qasrv]
#aws_access_key_id = ...
#aws_secret_access_key = ...
So, if all else fails, double check that your credentials file is proper INI format.
> As far as I can tell, none of the solutions suggested here work. I always have to specify the AWS_PROFILE env var.
> MacOS Terraform v0.11.2 + provider.aws v1.6.0
I have the exact same issue in August 2019 - this seems like basic stuff to get right
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.