Terraform v0.11.7
+ provider.aws v1.22.0
+ provider.null v1.0.0
provider "aws" {
alias = "SBX"
region = "eu-central-1"
profile = "SBX"
}
terraform {
backend "s3" {
bucket = "automation-myaccount"
key = "terraform/AwsLogs/terraform.tfstate"
region = "eu-central-1"
profile = "SBX"
encrypt = "true"
}
}
[profile MAIN]
output = json
region = eu-central-1

[profile SBX]
output = json
region = eu-central-1
role_arn = arn:aws:iam::012345678912:role/MyAccessRole
source_profile = MAIN
[MAIN]
aws_access_key_id=*******
aws_secret_access_key=************
[SBX]
aws_access_key_id=******
aws_secret_access_key=************
aws_session_token=****************************
NOTE: The SBX credentials are populated by our custom script that reads the config, assumes the role with the source_profile credentials and stores the resulting access data in the credentials file.
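For illustration, a rough bash sketch of what such a script does (the real script differs in details; the role ARN and profile names are taken from the configs above):
# assume the role using the long-lived MAIN credentials
CREDS=$(aws sts assume-role \
  --profile MAIN \
  --role-arn arn:aws:iam::012345678912:role/MyAccessRole \
  --role-session-name sbx \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
# write the resulting temporary credentials into the SBX profile
read -r KEY SECRET TOKEN <<< "$CREDS"
aws configure set aws_access_key_id "$KEY" --profile SBX
aws configure set aws_secret_access_key "$SECRET" --profile SBX
aws configure set aws_session_token "$TOKEN" --profile SBX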
Running a terraform command uses the configuration given to the aws provider: it takes the SBX profile, assumes the role specified in the .aws\config file, and authenticates with the MAIN credentials found in the .aws\credentials file.
This works when accessing the tfstate in the S3 backend, but fails when applied to the AWS provider.
The AWS CLI, the AWS PowerShell modules, and the AWS Eclipse plugin all accept and use these .aws\credentials and .aws\config configurations without issue.
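A quick way to confirm the profile chain resolves correctly outside Terraform:
aws sts get-caller-identity --profile SBX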
In an environment with many accounts, it is not practical to edit the credentials file and set the 'default' profile each time you need to work in a different account.
The tfstate stored in S3 is accessed and can be inspected or modified, but refreshing the state or running any command that involves the AWS provider fails with:
Error: provider.aws: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
The only way credentials seem to be taken from the credentials file is by setting them as the 'default' profile; then it works, but it uses only the .aws\credentials entry, not the .aws\config configuration, and only for the default profile.
If no 'default' credentials exist in the .aws\credentials file, it always fails.
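For example, that fallback can be scripted with aws configure set rather than hand-editing the file (the $SBX_* variables are hypothetical placeholders for the values shown above):
# copy the SBX entry into the 'default' profile so Terraform finds it
aws configure set aws_access_key_id "$SBX_KEY" --profile default
aws configure set aws_secret_access_key "$SBX_SECRET" --profile default
aws configure set aws_session_token "$SBX_TOKEN" --profile default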
1. terraform init
2. terraform plan
I'm experiencing the same problem as stated here. I noticed that I was getting access even though my IAM credentials were wrong, and when I commented out my default profile, it failed because it could not find valid credentials.
When I'm using the s3 backend and run the terraform init command, it outputs the following error immediately, rather than waiting for the terraform plan command.
$ terraform init
Initializing modules...
- module.vpc
Initializing the backend...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".
Copied from https://github.com/terraform-providers/terraform-provider-aws/issues/233#issuecomment-408624797:
I've been able to reproduce this locally, diagnosed the issue, and worked out why the AWS backend wasn't able to pick up the AWS_PROFILE whereas the AWS provider was picking it up.
In short, the issue is fixed in this repo (https://github.com/terraform-providers/terraform-provider-aws/commit/36382deae08935c13fb5a7e78fe3b4840192fecf), but the version of this repo vendored into the Terraform repo (https://github.com/hashicorp/terraform/blob/master/vendor/github.com/terraform-providers/terraform-provider-aws/aws/config.go#L278) predates that change. The solution is to update the version of the terraform-aws-plugin vendored in this repository.
There's already a PR which should fix this: https://github.com/hashicorp/terraform/pull/17901
Thank you for figuring this out Will!
The above PR has been merged and will be released with Terraform 0.11.8. 👍
Thanks for merging that, @bflad. Since there's now a fix for this in master, I'm going to close it.
Did this fix land in 0.11.8? I seem to be hitting it still.
@jmcfallargo Do you have the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your shell? These will override the value you specify in aws.profile.
https://www.terraform.io/docs/providers/aws/#environment-variables
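To check for, and clear, any overriding variables:
# list AWS-related variables currently set in the shell
env | grep '^AWS_'
# clear the ones that take precedence over the profile
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN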
AWS_PROFILE=development terraform init
This seems to work under v0.11.8.
However, the following does not:
provider "aws" {
region = "ap-southeast-2"
profile = "development"
}
@patbl looks like there is a dependency on Boto for this feature to function. I just installed it into my global python env and that appears to have fixed it.
Any update on this issue? I am still experiencing the same problem with the latest version of Terraform.
I have created a user without any permissions, apart from the ability to assume the develop role, which has full permissions.
I have found that while the following doesn't work:
AWS_PROFILE=develop terraform init
This does work:
AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=develop terraform init
Though, annoyingly, terraform apply and terraform plan don't work when AWS_SDK_LOAD_CONFIG is set to 1... This means AWS_SDK_LOAD_CONFIG can't just be set globally to fix this.
This makes me think there is a difference in the way the AWS SDK is used to load credentials for apply/plan vs init.
Experiencing the same issue. It would be nice to be able to use assume role. That works fine on its own, but given that we can't use the S3 backend with assume role, it is a major blocker.
Being able to use interpolation or some other way of passing values to the S3 backend config (as sketched below) would be a great improvement as well.
Not being able to assume a role also means we have to create Terraform accounts in each account when we do VPC peering. Would love to see things like this get more attention!
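As a partial workaround for passing values to the backend, they can be supplied at init time with -backend-config; a sketch with illustrative values (this does not fix the credential lookup itself):
terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=terraform.tfstate" \
  -backend-config="region=us-east-1"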
I'm still seeing the same issue using 0.11.13
AWS_PROFILE=nonprod terraform init
does not work, and neither does:
terraform {
  backend "s3" {
    bucket  = "redacted"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    profile = "nonprod"
  }
}
Initializing the backend...
Error configuring the backend "s3": NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Please update the configuration in your Terraform files to fix this error
then run this command again.
I have confirmed with 0.11.13 that this is still an issue.
Confirmed. Use of profile doesn't work. In my case I don't have an access key or secret, as we authenticate through aws-okta. It works if you don't configure a backend, but fails as soon as you use the s3 backend. I am using the latest version.
Confirmed.
My use case involves using a combination of the AWS credential profile's credential_process directive and aws-vault's credential helper implementation to avoid storing keys in ~/.aws/credentials. This still relies on the use of profiles in ~/.aws/config to support project-specific credentials in each project:
[default]
region = ca-central-1

[profile foo]
credential_process = aws-vault exec -j -t 15m foo

[profile bar]
credential_process = aws-vault exec -j -t 15m bar
aws-vault stores the credentials in my macOS keychain.
Within each project, direnv is used to set AWS_PROFILE, AWS_DEFAULT_REGION and AWS_SDK_LOAD_CONFIG in the environment.
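A minimal .envrc sketch along those lines (the profile and region names are taken from the config above):
# .envrc, loaded automatically by direnv on entering the project directory
export AWS_PROFILE=foo
export AWS_DEFAULT_REGION=ca-central-1
export AWS_SDK_LOAD_CONFIG=1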
The end result is that my experience at the command line is significantly improved.
I can execute aws ec2 describe-regions rather than aws-vault exec -t 15m foo -- aws ec2 describe-regions. Similarly, when not using the s3 backend, I can execute terraform plan rather than aws-vault exec -t 15m foo -- terraform plan.
It'd be great if the s3 backend could support AWS_PROFILE. For now, I have a wrapper script for terraform.
Thanks for your awesome tool!
I was able to circumvent this issue (terraform init was failing with the s3 error) by exporting both the key and secret like so:
export AWS_ACCESS_KEY_ID="your_key_id"
export AWS_SECRET_ACCESS_KEY="your_secret"
Still not sure why it worked but hey... <(^_^)>
For terraform, I built a small wrapper script named terraform that calls aws-vault exec <profile> -- /path/to/real/terraform $*. This avoids having to explicitly export the access key ID / secret key and regains some of the value of aws-vault.
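A minimal sketch of such a wrapper (the binary path and profile lookup are illustrative; "$@" preserves argument quoting, which $* does not):
#!/bin/sh
# wrapper named `terraform`, placed earlier on PATH than the real binary;
# AWS_PROFILE selects the aws-vault profile
exec aws-vault exec "${AWS_PROFILE:-foo}" -- /usr/local/bin/terraform "$@"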
Also experiencing the same problem with v0.11.13. However, if I prefix any terraform command with AWS_PROFILE=<profile_name> then it works fine. This is not _great_...
I have created a user without any permissions, apart from the ability to assume the develop role, which has full permissions. I have found that while the following doesn't work:
AWS_PROFILE=develop terraform init
This does work:
AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=develop terraform init
I'm getting the same issue. When using the s3 backend, AWS_PROFILE=admin terraform apply won't work, but AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=admin terraform apply does (although unlike @Stretch96, terraform apply does work for me with this setting).
But this may be working as intended? Reading the documentation for the AWS Go SDK:
Sessions can be created using the method above that will only load the additional config if the AWS_SDK_LOAD_CONFIG environment variable is set. Alternatively you can explicitly create a Session with shared config enabled. To do this you can use NewSessionWithOptions to configure how the Session will be created. Using the NewSessionWithOptions with SharedConfigState set to SharedConfigEnable will create the session as if the AWS_SDK_LOAD_CONFIG environment variable was set.
My aws role configuration is in what I believe to be the "shared config file" at ~/.aws/config:
[profile admin]
role_arn = arn:aws:iam::<redacted>:role/Admin
region = us-east-1
If I understood correctly, one could force the creation of a session when the shared config file exists by using the SharedConfigEnable option:
// Force enable shared config support; assumes
// import "github.com/aws/aws-sdk-go/aws/session"
sess := session.Must(session.NewSessionWithOptions(session.Options{
	SharedConfigState: session.SharedConfigEnable,
}))
Maybe this is what Terraform should be doing? I find it confusing that if I want to define my profile with AWS_PROFILE I must also pass this AWS_SDK_LOAD_CONFIG around. This seems to be a somewhat common issue with several tools that integrate with AWS (such as kops), aggravated by the fact that AWS_SDK_LOAD_CONFIG is not well documented.
EDIT: this seems to be an issue with the Go and JS AWS SDKs only; I believe other SDKs don't require AWS_SDK_LOAD_CONFIG=1...
If anyone else has this issue and ends up here, I've opened a couple of PRs documenting the AWS_SDK_LOAD_CONFIG requirement:
https://github.com/hashicorp/terraform/pull/21122
https://github.com/terraform-providers/terraform-provider-aws/pull/8451
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.