_This issue was originally opened by @GreenyMcDuff as hashicorp/terraform#22680. It was migrated here as a result of the provider split. The original body of the issue is below._
### Terraform Version

```text
terraform v0.11.14
```
### Terraform Configuration Files

```hcl
terraform {
  backend "s3" {
    region         = "eu-west-2"
    dynamodb_table = "terraform-lock"
    key            = "my-path/terraform.tfstate"
    encrypt        = true
  }
}

provider "aws" {
  version                 = "~> 2.24.0"
  region                  = "${var.region}"
  profile                 = "<aws_account_id>_AdministratorAccess"
  shared_credentials_file = "/root/.aws/credentials"
}
```
```hcl
# partial backend config (shared.backend)
bucket     = "my-terraform-state"
kms_key_id = "kms_key_id"
```
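For reference, my understanding of how partial configuration composes: at `terraform init` time the key/value pairs from `shared.backend` are merged into the `backend "s3"` block, so the effective backend configuration should be equivalent to the following sketch (values as redacted above):

```hcl
# illustrative merge of the backend block and shared.backend
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    kms_key_id     = "kms_key_id"
    region         = "eu-west-2"
    dynamodb_table = "terraform-lock"
    key            = "my-path/terraform.tfstate"
    encrypt        = true
  }
}
```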
### Debug Output

```text
/terraform/infra # terraform init -backend-config=../shared.backend
2019/09/04 10:31:23 [INFO] Terraform version: 0.11.14
2019/09/04 10:31:23 [INFO] Go runtime version: go1.12.4
2019/09/04 10:31:23 [INFO] CLI args: []string{"/bin/terraform", "init", "-backend-config=../shared.backend"}
2019/09/04 10:31:23 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2019/09/04 10:31:23 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/09/04 10:31:23 [INFO] CLI command args: []string{"init", "-backend-config=../shared.backend"}
2019/09/04 10:31:23 [DEBUG] command: loading backend config file: /terraform/infra
Initializing modules...
2019/09/04 10:31:23 [TRACE] module source: "../modules-project/infra/"
- module.infra
2019/09/04 10:31:23 [TRACE] module source: "../modules-project/service-infra/"
- module.service_infra
- module.vpc
2019/09/04 10:31:23 [TRACE] module source: "terraform-aws-modules/vpc/aws"
2019/09/04 10:31:23 [TRACE] "terraform-aws-modules/vpc/aws" is a registry module
2019/09/04 10:31:23 [DEBUG] found local version "1.66.0" for module terraform-aws-modules/vpc/aws
2019/09/04 10:31:23 [DEBUG] matched "terraform-aws-modules/vpc/aws" version 1.66.0 for 1.66.0
- module.infra.artifacts_bucket
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/s3-versioned-bucket"
- module.infra.data_bucket
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/s3-versioned-bucket"
- module.service_infra.iam_packer
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/iam-role/"
- module.service_infra.iam_service_account
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/iam-role/"
- module.service_infra.es_parameter
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/ssm-parameter/"
- module.service_infra.kb_parameter
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/ssm-parameter/"
- module.service_infra.bu_parameter
2019/09/04 10:31:24 [TRACE] module source: "../../modules-generic/ssm-parameter/"
2019/09/04 10:31:24 [DEBUG] command: adding extra backend config from CLI
Initializing the backend...
2019/09/04 10:31:24 [DEBUG] command: no data state file found for backend config
2019/09/04 10:31:24 [DEBUG] New state was assigned lineage "efea3de3-4cbc-5ded-4955-029dfa0ecf98"
2019/09/04 10:31:24 [INFO] Setting AWS metadata API timeout to 100ms
2019/09/04 10:31:24 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/09/04 10:31:25 [DEBUG] plugin: waiting for all plugin processes to complete...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again.
```
### Expected Behavior

The Terraform backend should have been initialised successfully.
### Actual Behavior

The AWS provider failed to locate valid credentials while configuring the S3 backend, and `terraform init` exited with the error above.
### Steps to Reproduce

Create a Docker image using the Dockerfile below by running:

```sh
docker build -t test/terraform:0.11.14 .
```
Dockerfile:

```dockerfile
FROM hashicorp/terraform:0.11.14
RUN apk update --no-cache
RUN apk add --no-cache nodejs
RUN apk add --no-cache --update nodejs-npm
RUN apk add python3 && \
    python3 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip3 install --upgrade pip setuptools && \
    if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
    if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
    rm -r /root/.cache
RUN pip3 install awscli
ENTRYPOINT ["terraform"]
```
Run the container using the following command:

```sh
docker run -it --entrypoint sh \
  --mount type=bind,source="$(shell pwd)",destination=/terraform \
  -w /terraform \
  -v /c/Users/<user_name>/.aws:/root/.aws \
  --env TF_DATA_DIR=terraform/infra/.terraform \
  test/terraform:0.11.14
```
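One note on the mount: `$(shell pwd)` is GNU Make's `shell` function and only expands inside a Makefile, so the command above was presumably run via `make`. If you are pasting it into an interactive shell instead, ordinary command substitution should behave identically (same paths assumed):

```sh
# same invocation for a plain shell: $(pwd) instead of Make's $(shell pwd)
docker run -it --entrypoint sh \
  --mount type=bind,source="$(pwd)",destination=/terraform \
  -w /terraform \
  -v /c/Users/<user_name>/.aws:/root/.aws \
  --env TF_DATA_DIR=terraform/infra/.terraform \
  test/terraform:0.11.14
```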
This will attach a shell inside the container. From there, if you run:

```sh
terraform init -backend-config=shared.backend
```

you will get the error detailed above. (You may notice this isn't the same command I ran to generate the DEBUG output; that's only because my partial config is stored one level up in my actual project, and I figured it was unnecessary to add that complexity to the setup.)
My credentials and config files live on the Windows host under `/c/Users/user_name/.aws`, which is mounted to the default location in Alpine Linux at run time (`/root/.aws`). I can see both files have been mounted correctly by running `cat /root/.aws/credentials` and `cat /root/.aws/config` from inside the container. Below are the outputs of these commands respectively:
```ini
# credentials file
[<aws_account_id>_AdministratorAccess]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
aws_session_token=<SESSION_TOKEN>
```

```ini
# config file
[default]
region=eu-west-2
output=json
```
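A quick way to check that the named profile resolves at all inside the container, independent of Terraform, is to call STS with the AWS CLI that the Dockerfile installs (the profile name here is the redacted placeholder from the credentials file above):

```sh
# prints the account and ARN if the profile's credentials resolve;
# a failure here would implicate the credentials file, not Terraform
aws sts get-caller-identity --profile "<aws_account_id>_AdministratorAccess"
```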
Some more info: if I change my `config` and `credentials` files to the below:
```ini
# credentials file
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
aws_session_token=<SESSION_TOKEN>
```

```ini
# config file
[default]
region=eu-west-2
output=json
```
And the `provider.tf` file to:

```hcl
provider "aws" {
  version = "~> 2.24.0"
  region  = "${var.region}"
}
```
It works. However, as soon as I change them to:

```ini
# credentials file
[test_user]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
aws_session_token=<SESSION_TOKEN>
```

```ini
# config file
[profile test_user]
region=eu-west-2
output=json
```

and `provider.tf` to:

```hcl
provider "aws" {
  version = "~> 2.24.0"
  region  = "${var.region}"
  profile = "test_user"
}
```

it doesn't.
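Worth noting when reproducing this: the S3 backend authenticates independently of the `provider "aws"` block, so a `profile` set only on the provider never reaches the backend. The backend does accept its own `profile` and `shared_credentials_file` arguments, so one sketch of a configuration that should let the backend use the renamed profile (untested against this exact setup) is:

```hcl
terraform {
  backend "s3" {
    region         = "eu-west-2"
    dynamodb_table = "terraform-lock"
    key            = "my-path/terraform.tfstate"
    encrypt        = true

    # the backend does not read credentials from the provider block,
    # so the profile has to be configured here (or via the environment)
    profile                 = "test_user"
    shared_credentials_file = "/root/.aws/credentials"
  }
}
```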
I had the same issue: I renamed my profile in the `.aws/credentials` file from "default" to something else, then came back to Terraform a few days later. I got the "no valid credentials" error referenced above when using the S3 backend, despite updating the profile under the S3 backend config to match. Only after I found this post and copied the existing profile back as "default" did `terraform init` complete successfully.

Terraform v0.12.19
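As a possible alternative to duplicating credentials under `default`, the underlying AWS SDK also honours the `AWS_PROFILE` environment variable, which both the provider and the S3 backend should pick up (I have not verified this against these exact Terraform versions):

```sh
# point both the provider and the S3 backend at the renamed profile
export AWS_PROFILE=test_user
terraform init -backend-config=shared.backend
```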
Hi folks 👋 Version 3.0 of the Terraform AWS Provider will include a few authentication changes that should help in this case. Similar enhancements and fixes were applied to the Terraform S3 backend (part of Terraform CLI) in version 0.13.0-beta2.

The Terraform AWS Provider major version update will release in the next two weeks or so. Please follow the v3.0.0 milestone to track the progress of that release. If you are still having trouble after updating once it's released, please file a new issue. Thanks!
This has been released in version 3.0.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!