Terraform v0.11.7
AWS provider v1.25
I have a server that is set up to run in a production AWS account with an IAM role attached. I then use the AWS config (INI) file to set up a profile for the production account, and also a profile for the non-production account, which has staging resources in it. There is a trust relationship between the role attached to the instance and the role in the non-production account. With the AWS CLI this works as expected.
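For reference, the trust relationship is along these lines (a rough sketch only - the production account ID 111111111111 and the role name production-instance-role are placeholders, not my real values):
# Sketch: the role in the non-production account that Terraform assumes.
# 111111111111 stands in for the production account ID, and
# production-instance-role stands in for the role attached to the instance.
resource "aws_iam_role" "terraform" {
  name = "Terraform"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/production-instance-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}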
~/.aws/config
[profile production]
credential_source = Ec2InstanceMetadata
output = json
region = eu-west-1
[profile non-prod]
role_arn = arn:aws:iam::000000000000:role/Terraform
credential_source = Ec2InstanceMetadata
output = json
region = eu-west-1
In Terraform I then point the provider at the non-prod profile; however, I get access denied to resources.
provider "aws" {
version = "~> 1.25"
region = "eu-west-1"
profile = "non-prod"
}
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_s3_bucket.staging: Refreshing state... (ID: ***)
Error: Error refreshing state: 1 error(s) occurred:
* aws_s3_bucket.staging: 1 error(s) occurred:
* aws_s3_bucket.staging: aws_s3_bucket.staging: error reading S3 bucket "***": Forbidden: Forbidden
I expect the non-prod profile to authenticate by assuming the non-production account role, using the role attached to the instance. It appears to just authenticate as the role attached to the instance instead, which cannot access resources outside of its own account.
I have also tried the assume_role {...} provider config (sketched below); however, I get "No valid credential sources found for AWS Provider."
Explicit profiles are much preferred in any case, as they can be configured independently: restricted key/secret pairs on an employee's machine, and the role attached to the instance in production.
Quick question: does it work if you set the AWS_SDK_LOAD_CONFIG=1 environment variable?
Ah no, it doesn't appear to.
$ AWS_SDK_LOAD_CONFIG=1 terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_s3_bucket.staging: Refreshing state... (ID: ***)
Error: Error refreshing state: 1 error(s) occurred:
* aws_s3_bucket.staging: 1 error(s) occurred:
* aws_s3_bucket.staging: aws_s3_bucket.staging: error reading S3 bucket "***": Forbidden: Forbidden
$ aws --profile non-prod s3api list-objects --bucket ***
{
"Contents": [
{
"LastModified": "2018-06-28T18:08:02.000Z",
"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
"StorageClass": "STANDARD",
"Key": "testfile",
...
}
]
}
It might be related to https://github.com/aws/aws-sdk-go/pull/2005 - it looks like the Go SDK doesn't yet support credential_source.
Using AWS_SDK_LOAD_CONFIG=1 fixed a similar issue for me where I was using roles to switch from a master account. Obviously not related to this issue, but I thought I'd leave a comment here for future seekers.
https://github.com/aws/aws-sdk-go/pull/2201 just got merged, which adds support for credential_source.
Is there a plan to fix it? aws/aws-sdk-go#2201 was merged recently.
I'm having the same problem. Is there an alternative?
Looks like this is not lined up for the next release: https://github.com/terraform-providers/terraform-provider-aws/commit/6f2ae992e94bb4b8d07bbc0d402bbfc097f7881f
Support was added in SDK v1.15.54; the SDK bump for AWS provider 1.41.0 is v1.15.53.
This should be supported in version 1.41.0 since the aws-sdk-go dependency was updated to v1.15.55 in #6164. See also: https://github.com/terraform-providers/terraform-provider-aws/blob/v1.41.0/vendor/vendor.json#L177-L184
Thanks @bflad, I missed that one.
Hmm, it seems I can't get it to work... AWS provider v1.42.0, and AWS_SDK_LOAD_CONFIG is set to 1.
My ~/.aws/config:
[profile acme]
credential_source = Ec2InstanceMetadata
[profile pixelart-internal]
role_arn = arn:aws:iam::1234:role/service-role/acme-dev-gitlab-runner
credential_source = Ec2InstanceMetadata
[profile pixelart-old]
role_arn = arn:aws:iam::5678:role/service-role/acme-dev-gitlab-runner
credential_source = Ec2InstanceMetadata
My provider configs:
provider "aws" {
version = "~> 1.41"
profile = "acme"
region = "${var.aws_region}"
allowed_account_ids = ["${var.aws_account}"]
}
provider "aws" {
alias = "kms"
version = "~> 1.41"
profile = "acme"
region = "eu-central-1"
allowed_account_ids = ["${var.aws_account}"]
}
provider "aws" {
alias = "ci-cd"
version = "~> 1.41"
profile = "pixelart-internal"
region = "eu-central-1"
allowed_account_ids = ["1234"]
}
provider "aws" {
alias = "jenkins-old"
version = "~> 1.41"
profile = "pixelart-old"
region = "eu-central-1"
allowed_account_ids = ["5678"]
}
Still I get
* provider.aws.ci-cd: Account ID not allowed (251396230315)
* provider.aws.jenkins-old: Account ID not allowed (251396230315)
Is there anything further I could provide?
@GroovyCarrot I'm having this same problem. Have you found a work-around?
I was able to get assume role to work by setting skip_metadata_api_check = true in the provider. This seems like the opposite of how this should work, though.
@kipkoan Can you explain what you did along with setting skip_metadata_api_check = true?
I set up my ~/.aws/config file with a profile; the whole file looks like this:
[profile dev]
role_arn = arn:aws:iam::12345678901:role/dev-account-role
credential_source = Ec2InstanceMetadata
region = us-west-2
The role and the S3 bucket with the state file do not live in the same account as the EC2 instance. The instance has IAM permission to assume the role arn:aws:iam::12345678901:role/dev-account-role, and dev-account-role has permission to read from the S3 bucket defined in the Terraform backend config.
Then my provider looks like this:
provider "aws" {
region = "us-west-2"
profile = "dev"
skip_metadata_api_check = true
}
I also tried assume_role. And I tried getting access and secret keys for the role using aws sts assume-role (roughly as sketched at the end of this comment) and adding them to the provider.
I tested the instance's permissions by running aws --profile dev s3api list-objects ... and I am able to get the object from the bucket like that. I have s3:* permission on the dev role for now. When I run terraform init with TF_LOG=DEBUG, I see that requests are still being made by the instance profile role and not the assumed role defined in the provider's profile. How did you get Terraform to use the profile you defined?
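The manual key fetch I mentioned was along these lines (terraform-test is just an arbitrary session name):
$ aws sts assume-role \
    --role-arn arn:aws:iam::12345678901:role/dev-account-role \
    --role-session-name terraform-test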
@shanee-spring - The s3 backend does not use the provider block. It uses the terraform { backend { ... } } block. I was only able to get the provider block to work with skip_metadata_api_check = true, not the backend block. I tested with Terraform v0.12a and found that the backend block also works. I think the tracking issue for that is: https://github.com/hashicorp/terraform/issues/18213
@kipkoan thanks. I added role_arn to the terraform { backend "s3" {} } block and it worked (roughly as sketched below). Thanks for pointing that out.
Running Terraform v0.11.11.
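A sketch of the working backend config - the bucket, key, and region here are placeholders for ours:
terraform {
  backend "s3" {
    bucket   = "my-state-bucket"
    key      = "staging/terraform.tfstate"
    region   = "us-west-2"
    role_arn = "arn:aws:iam::12345678901:role/dev-account-role"
  }
}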
@shanee-spring (and future readers of this): the thing that doesn't work until TF v0.12 is using ~/.aws/config to get the role ARN (allowing you to not specify it in the Terraform backend directly).
Easy test case for this with 2 accounts (this is Terraform v0.12.5):
In account 1: Create an EC2 instance and assign an IAM role to that instance.
In account 2: Create a role with a policy that allows account 1 to assume it (here it's called dev-account-role).
In account 1: On the instance, plop the following into ~/.aws/config, where 12345678901 is the account 2 ID:
[profile dev]
role_arn = arn:aws:iam::12345678901:role/dev-account-role
credential_source = Ec2InstanceMetadata
region = us-west-2
Run this .tf on this instance:
# instance profile:
provider "aws" {}

# assumed role from credential_source:
provider "aws" {
  profile = "dev"
  alias   = "assumed_role"
}

data "aws_caller_identity" "instance" {}

data "aws_caller_identity" "assumed_role" {
  provider = "aws.assumed_role"
}

output "instance_profile_role_arn" {
  value = "${data.aws_caller_identity.instance.arn}"
}

output "assumed_role_arn" {
  value = "${data.aws_caller_identity.assumed_role.arn}"
}
Expected:
assumed_role_arn and instance_profile_role_arn are not the same.
Actual:
they are the same.
As stated, skip_metadata_api_check = true fixes it (applied to the test above as sketched below). Also worth mentioning that this fix doesn't seem to work when this same tf file is applied by Atlantis. 😢
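Applied to the test above, that's just:
# Workaround: skip the EC2 metadata API check so the profile's
# credential_source is honored.
provider "aws" {
  profile                 = "dev"
  alias                   = "assumed_role"
  skip_metadata_api_check = true
}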
Hi again folks 👋 You may want to try this with version 2.20.0 of the Terraform AWS Provider -- this AWS Go SDK dependency update is specifically surrounding the support of AWS profiles using both credential_source and role_arn: https://github.com/terraform-providers/terraform-provider-aws/pull/9305 / https://github.com/aws/aws-sdk-go/pull/2674
Still does not work using 2.20 of the AWS provider and Terraform 0.12.5.
$ terraform --version
Terraform v0.12.5
+ provider.aws v2.20.0
~/.aws/config
[profile account1]
role_arn=arn:aws:iam::account1-id:role/foo
credential_source=Ec2InstanceMetadata
main.tf
provider "aws" {
region = "${var.region}"
profile = "account1"
}
env
AWS_SDK_LOAD_CONFIG=1
AWS_PROFILE=account1
Using an assume_role block in the provider, or setting skip_metadata_api_check, worked (the assume_role variant is sketched below). I also tried @sndwch's example, and had the same result.
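The assume_role variant that worked looked roughly like this, reusing the role ARN from my config above:
provider "aws" {
  region = "${var.region}"

  assume_role {
    role_arn = "arn:aws:iam::account1-id:role/foo"
  }
}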
I'm also doing a similar setup, but in an ECS cluster on Fargate. So, I have the following config:
~/.aws/config
[profile mosecurity-production]
region=us-east-1
output=json
role_arn=arn:aws:iam::12345678:role/Deployment
credential_source=EcsContainer
[profile mo-production]
region=us-east-1
output=json
source_profile=mosecurity-production
role_arn=arn:aws:iam::123456:role/Deployment
I recreated the environment on an EC2 instance and it works; the config is the following:
~/.aws/config
[profile mosecurity-production]
region=us-east-1
output=json
role_arn=arn:aws:iam::12345678:role/Deployment
credential_source=Ec2InstanceMetadata
[profile mo-production]
region=us-east-1
output=json
source_profile=mosecurity-production
role_arn=arn:aws:iam::123456:role/Deployment
with the environment variables
AWS_SDK_LOAD_CONFIG=1
AWS_PROFILE=mosecurity-production
and
$ terraform --version
Terraform v0.12.4
+ provider.aws v2.21.1
Setting skip_metadata_api_check IS NOT working (roughly what I tried is sketched below).
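A sketch of what I tried - the region and profile name are taken from my config above:
provider "aws" {
  region                  = "us-east-1"
  profile                 = "mo-production"
  skip_metadata_api_check = true
}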
The root cause of this issue is explained here: https://github.com/hashicorp/aws-sdk-go-base/issues/7
I have a slightly different use case - running TF in an EKS pod that uses an IAM role attached to a Service Account, as described here: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
Long story short, I get a ~/.aws/config file like this:
[profile profile1]
role_arn = arn:aws:iam::xxx:role/pod
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
[profile profile2]
source_profile = profile1
role_arn = arn:aws:iam::xxx:role/some-other-role-allowed-to-be-assumed-from-profile1
Then I export AWS_PROFILE=profile2 just before calling terraform. I have simple TF code to test this:
provider "aws" {
version = "2.34.0"
region = "us-west-2"
}
data "aws_caller_identity" "current" {}
output "aws_caller_identity" {
value = data.aws_caller_identity.current
}
Terraform picks up the EKS node instance profile instead of everything defined in ~/.aws/config. I think I have a slightly better workaround than skip_metadata_api_check: trick the AWS SDK into thinking it's not running in AWS by setting the AWS_METADATA_URL environment variable to some absurd endpoint:
export AWS_METADATA_URL="http://localhost/not/existent/url"
For my particular use case, the AWS metadata IP should be iptabled out anyway, so it's not accessible to EKS pods; I just haven't got there yet. Still, this is a bug in https://github.com/hashicorp/aws-sdk-go-base/ worth fixing - I can imagine there are use cases these workarounds do not apply to, such as using role_arn and an instance profile for different provider instances, or the EKS node instance profile not being intended to be hidden from its pods. In any case, diverging from the official AWS SDK credential chain logic (or the official AWS SDK in general) sounds like bad practice. It can lead to all sorts of unintended behaviour/bugs; AWS systems are pretty complex and rely heavily on conventions and standards like this AWS credential chaining order.
@llibicpep I think your analysis nailed the problem, and was a huge help to me in putting together this proposed fix. It could still use some additional test cases if anyone else has time to pitch in.
https://github.com/hashicorp/aws-sdk-go-base/pull/20
@bflad could you review?
My case: the EC2 instance has a role attached because it's needed for AWS Session Manager, and I use a shared credentials file for the Terraform backend. When running init (which tries to access S3) I got a 403 denied; it turns out the caller identity is the role from the EC2 instance profile, not from the shared credentials. Using skip_metadata_api_check = true did not work, but exporting AWS_METADATA_URL to a non-existent URL works (roughly as below). Thanks @dee-kryvenko
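That is, roughly (reusing the endpoint from the comment above):
$ export AWS_METADATA_URL="http://localhost/not/existent/url"
$ terraform init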
I encountered this issue when writing a metadata-server mock. There my solution was to just not implement instance-id on the metadata API, which has the same effect as setting AWS_METADATA_URL=x. For my prod use case the AWS_METADATA_URL=x approach is also working, so the config profiles are picked up properly.
Hi folks 👋 Version 3.0 of the Terraform AWS Provider will include a few authentication changes that should help in this case, including:
- loading the AWS shared configuration file (~/.aws/config) by default
- support for a custom EC2 metadata endpoint (AWS_METADATA_URL)
This major version update will release in the next two weeks or so. Please follow the v3.0.0 milestone for tracking the progress of that release. If you are still having trouble after updating when it's released, please file a new issue. Thanks!
This has been released in version 3.0.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!