Tested with 0.9.4 through to 0.9.11 and cannot get the role assumption to work.
Authentication to access environment using assumed role.
_none_
Note that the configuration does not include an 'assume_role' block - this is defined at the command line.
Full debug output not provided; it can be provided if this is to be investigated.
The relevant section of authentication within the log is (this is short enough to include without a gist):
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] No assume_role block read from configuration
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Building AWS region structure
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Building AWS auth structure
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] AssumeRoleARN:
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Endpoint Config: &{CredentialsChainVerboseErrors:<nil> Credentials:<nil> Endpoint:<nil> EndpointResolver:<nil> EnforceShouldRetryCheck:<nil> Region:<nil> DisableSSL:<nil> HTTPClient:0xc420ab0f60 LogLevel:<nil> Logger:<nil> MaxRetries:<nil> Retryer:<nil> DisableParamValidation:<nil> DisableComputeChecksums:<nil> S3ForcePathStyle:<nil> S3Disable100Continue:<nil> S3UseAccelerate:<nil> EC2MetadataDisableTimeoutOverride:<nil> UseDualStack:<nil> SleepDelay:<nil> DisableRestProtocolURICleaning:<nil>}
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Our Config: &{AccessKey: SecretKey: CredsFilename: Profile: Token: Region:us-east-1 MaxRetries:25 AssumeRoleARN: AssumeRoleExternalID: AssumeRoleSessionName: AssumeRolePolicy: AllowedAccountIds:[] ForbiddenAccountIds:[] CloudFormationEndpoint: CloudWatchEndpoint: CloudWatchEventsEndpoint: CloudWatchLogsEndpoint: DynamoDBEndpoint: DeviceFarmEndpoint: Ec2Endpoint: ElbEndpoint: IamEndpoint: KinesisEndpoint: KmsEndpoint: RdsEndpoint: S3Endpoint: SnsEndpoint: SqsEndpoint: Insecure:false SkipCredsValidation:false SkipGetEC2Platforms:false SkipRegionValidation:false SkipRequestingAccountId:false SkipMetadataApiCheck:false S3ForcePathStyle:false}
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Credential chain: &{creds:{AccessKeyID: SecretAccessKey: SessionToken: ProviderName:} forceRefresh:true m:{state:0 sema:0} provider:0xc420ab0fc0}
2017/07/18 11:58:06 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:06 [INFO] Attempt to get credentials: {AccessKeyID:ALITTLELAMB SecretAccessKey:ANOTHERLAMB SessionToken: ProviderName:SharedCredentialsProvider}
2017/07/18 11:58:07 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:07 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2017/07/18 11:58:07 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:07 [INFO] AWS Auth provider used: "SharedCredentialsProvider"
2017/07/18 11:58:07 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:07 [INFO] Initializing DeviceFarm SDK connection
2017/07/18 11:58:07 [DEBUG] plugin: terraform: aws-provider (internal) 2017/07/18 11:58:07 [DEBUG] [aws-sdk-go] DEBUG: Request sts/GetCallerIdentity Details:
Additional debug output, printing the contents of the parameters being processed, has been added here in order to try to determine what is going on (otherwise there's not a lot of extra information).
I work in DevOps, and my accounts on AWS are part of our 'DevOps / Build and Test' account. The teams I work with have their own accounts - the backend team use their 'Backend' account, and the R&D team have an 'Experimental' account. The usual way of working is that we have users within our own accounts, but we can assume roles within other accounts depending on our access needs. This way of working means that we can restrict the permissions in different areas across accounts, and not proliferate users within accounts where they are not needed.
It is not appropriate for us to put the assume_role into the terraform files, because they should be generic - applicable to any team's account. Thus, it makes more sense for us to define the profile using the 'AWS_PROFILE'/'AWS_DEFAULT_PROFILE' settings.
I expect that I can create a shared credentials file (~/.aws/credentials) containing (with actual credentials obscured):
[default]
aws_access_key_id=ALITTLELAMB
aws_secret_access_key=ANOTHERLAMB
And a shared configuration file containing:
[profile backend]
role_arn = arn:aws:iam::01234567890123:role/Maintenance
source_profile = default
And the environment variable AWS_PROFILE is set to 'backend'. Also, AWS_DEFAULT_PROFILE is set to 'backend' (as it seems unclear how these are used within the SDK).
It is possible that the AWS_SDK_LOAD_CONFIG variable needs to be set to a 'truthy' value ('1'), so I have tried this with the value set and not set, and neither was effective.
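To make the expectation concrete, here is a minimal Go sketch of how I assume the SDK's session package should behave with the files and variables above (this is not code from the provider):
```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Run with AWS_SDK_LOAD_CONFIG=1 and AWS_PROFILE=backend set, as above.
	// SharedConfigEnable forces loading of ~/.aws/config so that the
	// profile's role_arn/source_profile entries are honoured.
	sess, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The provider name reveals whether an assume-role provider was used,
	// or whether only the static shared credentials were picked up.
	v, err := sess.Config.Credentials.Get()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("credentials provider:", v.ProviderName)
}
```
If the role were assumed as expected, the printed provider name should be the STS assume-role provider rather than SharedCredentialsProvider, which is what the log above shows instead.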
The Terraform behaviour appears to use the default account, not the assumed role.
See above.
See 'Intended usage' above.
During testing there were a few configurations that resulted in crashes, which will be reported independently.
We would like to use this behaviour, but after a day or so of testing with different combinations, adding further debugging to the Go code, and reading the SDK code and documentation, I have not managed to make the assumed roles work in this way (assuming roles using the access/secret/session keys manually generated by aws sts assume-role works, but is a much more awkward mechanism). The implication in the aws-go-sdk documentation is that the use of NewSession would take care of these configuration settings, but I get lost in the chained credentials called by the auth_helpers. Go code, and the internals of Terraform, are not my speciality; although I'm happy to look at it, I've reached the end of what I can investigate, and so... I ask for help on this - even if it's not a 'bug', but just that I'm Doing It All Wrong.
Diffs for the extra debug I had added:
I'm exploring using Terraform over CloudFormation and have run into this on practically day one. I'm keenly interested in using profiles to assume roles. I don't know much about idiomatic Go or the patterns employed in this project, but I'd like to help if I can. I took a glance at the code here on GitHub, and it looks like one simple way to fix this bug would be to set the provider configuration's AssumeRoleARN to the role ARN from the shared credentials file if it is not set explicitly in the provider configuration. Then the rest of the existing code should "just work." However, I can understand if the project maintainers prefer to instead have the implementation try each, rather than mutating the configuration at parse-time. But this, too, looks like it'd be fairly easy, excising a function out of https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/auth_helpers.go#L179-L237 and invoking it with either the AssumeRoleARN (if provided) or the role from the shared credentials file for the provided profile. Can someone confirm this is the right direction? And if so, is anyone already working on fixing it?
The readme doesn't offer much in the way of development guidance here. I'll look at the documentation to figure out how to actually verify my plugin against a real Terraform configuration.
We have multiple AWS accounts for different purposes, which are managed by Terraform. Several developers have permission to execute Terraform, but these permissions might differ per developer. Also, a single developer might have multiple permission levels (e.g. in daily work we don't use permissions that allow us to delete an RDS instance, and only a few devs are allowed to do IAM changes).
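To illustrate the first approach, here is a rough sketch. All names here (the Config stand-in and the profileRoleARN helper) are invented for illustration; this is not the provider's actual code:
```
// Illustrative only: a minimal stand-in for the provider's Config struct.
type Config struct {
	Profile       string
	AssumeRoleARN string
}

// Default AssumeRoleARN from the shared config file when the provider
// configuration leaves it empty, so the existing assume-role code path
// "just works". profileRoleARN is a hypothetical helper that would read
// role_arn from ~/.aws/config for the given profile.
func defaultAssumeRoleFromSharedConfig(c *Config, profileRoleARN func(string) (string, error)) error {
	if c.AssumeRoleARN != "" {
		return nil // explicit provider configuration always wins
	}
	arn, err := profileRoleARN(c.Profile)
	if err != nil {
		return err
	}
	c.AssumeRoleARN = arn
	return nil
}
```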
Until now, we had multiple AWS users (e.g. alice-ro, alice-operator, alice-admin, bob-ro, bob-operator for each account). This also means each AWS user has their own access keys and possibly also a login for the web UI.
We want¹ to change this into a role-based system, where each developer has only a single AWS user in a single account. This user has few permissions of its own, but is used for assuming roles in all other accounts.
This scenario doesn't allow us to hardcode assume_role.role_arn in the provider config, since each developer might need to use a different ARN.
¹ Actually we already did, but are blocked by this issue, now.
I just dug a bit through the code. One would assume we just have to add a provider to the list, but it looks like there is no such built-in provider.
I also took a look into the AWS SDK. It can do it correctly when creating a session:
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

sess, err := session.NewSessionWithOptions(session.Options{
	Config: aws.Config{
		Region: aws.String("my-region"),
	},
	SharedConfigState: session.SharedConfigEnable,
	Profile:           "my-profile",
})
Here are some interesting parts which are called by using this:
So, if I see it correctly, we would need to load the config files manually and create the credentials manually.
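Something like this, perhaps (an untested sketch; the file path and profile name are placeholders, and I'm assuming the go-ini package for the parsing):
```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/go-ini/ini"
)

func main() {
	// Parse role_arn and source_profile from the shared config ourselves.
	cfg, err := ini.Load("/home/me/.aws/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	sec := cfg.Section("profile my-profile")
	roleARN := sec.Key("role_arn").String()
	sourceProfile := sec.Key("source_profile").String()

	// Authenticate with the source profile's static credentials ...
	src := session.Must(session.NewSessionWithOptions(session.Options{
		Profile: sourceProfile,
	}))

	// ... and create the assume-role credentials manually on top of it.
	creds := stscreds.NewCredentials(src, roleARN)
	_ = &aws.Config{Region: aws.String("my-region"), Credentials: creds}
}
```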
FYI: I am working on a fix. I hope it will be ready in the next few days. It got bigger than expected, since it might be necessary to assume a role twice: one time in ~/.aws/config and one time in the Terraform provider config. The first is for authenticating the user and the second for special roles (think of multiple Terraform provider configs). We actually have this use case.
I think this duplicates, at least partially, #186 which also linked to an un-merged PR (that predates the providers split) https://github.com/hashicorp/terraform/pull/11734
However, I see the WIP PR for this issue from @svenwltr and would dearly love to see that merged (and am happy to help if I can). I only mention this in case the previous PR provides a "leg up".
Neat, the same thing implemented twice :-D I also would be fine if the other PR gets merged, but not being able to assume roles via the config is currently a blocker for us.
Also I don't really see why hashicorp/terraform#11734 didn't get responses from the maintainers.
@svenwltr I've ported across the original PR to this repo (it predates the providers moving to their own repos, so would never get merged now). The approach taken there does seem simpler (gets the AWS SDK to do the work, rather than reading and processing the config file in Terraform).
Did you have any thoughts on testing for this?
@mikemoate I also don't see a simple way to test this. You would need to test the whole Client() method, which requires some stub IAM endpoint.
Also, I tested your branch with our Terraform setup and it worked.
Shoot, just tried to use the profile option in my Terraform AWS provider, where that profile in the AWS config file is set up to assume a role based on my default profile. Any way I can help test this one? (I've not attempted to build Terraform from source yet, so pointers welcome.)
Hi folks,
Since the merge of https://github.com/terraform-providers/terraform-provider-aws/pull/1608, Terraform now allows using extended profiles:
provider "aws" {
profile = "<myprofile>"
}
This will be released with 1.3.0 in a few days. Does it fix your issue totally or partially?
It's possible that it will work around the issue raised here (if I've understood the comments on the issue correctly) - but the specific issue I was raising here was for the environment variables setting the assumed role, rather than the configuration in the file.
In a lot of the cases where we use our Terraform modules, they are generic and do not say which role they must be run in. That's something that is selected by the user, and then the module is applied to the role that they have assumed. At present, we have a tool that selects the role by obtaining a session token in an environment variable, and this works around the issue too.
I presume that we can always create a temporary file 'account.tf' that contains the example text as you mention, but then that's a configuration file that wouldn't be committed to source control, so I feel a little uncomfortable in requiring that.
I probably won't have an opportunity to try the change out until later in the month, but even though it doesn't address this issue explicitly, the general ability to do this within the configuration file _will_ address some issues elsewhere in our system where the configuration must always be applied to a given role, so I am looking forward to it.
It's unacceptable to specify a profile in the terraform resource files. Every person is going to name their profiles differently and resources should be portable across environments running in different AWS accounts and thus using different roles.
It is AWS best practice to have multiple accounts and assume role to access each. Terraform should support AWS best practices and support using the AWS_PROFILE environment variable to reference a profile containing a source_profile and role_arn. As is described in the original ticket, this should work:
~/.aws/credentials:
[mysecurityaccount]
aws_access_key_id = key
aws_secret_access_key = secret
~/.aws/config:
[profile subaccount]
output = json
region = us-east-1
role_arn = arn:aws:iam::<account ID>:role/subaccount_role
source_profile = mysecurityaccount
Running terraform:
export AWS_PROFILE=subaccount
terraform apply
If this doesn't work, there is an issue with Terraform. The Go SDK fully supports this workflow.
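If it helps, the SDK side can be checked in a few lines of Go (a sketch against aws-sdk-go v1, using the profile from above):
```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// With AWS_PROFILE=subaccount exported, the same resolution path the
	// CLI uses should yield the subaccount_role identity.
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("caller ARN:", *out.Arn)
}
```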
@et304383 please see #1608
Not sure how that's relevant. The issue persists and that's a closed (merged) item.
@et304383 It's directly relevant, though, as others have pointed out, you will need to wait for the next releases to see its effects.
Tested the above test case as defined by @et304383 with a build off of current master (yet-to-be cut 0.11.2).
Setting the profile value in the provider configuration within the Terraform plan will correctly read the ~/.aws/config and any assumed roles in it.
Exporting AWS_PROFILE (and leaving the profile setting in the provider configuration block empty) works only if the profile exists within the credentials file.
If the exported AWS_PROFILE is set to a profile that is only found in ~/.aws/config (e.g., subaccount in the example above), Terraform still complains that no valid credential sources could be found for the AWS Provider.
@twang-rs Lucky you. I'm using AWS provider 1.6, built TF 0.11.2 but I can't get the S3 provider to work with AWS_PROFILE env vars.
I have the exact same configuration mentioned by et304383 in his comment, and it just does not work.
@sterfield, & rest of people in thread, in case it is not clear, my report above is to claim that the patch mentioned in this thread does not fix the issue of using assumed roles via the AWS_PROFILE environment variable.
Indeed, I have submitted a new PR (#2883) that enables the use case described by @et304383.
@twang-rs Oh ok, I may have misread it. Sorry for the noise. I'll subscribe to your PR and wait for it to be merged then.
Thanks for taking care of this issue !
Duplicate: #362
At this point, wouldn't it make sense to rewrite the entire credentials provider for Terraform to just delegate to the Go SDK?
@morganchristiansson I don't think this is a duplicate of #362, because #362 is about the S3 backend configuration and this is about the AWS provider configuration.
What can we do to push this along? This PR would solve several usability issues we have with Terraform. Chiefly, it allows us to push credentials out of plans and instead rely on the environment to provide credentials to the plan. It enables CI pipelines that are designed to push code to "this" environment to treat plans as generic IaC resources, without the need to preprocess plans using some sort of template processing. Furthermore, it helps us push credentials out of repos and defer credential management to an external system (our CI/CD pipelines in our case). Finally, it follows the principle of least surprise by using credentials that other AWS tools support.
I've also just hit problems getting tf to assume a role when running.
The only way it seems to work is with an assume_role in the provider block. I can't get a profile to work via AWS_PROFILE=foo at all.
Would be really great to get this fixed.
Running into something that seems similar reading through this thread. I want to use the profile setting on the backend to name a profile that uses role_arn to specify access to the account that contains the S3 bucket and DynamoDB table. I have a backend.tf file that looks like:
terraform {
  backend "s3" {
    bucket         = "tf-state"
    key            = "staging/data"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "tf-state"
    profile        = "something"
  }
}
and I have AWS cli config like so:
[profile something]
role_arn = arn:aws:iam::457257261972:role/terraform_backend
source_profile = default
region = us-west-2
When I run terraform init it seems to work up to a point; I get:
robs-MBP:terraform rob$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: 91D0A03F8D0E62B3, host id: qhWoaXOaqFJAx1I+/dKFcaA6YaZH+3eLEbRFMF20g2FW6FnTrXTtwob6wej/qhgqbdURUZSy7ik=
Terraform version is 0.11.5. Should I file another ticket? This one and the others all seem tied to the provider block, not the backend. I am not sure why this wouldn't work, as it is standard AWS configuration and works with the CLI and other tooling.
I think this should be closed so we can follow fresh reports.
@robottaway I have been using the exact setup you have described for the last year without issue. The only difference is that I don't use the default profile, because I think it's super dangerous to default to _anything_. If you have multiple accounts, I recommend you do the same; that way you hit an error when something tries to use an unnamed profile, instead of getting unexpected results.
@cornfeedhobo:
That would be quite infuriating as terraform so obviously does not behave according to the standards that AWS defines for their credential lookup chain.
I get that terraform already fully supports switch role accounts if you define it in the provider and/or the backend stanza.
But there are use cases in which it would be preferred to simply rely on the default, and well understood behavior of the AWS SDK to use a credential chain to acquire temporary switch role credentials.
The pull request is very small, easy to review, and has minimal impact on existing behavior. Our organization has been waiting on this PR for quite some time so that we can stop maintaining some ugly template code that retrieves credentials and creates provider.tf and backend.tf all over our plans.
But there are use cases in which it would be preferred to simply rely on the default, and well understood behavior of the AWS SDK to use a credential chain to acquire temporary switch role credentials.
^^ This. A million times, this.
@twang-rs no, you did not understand me or wires were crossed. To be clear, here is a boiled down version of my setup:
~/.aws/credentials
[main]
aws_access_key_id = awesomekey
aws_secret_access_key = awesomekey
~/.aws/config
[profile main]
output = json
region = us-east-1
[profile production]
output = json
region = us-east-1
role_arn = arn:aws:iam:123456789:role/MyAwesomeRole
source_profile = main
main.tf
provider "aws" {
profile = "production"
}
Using profiles and roles specified in ~/.aws/credentials and ~/.aws/config, I am able to assume roles without specifying so in Terraform.
This follows the sdk specifications exactly as the aws cli does.
Update: you also may need AWS_SDK_LOAD_CONFIG to use AWS_PROFILE, but I'm not sure.
@cornfeedhobo You're specifying the profile in the provider.
It should work with AWS_PROFILE=production set in the environment. It doesn't.
Right, and there is an open PR https://github.com/terraform-providers/terraform-provider-aws/pull/2883 that resolves this.
On that basis, I see no reason to close the issue (it describes a valid problem, and there is a pending solution).
Fair 'nuf.
@robinbowes thanks for the correction and clarification!
Thanks for taking the time to review the use case and the pull request.
As I think about potential issues, I realize that it is definitely worth reviewing the negative use-case where users may be relying on the lack of provider configuration to abort the plan.
This change will allow a plan to retrieve credentials, with potentially disastrous results, if credentials can be found in the environment or the EC2 metadata (when running within AWS) and AWS_SDK_LOAD_CONFIG is set.
Given that the AWS_SDK_LOAD_CONFIG variable must be explicitly set (whereas other credentials providers may be implicitly available), I feel that this is an acceptable change to behavior.
Definitely worth a couple of eyes, however, to review the potential ramifications.
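For context, the gating lives in the SDK's session options (my understanding of aws-sdk-go v1; a minimal sketch):
```
// SharedConfigStateFromEnv is the session package's default: ~/.aws/config
// (and with it role_arn/source_profile) is only consulted when the
// AWS_SDK_LOAD_CONFIG environment variable is set to a truthy value, so
// the new credential source cannot appear without an explicit opt-in.
sess := session.Must(session.NewSessionWithOptions(session.Options{
	SharedConfigState: session.SharedConfigStateFromEnv,
}))
_ = sess
```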
@cornfeedhobo thanks m8! I took your words as a challenge and started by granting my role very admin-ish permissions, and it did in fact work. I was then able to determine what the issues were with my IAM bits and carve it down to sensible levels. Much appreciated 🥇
PR opened against terraform so that backends can take advantage of this too.
Can this be added to the docs please? https://www.terraform.io/docs/providers/aws/ has no info at all about AWS_SDK_LOAD_CONFIG=1 needing to be set. This is non-default behavior compared to the way the CLI and API work.
I wasted 20+ minutes on this because I'd never heard of AWS_SDK_LOAD_CONFIG before this big long GitHub issue.
Technically, this is a (mis)feature of the AWS Go SDK and is the default behavior for all Go-based applications (kops, for example, also needs this environment variable). Documentation can be found in AWS's docs:
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/sessions.html
https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
I agree that this could be made more obvious in the Terraform docs.
For anyone interested, I've opened a couple of PRs documenting the AWS_SDK_LOAD_CONFIG requirement:
https://github.com/hashicorp/terraform/pull/21122
https://github.com/terraform-providers/terraform-provider-aws/pull/8451
hello all,
I am having trouble chaining profiles using the AWS provider assume_role:
Use case: an organization with several accounts. One of them is used by users to authenticate (Auth); other accounts are used to define several other platforms. Our users are supposed to identify themselves with keys in the Auth account and then assume a role (PowerUserRole) in each individual account.
For some resources, users assuming the PowerUserRole need to create resources in the Root account. For that, they are allowed to assume a RootResourcesAccessRole which allows them limited access to the Root account.
This scheme (credentials, roles and policies) has been tested using awscli and works fine.
So, I have the following credentials defined in ~/.aws/credentials:
[AuthUser]
aws_access_key_id=xxxxxxxx
aws_secret_access_key=yyyyyy
The following profiles are defined:
[profile TestPlatform]
region = eu-west-1
source_profile = AuthUser
role_arn = arn:aws:iam::<TestAccountId>:role/PowerUsersRole
[profile TestToRoot]
region = eu-west-1
source_profile = TestPlatform
role_arn = arn:aws:iam::<RootAccountId>:role/RootResourcesAccessRole
As you can see, the TestToRoot profile chains assume roles (and again, works perfectly using the AWS CLI).
The idea is to be able to use AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=TestPlatform terraform apply and use two providers defined like this:
provider "aws" {
version = "~> 2"
region = "${var.aws_region}"
}
provider "aws" {
alias = "root"
version = "~> 2"
region = "${var.aws_region}"
assume_role {
session_name = "terraform-root"
role_arn = "arn:aws:iam::<root account id>:role/RootResourcesAccessRole"
}
}
The aws and aws.root providers are supposed to authenticate using AWS_PROFILE; aws.root should allow updating resources in the Root account.
(output of TF_LOG=info)
4: 2019/07/02 15:38:54 [INFO] Building AWS auth structure
2019-07-02T15:38:54.971+0200 [DEBUG] plugin.terraform-provider-aws_v2.17.0_x4: 2019/07/02 15:38:54 [INFO] Setting AWS metadata API timeout to 100ms
2019-07-02T15:38:54.971+0200 [DEBUG] plugin.terraform-provider-aws_v2.17.0_x4: 2019/07/02 15:38:54 [INFO] Setting AWS metadata API timeout to 100ms
2019/07/02 15:38:54 [ERROR] root: eval: *terraform.EvalConfigProvider, err: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
2019/07/02 15:38:54 [ERROR] root: eval: *terraform.EvalSequence, err: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
2019/07/02 15:38:54 [ERROR] root: eval: *terraform.EvalOpFilter, err: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
2019/07/02 15:38:54 [ERROR] root: eval: *terraform.EvalSequence, err: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
...
Error: Error refreshing state: 1 error occurred:
* provider.aws.root: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Defining aws.root as follows:
```
provider "aws" {
  alias   = "root"
  version = "~> 2"
  region  = "${var.aws_region}"
  profile = "TestToRoot"
}
```
works fine, but again, as many have mentioned already, this ties the templates to the user definition, and also forces our users to define as many "XXXToRoot" profiles as there are platforms...
I'm no Go developer, so I couldn't try to find a solution... so if you have an idea...
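If it's useful to whoever picks this up, I believe the chain would look roughly like this in Go (an untested sketch against aws-sdk-go v1; the ARNs are the placeholders from my config above):
```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	region := aws.String("eu-west-1")

	// Step 1: static keys from the AuthUser profile.
	auth := session.Must(session.NewSessionWithOptions(session.Options{
		Profile: "AuthUser",
		Config:  aws.Config{Region: region},
	}))

	// Step 2: assume the platform role (what the TestPlatform profile does).
	platformCreds := stscreds.NewCredentials(auth,
		"arn:aws:iam::<TestAccountId>:role/PowerUsersRole")
	platform := session.Must(session.NewSession(&aws.Config{
		Region:      region,
		Credentials: platformCreds,
	}))

	// Step 3: chain a second assume-role on top of the first, which is
	// what the provider's assume_role block would have to do here.
	rootCreds := stscreds.NewCredentials(platform,
		"arn:aws:iam::<RootAccountId>:role/RootResourcesAccessRole")
	if _, err := rootCreds.Get(); err != nil {
		log.Fatal(err)
	}
}
```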
This is still an issue for me. I'm trying to run Terraform from an EC2 instance with an IAM Role attached to it, using an AWS shared credentials file containing profiles that assume another role in a different account.
Simplified example:
(n.b. In this Terraform state I'm only using the Kubernetes and Helm providers, but am using an S3 backend.)
My EC2 instance running Terraform has the following role attached:
arn:aws:iam::11111111111:role/terraform-instance
$HOME/.aws/credentials on the instance:
[dev]
role_arn = arn:aws:iam::22222222222:role/terraform-ci
credential_source = Ec2InstanceMetadata
region = eu-west-1
[prod]
role_arn = arn:aws:iam::33333333333:role/terraform-ci
credential_source = Ec2InstanceMetadata
region = eu-west-1
$HOME/terraform/dev/backend.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    key            = "dev-terraform-state"
    region         = "eu-west-1"
    profile        = "dev"
    encrypt        = "true"
    dynamodb_table = "terraform_statelock_dev"
  }
}
When I run a terraform init with TF_LOG=debug set, I can see that it uses the EC2 instance role to try to connect to the S3 backend - it fails as that role does not have access.
If, however, I replace the role_arn and credential_source lines in the AWS profiles with actual credentials (i.e. aws_access_key_id and aws_secret_access_key), then Terraform uses the correct profile.
I have tried explicitly setting the AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE environment variables, and also tried setting AWS_SDK_LOAD_CONFIG=1, but to no avail.
Finally, running aws sts get-caller-identity --profile dev correctly returns the terraform-ci role ARN.
As a workaround, I can explicitly tell the backend to assume the role:
$HOME/terraform/dev/backend.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    key            = "dev-terraform-state"
    region         = "eu-west-1"
    role_arn       = "arn:aws:iam::22222222222:role/terraform-ci"
    encrypt        = "true"
    dynamodb_table = "terraform_statelock_dev"
  }
}
However, I would much prefer to use profile, as that profile can then contain IAM credentials for those running Terraform locally, and the role ARN for non-human access.
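For what it's worth, this is the resolution I'm expecting the profile to perform (a sketch against aws-sdk-go v1, using the dev role ARN from above):
```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	base := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("eu-west-1"),
	}))

	// credential_source = Ec2InstanceMetadata: start from the instance
	// role's credentials ...
	instanceCreds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{
		Client: ec2metadata.New(base),
	})
	instanceSess := session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("eu-west-1"),
		Credentials: instanceCreds,
	}))

	// ... then assume the profile's role_arn on top of them.
	devCreds := stscreds.NewCredentials(instanceSess,
		"arn:aws:iam::22222222222:role/terraform-ci")
	if _, err := devCreds.Get(); err != nil {
		log.Fatal(err)
	}
}
```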
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!