Version: 1.4.3 (and builds from current master branch)
Host: macOS 10.13.7
Release 1.4.3 and the master branch fail to authenticate with AWS using environment variables, presumably as a result of the AWS SDK upgrade in #7967. (I suspect this may in fact be an AWS SDK bug rather than a Packer bug, but I am raising it here as it is Packer that is impacted.)
The following code snippets are from the same terminal session, having logged in to AWS and set the standard `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.
Version 1.4.2 (expected output):
```
Joes-MacBook-Pro:packer joe$ packer version
Packer v1.4.2
Joes-MacBook-Pro:packer joe$ packer build base.json
amazon-ebs output will be in this color.

==> amazon-ebs: Prevalidating AMI Name: test
    amazon-ebs: Found Image ID: ami-026c8acd92718196b
```
Version 1.4.3 (actual output):
```
Joes-MacBook-Pro:packer joe$ ./packer version
Packer v1.4.3
Joes-MacBook-Pro:packer joe$ ./packer build base.json
amazon-ebs output will be in this color.
Build 'amazon-ebs' errored: error validating regions: UnauthorizedOperation: You are not authorized to perform this operation.
    status code: 403, request id: 3d27f4f1-0151-4d1c-b8af-73c7dae5adfd

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: error validating regions: UnauthorizedOperation: You are not authorized to perform this operation.
    status code: 403, request id: 3d27f4f1-0151-4d1c-b8af-73c7dae5adfd
```
Hello @joe-bowman, thanks for reporting. Weird, this worked for me on master, v1.4.2 & v1.4.3.
I have `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` set.
Is there anything special about that token? Have you tried re-creating one?
Short-lived temporary credentials require a session token. Nothing special about my particular use case though, just standard STS-generated credentials. They work with 1.4.2 but not 1.4.3.
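For context, these are ordinary session credentials of the kind STS hands out, e.g. (a sketch; the duration and output format are just examples):

```bash
# Standard STS session credentials: all three fields must be exported together.
aws sts get-session-token --duration-seconds 3600 --output json
# => Credentials.AccessKeyId, Credentials.SecretAccessKey, Credentials.SessionToken
```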
@joe-bowman not sure if this is applicable for you, but on my side I had this kind of issue because of two things: the second seems to be 'somewhat new', due to a change either in the AWS Go SDK or in the Packer code when checking that the AMI you are basing your work on (`source_ami_filter`) is available in the proper region.
Please note also that the 1.4.2 tag does not seem to exist (anymore?) in the packer repository.
Can confirm the issue. Packer 1.4.2 works as expected using env vars for credentials; 1.4.3 results in a different error message than above.
```
$ packer build .\ubuntu-xenial.json
amazon-ebs output will be in this color.
Build 'amazon-ebs' errored: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default

==> Builds finished but no artifacts were created.
```
Running under Windows 10 (1903).
I'm wondering/hoping this is fixed upstream by https://github.com/aws/aws-sdk-go/pull/2731. I've made PR #8131 updating the SDK. Patched binaries can be found here: https://circleci.com/gh/hashicorp/packer/13335#artifacts/containers/0
I would really appreciate it if y'all experiencing this problem could test it out :)
@joe-bowman and @jimcroft can either of you make some time this week to test the linked build?
I still have this problem with 1.4.3-1.4.5, but not with 1.4.2.
@kika I can't reproduce; can you please share more information about how you are setting credentials?
@SwampDragons the usual environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_PROFILE`, `AWS_SESSION_TOKEN`. My default credentials are only good for choosing the MFA device and a few related operations. I wrote a script that I `source`; it asks for the MFA code and sets the environment variables for the temporary credentials. Those credentials allow me to do anything, and they work fine with the `aws` CLI, for example.
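For illustration, such a sourceable helper could look roughly like this (a sketch, not the actual script; the MFA serial, profile name, and use of `jq` are assumptions):

```bash
#!/usr/bin/env bash
# mfa-login.sh -- source this file; the serial number and profile are placeholders.
read -rp "MFA code: " code

creds=$(aws sts get-session-token \
  --profile default \
  --serial-number "arn:aws:iam::123456789012:mfa/example-user" \
  --token-code "$code" \
  --output json)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')
```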
And you are not setting `region`, `token`, `access_key`, or `secret_key` at all inside your template? What do your logs from a failed build look like?
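In case it helps when gathering those logs, Packer's standard logging variables can capture a verbose trace of the failing build (the template name here is just an example):

```bash
# Write Packer's debug log to a file for the failing build.
PACKER_LOG=1 PACKER_LOG_PATH=packer-debug.log packer build base.json
```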
@SwampDragons I do have `access_key`, `secret_key`, `region`, and `profile` in the template. They are just pulled in from the environment:
```
builders:
  - type: amazon-ebs
    access_key: "{{user `aws_access_key`}}"
    secret_key: "{{user `aws_secret_key`}}"
    region: "us-east-1"
    source_ami: "{{user `ami`}}"
    profile: "{{user `aws_profile`}}"
```
```
Build 'amazon-ebs' errored: error validating regions: UnauthorizedOperation: You are not authorized to perform this operation.
    status code: 403, request id: 5330a0ea-0163-412a-b5fd-6fdb746d2cc3
```
Based on that error message I wonder if this is a permissions/IAM issue. Does your user have permission to list regions?
Is this still an issue for people?
As I mentioned earlier, the message says `error validating regions`, therefore you are missing a permission in IAM.
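One quick way to check (a sketch; substitute your own user or role ARN) is to have IAM evaluate the relevant actions for your identity:

```bash
# Ask IAM whether the given principal is allowed the EC2 calls Packer makes here.
aws iam simulate-principal-policy \
  --policy-source-arn "arn:aws:iam::123456789012:user/packer-builder" \
  --action-names ec2:DescribeRegions ec2:DescribeImages
```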
This still happens to me with packer version 1.5.4:
```
packer build --debug packer.json
Debug mode enabled. Builds will not be parallelized.
AWS AMI Builder: output will be in this color.
Build 'AWS AMI Builder' errored: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default

==> Some builds didn't complete successfully and had errors:
--> AWS AMI Builder: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default

==> Builds finished but no artifacts were created.
```
It should actually use environment variables to retrieve AWS credentials, but it looks like it ignores them. The following AWS environment variables are set:
```
env | grep AWS | awk -F= '{print $1;}'
AWS_ACCESS_KEY
AWS_ACCESS_KEY_ID
AWS_CREDENTIAL_EXPIRATION
AWS_DEFAULT_REGION
AWS_IAM_REGION
AWS_REGION
AWS_SECRET_ACCESS_KEY
AWS_SECRET_KEY
AWS_SECURITY_TOKEN
AWS_SESSION_TOKEN
```
However, I found a workaround: additionally setting `AWS_PROFILE` to an invalid value (a profile that does not exist in `~/.aws/config`) makes it work:

```
AWS_PROFILE=dummy packer build --debug packer.json
```
@alex-berger it seems like you are overloading your environment with too many variables, which could potentially lead to issues/confusion if, for instance, `AWS_ACCESS_KEY_ID != AWS_ACCESS_KEY` and/or `AWS_SECRET_ACCESS_KEY != AWS_SECRET_KEY`. And from the code here, some have precedence over others, so there is no need to overload anyway.
`AWS_SECURITY_TOKEN` does not seem to be used anywhere.
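As a side note, `aws configure list` shows which source the CLI resolves each value from (environment, shared credentials file, ...), which can help spot this kind of overlap:

```bash
# Shows the resolved access key, secret key and region, and where each one came from.
aws configure list
```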
However, using your current env and all the variables within it, are you able to run the following AWS CLI commands successfully?
```
aws sts get-caller-identity
aws ec2 describe-regions
# Take owners and filters values from your packer.json file source_ami_filter
aws ec2 describe-images --owners 595879546273 --filters Name=virtualization-type,Values=hvm,Name=root-device-type,Values=ebs,Name=name,Values=CoreOS-stable-2345.3.0-hvm
```
Could you also send back the output of the following commands please:

```
env | grep AWS | awk -F= '/_TOKEN/||/_KEY/{print $1;next}{print}' | sort
# and maybe the default profile is used (see further down why it matters)
env | grep default
```
What platform are you running on (Linux, Windows, ...), and which version?
From the error message you get, it seems like the `[default]` section of your `~/.aws/config` file is sourcing another profile which in turn is missing a `role_arn`. This means that your `~/.aws/config` file most likely looks like this:
```
[default]
source_profile = XXXX
...
# and further down
[XXXX]
...
# the following kind of entry missing from the XXXX profile
#role_arn = arn:aws:iam::<SOME-OTHER_ID>:role/OrganizationAccountAccessRole
```
This explains why it works when you specify a dummy `AWS_PROFILE`.
Consider fixing/modifying your `~/.aws/config` file to avoid this.
@obourdon the environment variables are fine; they are set up this way because we call a lot of different tools built on different AWS SDKs (Golang, Java, Python, Rust, JavaScript, ...), some of which (for historical reasons) use slightly different environment variable names. These environment variables are the common set that makes sure all tools (SDKs) work from a single shell session. Furthermore, those environment variables are always consistent, as they are all set by our session management tool. We have been using this setup in production for years now and it works seamlessly with dozens of different tools and AWS SDKs.
My observation is that, for whatever reason, Packer is trying to use the AWS config files as a credentials source although it should actually use the environment variables. So it looks like Packer is not using the "default credentials provider chain". Unfortunately, I lack the time to investigate this issue further, and as I have a working workaround I am fine for now. Nevertheless, it would be great if this could be fixed in the long run; that would save users a lot of time googling and studying code to figure out how to work around this bug.
⮕ Packer should completely ignore the AWS config files if the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and (optionally) `AWS_SESSION_TOKEN` are set.
I found the following workarounds (see also the sketch after this list):
1. Setting `AWS_PROFILE` to a dummy profile that does not exist, neither in `~/.aws/config` nor in `~/.aws/credentials`. For example `AWS_PROFILE=$(uuidgen)`.
2. Removing the `~/.aws/config` file also resolves the problem.
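A related variant (untested here, just a sketch): the SDKs also honour `AWS_CONFIG_FILE` and `AWS_SHARED_CREDENTIALS_FILE`, so pointing them at an empty file forces env-var-only resolution:

```bash
# Point the SDK at empty config/credentials files so only the environment variables are considered.
AWS_CONFIG_FILE=/dev/null AWS_SHARED_CREDENTIALS_FILE=/dev/null \
  packer build --debug packer.json
```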
As already explained above, I can assure you that the following constraints hold true:
```
$AWS_SECURITY_TOKEN == $AWS_SESSION_TOKEN
$AWS_ACCESS_KEY == $AWS_ACCESS_KEY_ID
$AWS_SECRET_KEY == $AWS_SECRET_ACCESS_KEY
$AWS_REGION == $AWS_DEFAULT_REGION
```
My `~/.aws/config` file looks like this:
```
[default]
region = eu-central-1
source_profile = default
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/USER
```
And the `~/.aws/credentials` file looks like this:
```
[default]
aws_access_key_id=...
aws_secret_access_key=...
```
Note that the `aws` CLI works fine with a `source_profile` entry without a role arn, when using `mfa_serial`.
@alex-berger many thanks for all your explanations.
As stated above, the concern in your config file is:
```
[default]
region = eu-central-1
source_profile = default # <====== THIS LINE
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/USER
```
meaning the `default` profile references itself, which seems unnecessary (unless I am mistaken, of course).
Commenting out/removing this line should leave your configuration sound and effectively identical, but should get rid of the issue you are experiencing (and without adding a role arn).
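For example (a sketch that keeps a `.bak` backup; adjust the pattern if your file differs):

```bash
# Comment out the self-referencing source_profile line.
sed -i.bak 's/^source_profile *= *default/# &/' ~/.aws/config
```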
Also, this does not look like a Packer issue to me, but rather an AWS SDK one; Packer only implements/behaves as the SDK does.
@obourdon yes, removing that `source_profile` entry also resolves the problem. However, I am wondering whether other tools might now be broken (I will figure that out). I am also wondering why this entry, which has been there for years, has become a problem now; we use a lot of tools built on the AWS SDK for Go and so far none of them has complained about that entry.
Anyway, thanks for the hint.
Wow, good find @obourdon. The other tools in your workflow may be on different versions of the SDK than Packer, which could explain why they are/are not working. I opened another issue against the SDK a while back that was another flavor of "this slightly wrong profile configuration used to work but now doesn't, even though the CLI and earlier versions of the SDK accept it" -- https://github.com/aws/aws-sdk-go/issues/2895 so there would be some precedent. Maybe people have been tightening requirements in the SDK for profile correctness.
@alex-berger it seems like @SwampDragons found something which could relate to your issue; furthermore, it correlates with the original date of the issue we are currently discussing as well as the time the AWS SDK issue was logged.
Many thanks for pointing this out @SwampDragons.
I was in the middle of digging deeper into Packer debugging (which is not that easy, as stated here) to confirm something like this.
Reading the first entry in the issue mentioned above, it seems like the problem occurs with SDK versions >= 1.21.0, which correlates with the fact that in Packer 1.4.3 the AWS SDK was updated from 1.16.24 to 1.22.2.
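That can be cross-checked against the SDK version pinned by each Packer tag, e.g. (assuming a local checkout of the packer repository and that the tag carries a go.mod):

```bash
# Print the pinned aws-sdk-go version for a given Packer release tag.
git -C packer show v1.4.3:go.mod | grep aws-sdk-go
```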