aws-cli: S3 copy fails with "HeadObject operation: Forbidden" when copying a file from one bucket to another in the same region

Created on 6 Mar 2019 · 3 comments · Source: aws/aws-cli

Hi,

I am having a strange issue with copying a file from one bucket to another, both in the same region. The error is this:

An error occurred (403) when calling the HeadObject operation: Forbidden

If I run the command on my local machine it works fine. So I updated the Docker image to the latest CLI (aws-cli/1.16.118 Python/2.7.15 Linux/3.10.0-229.1.2.el7.x86_64 botocore/1.12.108), did the same on my local machine, and verified that I could still copy between buckets. It works locally but not on this image.
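For reference, a minimal sketch of the kind of command involved (the source bucket and key are taken from the debug output below; the destination bucket is a placeholder, since the actual command was not posted):

    # Hypothetical reconstruction of the failing copy; bucket2 is a placeholder
    # for the (unposted) destination bucket.
    aws s3 cp s3://bucket1/folder1/config.js s3://bucket2/folder1/config.js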

Looking at the aws-cli debug output, I can see that the problem is that there are two AWSPreparedRequests: one has the correct region and the other doesn't:

The 1st is signed with the correct region:

2019-03-06 15:28:30,178 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://bucket1.s3.eu-west-1.amazonaws.com/folder1/config.js, headers={'X-Amz-Content-SHA256': 'REDACTED', 'Authorization': 'AWS4-HMAC-SHA256 Credential=REDACTED/eu-west-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=REDACTED', 'X-Amz-Date': '20190306T152830Z', 'User-Agent': 'aws-cli/1.16.118

The 2nd has no region in the hostname and is signed for us-east-1:

2019-03-06 15:28:30,374 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://bucket1.s3.amazonaws.com/folder1/config.js, headers={'X-Amz-Content-SHA256': 'REDACTED', 'Authorization': 'AWS4-HMAC-SHA256 Credential=REDACTED/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=REDACTED', 'X-Amz-Date': '20190306T152830Z', 'User-Agent': 'aws-cli/1.16.118 Python/2.7.15 Linux/3.10.0-229.1.2.el7.x86_64 botocore/1.12.108'}>

So what is happening is that it is defaulting to us-east-1 instead of eu-west-1.
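As a hedged workaround sketch while debugging, the region can be pinned explicitly so nothing falls back to the us-east-1 default; both mechanisms below are standard aws-cli options:

    # Pin the region for this one command via the CLI flag...
    aws s3 cp s3://bucket1/folder1/config.js s3://bucket2/folder1/config.js --region eu-west-1
    # ...or for every request the CLI signs, via the environment.
    export AWS_DEFAULT_REGION=eu-west-1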

Any ideas why this may be happening? The only other thing I can think of is that it has something to do with the Linux distros being used: the Docker image uses Alpine and my local machine runs Ubuntu.

Thanks.

guidance


All 3 comments

OK, there is something wrong with my s3 command and the use of env variables in GitLab. Sorry, I will close this.

The issue was that I had an env variable CONFIG_SERVICE_BUCKET (legacy) set in GitLab secret variables while also trying to set the same-named variable in a deploy job, so it was taking the old setting instead of the new one.
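A minimal sketch of that collision, assuming the deploy job's script reads the bucket name from the environment (the override value below is hypothetical):

    # CONFIG_SERVICE_BUCKET arrives pre-set from the project's secret variables,
    # so the job sees the legacy value unless the script overrides it explicitly.
    echo "$CONFIG_SERVICE_BUCKET"            # prints the stale legacy bucket name
    export CONFIG_SERVICE_BUCKET=bucket2     # hypothetical explicit override in the job script
    aws s3 cp "s3://bucket1/folder1/config.js" "s3://${CONFIG_SERVICE_BUCKET}/folder1/config.js"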

@NeilJ247 - Thanks for reporting this behavior, providing feedback with the fix, and updating this thread with the root cause. Our documentation on Configuration Settings and Precedence might be helpful.
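For anyone debugging a similar conflict, aws configure list shows the effective value of each CLI setting together with the source it was resolved from (environment variable, config file, etc.):

    # Print each setting with its Type/Location columns, revealing which source won.
    aws configure list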
