I have the following policy for my instance role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::foo/bar/*"
            ],
            "Effect": "Allow"
        }
    ]
}
If I try to
aws --region=eu-west-1 s3 cp --acl public-read ./baz s3://mybucket/foo/bar/baz
Then I get:
upload failed: ./baz to s3://mybucket/foo/bar/baz A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
If I change the policy to allow s3:* rather than just PutObject, then it works. It doesn't work if I add ListObject.
Any ideas?
aws-cli/1.3.4
boto==2.9.6
botocore==0.38.0
I think this might be our bug. I wasn't aware of the need for a PutObjectAcl permission. It might be helpful if the documentation said which permissions were needed.
This appears to work:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::foo/bar/*"
            ],
            "Effect": "Allow"
        }
    ]
}
Well, I'll reopen this issue for thought because the error message was unhelpful. It could have told me that it was doing a PutObjectAcl or something when it failed.
+1
I had the same problem and I solved it by adding PutObjectAcl. The error message isn't helpful.
+1
Thanks for this issue! That solved it for me as well. A better error message would be helpful, though.
I think our best bet here would be to update our documentation. Part of the problem from the CLI side is that we don't actually know why the request failed. The error message we display is taken directly from the XML response returned by S3:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>id</RequestId>
  <HostId>id</HostId>
</Error>
So this could fail because of the missing PutObjectAcl, or it could be that the resource you're trying to upload to isn't covered by the "Resource" in your policy. The CLI can't know for sure.
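The point about the response carrying no detail can be seen by parsing the error body directly. A minimal sketch (using the sample XML quoted above):

```python
import xml.etree.ElementTree as ET

# Sample AccessDenied body as returned by S3 (from the comment above).
body = b"""<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>id</RequestId>
  <HostId>id</HostId>
</Error>"""

root = ET.fromstring(body)
code = root.findtext("Code")        # "AccessDenied"
message = root.findtext("Message")  # "Access Denied"

# There is no element naming the denied action (PutObject vs PutObjectAcl),
# which is why the CLI cannot be more specific in its error message.
print(code, message)  # → AccessDenied Access Denied
```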
Leaving this open and tagging as documentation so we'll get all the s3 docs updated with the appropriate policies needed.
+1, PutObjectAcl being the culprit of much pain in my deployment as well
+1
+1
To summarize, this issue happens when you try to set an ACL on an object via the --acl argument:
Given:
"Action": [
"s3:PutObject"
],
# This works:
$ aws s3 cp /tmp/foo s3://bucket/
# This fails:
$ aws s3 cp /tmp/foo s3://bucket/ --acl public-read
upload failed: ../../../../../../../tmp/foo to s3://bucket/foo A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
Given my previous comment, I'd propose updating the documentation for --acl to mention that you need "s3:PutObjectAcl" set if you're setting this param.
Thoughts? cc @kyleknap @mtdowling @rayluo @JordonPhillips
@jamesls a slightly more discoverable fix would be to say "A client error (AccessDenied) occurred when calling the PutObjectAcl operation", since that would make it clear what's failing and that it's missing from my policy. Otherwise I'll just see the error complaining that it tried to PutObject and bang my head against the wall saying "but I have PutObject in my IAM policy!", without ever noticing that PutObjectAcl isn't there.
Not sure how possible that would be to implement, because the actual operation we're invoking is PutObject, so the error comes directly from the Python SDK. We don't have a way of knowing that the command failed because of a missing PutObjectAcl in the policy. We could check whether you specified the --acl argument, but the error message we get back is a catch-all access denied error that could be caused by a number of issues.
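The "check whether you specified --acl" idea could look something like the sketch below. This is a hypothetical helper, not actual CLI code, and the flag names are assumptions based on this thread:

```python
def hint_for_access_denied(cli_args):
    """Given parsed CLI arguments (a hypothetical dict), return extra IAM
    actions that may be behind a generic AccessDenied response.

    This is only a heuristic: the S3 error body does not say which action
    was denied, so the CLI can at best guess from the flags it was given.
    """
    hints = []
    if cli_args.get("acl") is not None:
        # --acl triggers a separate s3:PutObjectAcl permission check.
        hints.append("s3:PutObjectAcl")
    if cli_args.get("tagging") is not None:
        # Object tagging similarly needs s3:PutObjectTagging.
        hints.append("s3:PutObjectTagging")
    return hints

# The error message could then append something like:
# "Note: --acl was specified; ensure the policy also allows s3:PutObjectAcl."
print(hint_for_access_denied({"acl": "public-read"}))  # → ['s3:PutObjectAcl']
```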
This really cost me some time to debug.
@jamesls I didn't use --acl, but my command still gives the error "access denied when calling the PutObject operation". What could be the reason?
@jamesls I think the error message being generic is fine, but the help to debug is not. There is no mention of ACL or policy problems to guide developers to the right place(s) to check.
@jamesls when I use --exclude "folder/", it is not working with nested folders.
For example, if my file path is c:/source/f1 and my command has --exclude "f1/", it works perfectly.
But if my path is c:/source/ff/files/temp/f1, then f1 is not excluded. Is there any solution for this?
Why does the "aws s3 cp" CLI tool work without "s3:PutObjectAcl"?
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I am also getting the same error while trying the cp command.
Note: the failed call to PutObjectAcl never appears in your CloudTrail logs.
PutObjectTagging could also be the culprit.
This still happens. In my case, CodeBuild was telling me that PutObject failed, when really it was trying PutObjectAcl. After an hour of amateurishly digging around, I found out my --acl public-read flag was the culprit. I don't think it was even necessary for the static-website S3 bucket, which already had bucket-level public read settings.
currently stabbing my eyes out trying to figure this out! lol
Uploading a file really shouldn't be that complicated, yet here we are.
Never fail to amaze me, AWS.
Had the same issue with my setup. Turns out if your bucket is encrypted you need to use the --sse flag; in my case that was --sse aws:kms.
Experiencing the same issue
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied: ClientError
It works if I disable default KMS encryption.
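Pulling the thread's reports together: a plain upload needs only s3:PutObject, but certain flags or bucket settings require extra permissions. The sketch below summarizes that mapping; it is illustrative, based on the comments here, and the exact KMS actions needed can vary with multipart uploads and key policy:

```python
# Extra permissions reported in this thread for "aws s3 cp".
# s3:PutObject alone only covers a plain, unencrypted, no-ACL upload.
EXTRA_REQUIREMENTS = {
    "--acl": ["s3:PutObjectAcl"],
    "--tagging": ["s3:PutObjectTagging"],
    # With --sse aws:kms (or default KMS encryption on the bucket),
    # the caller also needs permissions on the KMS key; the exact set
    # is an assumption here and may include more (e.g. kms:Decrypt
    # for multipart uploads).
    "--sse aws:kms": ["kms:GenerateDataKey"],
}

def required_actions(flags):
    """Return the actions needed beyond s3:PutObject for the given
    CLI flags (illustrative, compiled from this issue thread)."""
    actions = ["s3:PutObject"]
    for flag in flags:
        actions += EXTRA_REQUIREMENTS.get(flag, [])
    return actions

print(required_actions(["--acl"]))  # → ['s3:PutObject', 's3:PutObjectAcl']
```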
My error that led to the PutObject error was a wrong ARN. I did not need other permissions than PutObject.
I used { "Fn::Join": ["/", [ "arn:aws:s3:::", "${file(./config.${self:provider.stage}.json):ticketBucket}/*" ] ] }
which should have been { "Fn::Join": ["", [ "arn:aws:s3:::", "${file(./config.${self:provider.stage}.json):ticketBucket}/*" ] ] }
(note the "/" delimiter passed to Fn::Join).