Aws-cli: No easy way to copy a single directory to s3

Created on 14 Jul 2016  ·  12 Comments  ·  Source: aws/aws-cli

My apologies if this has been discussed before; I tried my best to search for it (difficult) and read through the relevant labels.

Given a directory with multiple things in it:

/tmp
├── aaa
│   ├── foo
│   └── bar
├── bbb
└── ccc

I'd like to copy one of those directories to s3, the equivalent of cp -r /tmp/aaa /my-bucket/.

Unfortunately, both aws s3 cp and aws s3 sync act more like cp -r /tmp/aaa/* /my-bucket/ - that is, they skip the containing directory and put all of its files directly into the root of the bucket. Running either

[$]> aws s3 sync /tmp/aaa s3://my-bucket

or

[$]> aws s3 cp /tmp/aaa s3://my-bucket --recursive

will result in the _contents_ of aaa residing in s3:

my-bucket:
├── foo
└── bar

(For my particular purposes I don't care about synchronization; I was just hoping that sync might behave the way I wanted. Existing documentation aside, I would expect aws s3 cp to act like cp and aws s3 sync to act like rsync with regard to directories, at least as far as possible given S3's lack of actual directories.)

This is annoying, because it forces me to basename the directory and tack that on to the end of the bucket. It's also surprising; not only does it not act like common *nix utilities, but aws s3 sync help implies a different behavior via its use of . as a source directory for all examples.
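The basename workaround described above can be sketched in shell. This is only an illustration of building the destination path (my-bucket is a placeholder bucket name, and the aws command is echoed rather than executed):

```shell
# Preserve the source directory itself by appending its basename
# to the destination bucket path before copying.
src=/tmp/aaa
dest="s3://my-bucket/$(basename "$src")"

# The resulting command copies /tmp/aaa to s3://my-bucket/aaa/...
echo "aws s3 cp $src $dest --recursive"
```

With this, the contents end up under my-bucket/aaa/ rather than at the bucket root.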

breaking-change feature-request

Most helpful comment

+1 for this... it should work like a *nix utility.

All 12 comments

The request makes sense.

Unfortunately, I do not think we could make the change, as it would be a breaking change without some sort of flag. Most likely we would not be able to consider this unless we make a major version bump.

That's what I was anticipating, but I figured I'd ask anyways. Thanks!

+1 for this... it should work like a *nix utility.

Good Morning!

We're closing this issue here on GitHub, as part of our migration to UserVoice for feature requests involving the AWS CLI.

This will let us get the most important features to you, by making it easier to search for and show support for the features you care the most about, without diluting the conversation with bug reports.

As a quick UserVoice primer (if not already familiar): after an idea is posted, people can vote on the ideas, and the product team will be responding directly to the most popular suggestions.

We’ve imported existing feature requests from GitHub - Search for this issue there!

And don't worry, this issue will still exist on GitHub for posterity's sake. As it’s a text-only import of the original post into UserVoice, we’ll still be keeping in mind the comments and discussion that already exist here on the GitHub issue.

GitHub will remain the channel for reporting bugs.

Once again, this issue can now be found by searching for the title on: https://aws.uservoice.com/forums/598381-aws-command-line-interface

-The AWS SDKs & Tools Team

I don't understand what the point is of closing issues and forcing users to sign up for some other service to have a discussion or make suggestions.

Anyways, tar has a handy flag that I think would make sense for aws s3 too.

--strip-components=NUMBER
              Strip NUMBER leading components from file names on extraction.

This would be backwards compatible but still allow one to upload a child directory without the cd dance.
If it is something that could land, I am happy to whip up a pull request.
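For reference, here is a small runnable sketch of how tar's --strip-components flag behaves, which is the behavior the comment above proposes borrowing (temporary paths and file names are illustrative):

```shell
# Build a tiny archive containing a single directory "aaa".
tmp=$(mktemp -d)
mkdir -p "$tmp/aaa"
echo hello > "$tmp/aaa/foo"
tar -C "$tmp" -cf "$tmp/arch.tar" aaa

# Normal extraction keeps the leading "aaa/" path component.
mkdir "$tmp/plain"
tar -C "$tmp/plain" -xf "$tmp/arch.tar"

# --strip-components=1 drops that leading component, leaving just "foo".
mkdir "$tmp/stripped"
tar -C "$tmp/stripped" -xf "$tmp/arch.tar" --strip-components=1
```

An analogous flag on aws s3 cp/sync would let a user drop (or keep) the leading directory component without changing the current default.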

Based on community feedback, we have decided to return feature requests to GitHub issues.

A -p or --preserve would also be a non-breaking change that would accomplish the same thing. Currently using a split workflow of aws s3 and s3cmd. 😢

I'm finding this issue to be very problematic for me too. If I have a folder I want to sync with files in it, and there is a very heavy subtree, as xiongchiamiov suggests, then there's nothing I can do. sync is the right function, but all we really need is the ability to set a depth, with --depth=1 or some such.
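A local analogue of the depth-limited sync requested above can be sketched with find. The aws command in the comment is only an approximation I'm suggesting, not an official --depth flag: aws s3 sync supports --exclude patterns, and excluding "*/*" roughly limits the sync to the top level.

```shell
# Set up a directory with one top-level file and one nested file.
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
echo a > "$tmp/top.txt"
echo b > "$tmp/sub/deep.txt"

# Select only depth-1 files, skipping subtrees; the rough CLI analogue
# would be: aws s3 sync "$tmp" s3://my-bucket --exclude "*/*"
files=$(find "$tmp" -maxdepth 1 -type f)
echo "$files"
```

Only top.txt is selected; deep.txt in the subtree is skipped.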

I'm having the same issue. I need to copy the parent directory itself, not only the contents of the directory.

To copy a folder from the local machine to S3:

PS C:\Users\apandey\Desktop> aws s3 cp test s3://awsmyfolder/test --recursive
upload: test\future.txt to s3://awsmyfolder/test/future.txt

You need to include the folder name in the destination path along with the bucket name.

BTW, the link in ASayre's answer is dead (error 404)
