$ aws s3 cp /home/ye/website_images/310PixelWidth_Products/00000000000*.jpg s3://mybucket/products/
Unknown options: /home/ye/website_images/310PixelWidth_Products/000000000001392439_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400395_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400499_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400505_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400510_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400512_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400544_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400545_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400593_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400594_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400595_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400596_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400618_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400764_1.jpg,s3://mybucket/products/
Even with the --recursive option.
$ aws s3 cp --recursive /home/ye/website_images/310PixelWidth_Products/00000000000*.jpg s3://mybucket/products/
Unknown options: /home/ye/website_images/310PixelWidth_Products/000000000001392439_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400395_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400499_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400505_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400510_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400512_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400544_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400545_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400593_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400594_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400595_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400596_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400618_1.jpg,/home/ye/website_images/310PixelWidth_Products/000000000001400764_1.jpg,s3://mybucket/products/
(awscli)[magento@dam ~]$ aws s3 cp --recursive /home/ye/website_images/310PixelWidth_Products/0000000000013 s3://mybucket/products/
000000000001389251_1.jpg 000000000001391874_1.jpg 000000000001392439_1.jpg
You cannot use wildcard syntax in path names. If you need to use wildcard syntax, I suggest you use --exclude and --include. Check out some examples here on how to use them.
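To make the failure mode concrete: the shell expands the glob before `aws` ever runs, so `cp` receives many source arguments where it expects exactly one. A self-contained sketch (the bucket name is a placeholder, and `echo` previews the commands instead of running them):

```shell
# Work in a scratch directory with two files matching the pattern.
cd "$(mktemp -d)"
touch 000000000001392439_1.jpg 000000000001400395_1.jpg

# The shell, not aws, expands the wildcard: `cp` then sees every match
# as a separate positional argument and rejects the extras.
echo aws s3 cp 00000000000*.jpg s3://mybucket/products/

# The CLI-side filtering suggested above keeps the matching inside aws
# itself; quoting the patterns stops the shell from expanding them first.
echo aws s3 cp . s3://mybucket/products/ --recursive \
    --exclude '*' --include '00000000000*.jpg'
```

Drop the leading `echo` on the second command to perform the actual upload.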
@kyleknap Yes, that's exactly what I am asking.
The glob patterns are more powerful than just include/exclude. http://en.wikipedia.org/wiki/Glob_(programming)
And Python has this module for both 2.x and 3.x too: https://docs.python.org/2/library/glob.html https://docs.python.org/3.4/library/glob.html Why not take this as a feature request?
We are aware of glob patterns, and will look into this.
+1 for this.
+1
+1... this would be super helpful when a sync command is not necessary. i.e.
aws s3 cp ./*.jpg s3://mybucket
+1
+1
+1
+1
aws s3 sync *.tar.gz s3://mybucket/
is definitely nicer than
aws s3 sync . s3://mybucket/ --exclude="*" --include="*.tar.gz"
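A plain shell loop is another interim option: the shell expands `*.tar.gz` itself, and each match gets its own `aws s3 cp` call. A sketch against the same placeholder bucket, with `echo` left in so it can be previewed safely:

```shell
# One upload per matching file; remove the leading `echo` to run for real.
for f in *.tar.gz; do
    [ -e "$f" ] || continue   # skip the literal pattern if nothing matched
    echo aws s3 cp "$f" s3://mybucket/
done
```

Note this spawns one `aws` process per file and does none of sync's change detection, so it suits small batches rather than large trees.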
Until this is supported, you could use some variant of:
find ../folder -maxdepth 1 -name '*.jpg' -exec aws s3 cp {} s3://mybucket/myfolder/ \;
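Two details in that command matter: `-maxdepth` should come before the other tests (GNU find warns otherwise), and the `*.jpg` pattern needs quoting so the shell does not expand it in the current directory before find runs. A runnable sketch using a scratch directory, with `echo` standing in for the real upload:

```shell
# Build a scratch tree: two top-level .jpg files, one decoy, one nested file.
dir=$(mktemp -d)
touch "$dir/a_1.jpg" "$dir/b_1.jpg" "$dir/notes.txt"
mkdir "$dir/sub" && touch "$dir/sub/deep.jpg"

# -maxdepth 1 keeps the search at the top level; the quoted pattern lets
# find do the matching. Drop the `echo` to perform the actual copies.
find "$dir" -maxdepth 1 -name '*.jpg' \
    -exec echo aws s3 cp {} s3://mybucket/myfolder/ \;
```

Only the two top-level .jpg files are matched; notes.txt and sub/deep.jpg are skipped.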
@john-aws This does not seem to be a better approach than the one suggested by @gricey432, but thanks for sharing. Hope the Amazon developers fix this soon.
@kyleknap Any updates? The two-year anniversary of this feature request has passed.
really? this still isn't fixed?
Good Morning!
We're closing this issue here on GitHub, as part of our migration to UserVoice for feature requests involving the AWS CLI.
This will let us get the most important features to you, by making it easier to search for and show support for the features you care the most about, without diluting the conversation with bug reports.
As a quick UserVoice primer (if not already familiar): after an idea is posted, people can vote on the ideas, and the product team will be responding directly to the most popular suggestions.
We've imported existing feature requests from GitHub - Search for this issue there!
And don't worry, this issue will still exist on GitHub for posterity's sake. As it's a text-only import of the original post into UserVoice, we'll still be keeping in mind the comments and discussion that already exist here on the GitHub issue.
GitHub will remain the channel for reporting bugs.
Once again, this issue can now be found by searching for the title on: https://aws.uservoice.com/forums/598381-aws-command-line-interface
-The AWS SDKs & Tools Team
This entry can specifically be found on UserVoice at: https://aws.uservoice.com/forums/598381-aws-command-line-interface/suggestions/33168352-aws-cli-does-not-recognize-glob-patterns-in-comman
Based on community feedback, we have decided to return feature requests to GitHub issues.
In this case, use: aws s3 sync /var/log/httpd s3://mybucket/ --exclude="" --include=".gz"
@itpedrops great workaround! It looks like you need to put a "*" in the exclude though. Easily enough tested with the "--dryrun" flag.
aws s3 sync /my/local/website/ s3://my-bucket/ --exclude="*" --include="*.html" --dryrun
I was having problems with this that were resolved by putting quotes around the --include and --exclude patterns
+1
globbing please!