I'm considering using AWS S3 as storage for backing up my computer, and I'm trying to gauge how much it would cost me.
The pricing page lists a cost per GB along with a price per request. I'm afraid `aws s3 sync` is going to burn through my money really quickly if it's going to run a GET and a PUT on every file.
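For a rough sense of scale, here's a back-of-envelope sketch. The rates below are illustrative placeholders, not current AWS prices; substitute the numbers from the S3 pricing page for your region:

```python
# Back-of-envelope S3 cost sketch. All rates below are ILLUSTRATIVE
# assumptions -- check the S3 pricing page for real, current figures.

STORAGE_PER_GB_MONTH = 0.023   # assumed S3 Standard storage rate, USD per GB-month
PUT_PER_1000 = 0.005           # assumed PUT request rate, USD per 1,000 requests
GET_PER_1000 = 0.0004          # assumed GET request rate, USD per 1,000 requests

def monthly_cost(gb, puts=0, gets=0):
    """Estimate one month's bill: storage charge plus request charges."""
    return (gb * STORAGE_PER_GB_MONTH
            + puts / 1000 * PUT_PER_1000
            + gets / 1000 * GET_PER_1000)

# Example: 500 GB uploaded once as ~100,000 files, then stored for a month.
print(f"${monthly_cost(500, puts=100_000):.2f}")
```

The takeaway under these assumed rates is that storage dominates: even 100,000 PUTs would add well under a dollar next to the per-GB storage charge.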
P.S. I think it's interesting that S3 is much more expensive than Dropbox in this comparison. Are there any other AWS options for storing ~500 GB of data?
Thanks
This is kind of an odd place for this sort of question, I would think; normally I'd recommend the AWS forums. The S3 FAQ is a very useful read, though. I'd suggest reviewing it, and asking on the AWS forums if it doesn't answer the questions you run into later.
PS: Regarding your last question: depending on how often you need to access your data, you may want to look into S3's Infrequent Access storage class and Glacier. These options can save you money on storing large amounts of data long term: https://aws.amazon.com/blogs/aws/aws-storage-update-new-lower-cost-s3-storage-option-glacier-price-reduction/
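To see how much the storage class matters at ~500 GB, here's a quick comparison sketch. The per-GB rates are illustrative assumptions, not quoted AWS prices; confirm against the current pricing page:

```python
# Compare monthly storage cost of 500 GB across S3 storage classes.
# The per-GB-month rates are PLACEHOLDER figures for illustration only.

ASSUMED_RATES = {
    "S3 Standard": 0.023,      # assumed USD per GB-month
    "S3 Standard-IA": 0.0125,  # assumed USD per GB-month
    "Glacier": 0.004,          # assumed USD per GB-month
}

def monthly_storage_cost(gb, storage_class):
    """Storage charge only; excludes request and retrieval fees."""
    return gb * ASSUMED_RATES[storage_class]

for cls in ASSUMED_RATES:
    print(f"{cls}: ${monthly_storage_cost(500, cls):.2f}/month")
```

One caveat worth factoring in: Infrequent Access and Glacier charge per-GB retrieval fees on top of storage, which matters for a backup you may one day need to restore in full.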
Thanks for the info @cloudkitsch -- I'll look into that some more. The reason I posted here is the awscli `sync` command: I was curious whether it uses any of the batch features you mention.