I configured my instance to use S3. Files are getting uploaded properly but they are not getting retrieved properly.
Mastodon URL trying to retrieve S3 objects:
https://<bucket>.s3-us-east-1.amazonaws.com/accounts/avatars/000/000/001/original/baa5ccb5aceb7d69.png?1491402855
URL in S3 from object properties:
https://s3.amazonaws.com/<bucket>/accounts/avatars/000/000/001/original/baa5ccb5aceb7d69.png
I am wondering if I missed something in my config. Again, files are getting pushed to S3 without issue; the only problem seems to be retrieval.
This explains the issue I am having:
http://stackoverflow.com/questions/26604977/url-for-public-amazon-s3-bucket#26622229
This link is in the SO post above but I am putting it here for exposure:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
@bradj I believe the `<bucket>.s3-<region>.amazonaws.com` URL structure is enabled when S3’s “static website hosting” option is checked.

Edit: oops, didn’t see the `-website` bit 🙃
@meyer I thought it was something to do with the S3 configuration, but everything is disabled, including site hosting. With hosting enabled the URL would be:

http://<bucket>.s3-website-us-east-1.amazonaws.com
Having exactly the same problem here. This looks like a bug to me: the URL segment `s3-us-east-1` should be `s3`, because us-east-1 is the default region. I expect other regions work fine with the existing mapping.
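The mapping being described can be sketched like this (a hypothetical helper for illustration only, not Mastodon's actual code):

```shell
# Sketch of the S3 endpoint mapping described above: the default region
# (us-east-1) historically uses the bare "s3.amazonaws.com" host, while
# other regions use "s3-<region>.amazonaws.com".
s3_host() {
  region="$1"
  if [ "$region" = "us-east-1" ]; then
    echo "s3.amazonaws.com"            # default region: no region segment
  else
    echo "s3-${region}.amazonaws.com"  # non-default regions keep the segment
  fi
}

s3_host us-east-1
s3_host eu-west-1
```

The bug report amounts to Mastodon emitting the non-default form (`s3-us-east-1`) even for the default region.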
@continuata Thanks for validating. I was beginning to wonder if I had done something wrong.
The next workaround I am going to try will be CloudFront just to see if it works.
I was able to get mine working with `http` by changing `S3_REGION` to `website-us-east-1` and enabling static site hosting on the S3 bucket. If you are OK with that, you should be able to set `S3_PROTOCOL` to `http` and everything should work. Getting `https` to work, though, required setting up CloudFront and then setting `S3_CLOUDFRONT_HOST` to my CloudFront domain name, which in this case is `dverqbuzz4a92.cloudfront.net`.
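Put together, the workaround above would look roughly like this in `.env.production` (a sketch under the commenter's assumptions; the CloudFront domain is just their example, and static website hosting must be enabled on the bucket):

```shell
# Workaround sketch: serve media over plain HTTP via S3 static website hosting
S3_ENABLED=true
S3_REGION=website-us-east-1
S3_PROTOCOL=http

# For HTTPS, front the bucket with CloudFront instead and set:
# S3_CLOUDFRONT_HOST=dverqbuzz4a92.cloudfront.net
```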
I just created a bucket in a different region and pointed to that one instead, which works.
But this does seem like a bug for default region.
I just set up an s3 bucket for an instance. I am using SSL. I think I have information which can either close this issue or lay out a specific change that can happen:
The links _are_ different, but they still work fine. Mastodon creates links like this:
https://[bucket-name].s3.amazonaws.com/media_attachments/files/000/000/012/original/9876.jpg
But the links that Amazon provides are more like this:
https://s3.amazonaws.com/[bucket-name]/media_attachments/files/000/000/012/original/9876.jpg
However, using the first link does ~redirect to the second link~ also work, without redirection. The URL stays the same, but it differs from the link that Amazon provides. My guess is that when the code constructing the S3 links was written, Amazon used a different link structure; they later changed it but maintained backwards compatibility. In other words, I don't think the link needs to change right now, because Amazon is maintaining backwards compatibility, probably for the foreseeable future.
It may somewhat reduce confusion to make the links the same as what Amazon now serves up. However in terms of prioritization, it seems like it would be low on the list. For that reason, I'd recommend closing it until Amazon decides they want to stop backwards compatibility, so work can be focused on more important features. Thanks.
It's not a backwards-compatibility issue. The version of the link where the bucket name is a subdomain is the URL structure that allows you to CNAME your own subdomain to point to S3. It's not going anywhere.
A workaround that did it for me was to create a new environment variable named `S3_ALIAS_HOST` and set its value to `s3.amazonaws.com/[bucket-name]`.
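For reference, that workaround as an `.env.production` fragment (bucket name left as a placeholder, as above):

```shell
# Path-style alias workaround (sketch; keep your other S3_* settings as-is)
S3_ALIAS_HOST=s3.amazonaws.com/[bucket-name]
```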