Hugo: Publish to remote destination

Created on 5 Apr 2017 · 5 comments · Source: gohugoio/hugo

Since I'm publishing to S3, I think it would be much easier if the --destination flag supported a remote location, rather than having to manually upload every time. I hope this will be considered as a future enhancement. Thanks!

Labels: Enhancement, Stale

All 5 comments

Although this would make publishing nicer, there are plenty of existing solutions for publishing to remote services.

I currently use Hugo to build a static site with React components that gets published to Firebase Hosting. The Firebase CLI (a Node tool) already handles the publishing for me, so I would not use this feature in Hugo.

I am also thinking you do not want to write directly to your remote before the full build has completed successfully, in case there is an error. Of course, the argument can be made that an error can occur in any transfer, but publishing mid-build adds another layer of failure.

[OPINION]
Please do not read this as criticism; I am just wondering if I am missing something. Why would we not just use a task runner on top of the Hugo project to publish to our remote destinations? I am not sure this is going to be a plus in the long run UNLESS it were created with a plugin system, so the publishing providers could be maintained separately.

@talves, thanks for the feedback.

The case in point, for me at least, is AWS Lambda. Rather than having the files created locally and then synced to S3, I think it would be more elegant, and perhaps quicker, to have Hugo handle the S3 upload itself.
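
(For illustration only: a minimal Go sketch, assuming the aws-sdk-go library, of what a built-in "upload to S3 after render" step could look like. The bucket name, region, and file path are placeholders, and nothing like this exists in Hugo today.)

    // Hypothetical sketch: upload a single rendered file to S3 using
    // aws-sdk-go. Bucket name, region, and key are placeholders.
    package main

    import (
        "log"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"), // placeholder region
        }))
        uploader := s3manager.NewUploader(sess)

        f, err := os.Open("public/index.html") // one rendered file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // A real implementation would walk all of public/ and set the
        // Content-Type per file; this uploads just the one object.
        _, err = uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String("example-bucket"), // placeholder bucket
            Key:    aws.String("index.html"),
            Body:   f,
        })
        if err != nil {
            log.Fatal(err)
        }
    }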

I agree with @talves, although it's on the roadmap.
A deployment tool should be separate from Hugo. I think it's a bit of overhead for Hugo, especially considering how many deployment tools there already are (for AWS, Heroku, Firebase, scp, Git, etc.).

As an alternative, an option could be added to the config, like this:

deploy = "git push origin master"

Then, when you want to deploy, you would run hugo --deploy, and at the end of rendering the command in deploy would be executed. IMHO it's the simplest solution, but you must have the appropriate CLI tools installed.
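
(For illustration, here is a minimal Go sketch of how such a post-render hook might work. The deploy config key and the --deploy flag are just the proposal from this thread, not an existing Hugo feature, and the command is the example given above.)

    // Hypothetical sketch of the proposed --deploy hook: after
    // rendering, run the shell command configured under "deploy".
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // runDeployHook shells out so commands like "git push origin master"
    // work exactly as written in the config.
    func runDeployHook(command string) error {
        cmd := exec.Command("/bin/sh", "-c", command)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // In a real integration this value would come from the site config.
        if err := runDeployHook("git push origin master"); err != nil {
            log.Fatalf("deploy hook failed: %v", err)
        }
    }

The trade-off raised earlier in the thread still applies: the hook would only run after a successful build, and it depends on the relevant CLI being installed locally.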

In isolation, this issue isn't very interesting. To get some speed benefit from this, we would have to get a head start, and that isn't really happening with Hugo. So the real speed benefit is in "only transfer what's changed", and existing S3 tools do this very well (https://github.com/bep/s3deploy is one).

But Hugo is set up with virtual file systems on both ends (source, destination), which we use to great benefit in server mode (memory-mapped files) ... And it would be cool, in the future, to maybe extend this to a "remote streaming update" kind of live update. Suddenly the static isn't that static anymore.
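
(As a rough illustration of the virtual file system point: Hugo's file systems are built on spf13/afero, so the destination can be an in-memory file system that a remote publisher could later walk and stream from. The rendering step and paths below are invented for illustration.)

    // Sketch: write a "rendered" page into an in-memory destination
    // file system (afero's MemMapFs), then read it back the way a
    // publisher walking the virtual FS might.
    package main

    import (
        "fmt"

        "github.com/spf13/afero"
    )

    func main() {
        dest := afero.NewMemMapFs() // stand-in for Hugo's destination FS

        // Render step: write into the virtual destination, not to disk.
        err := afero.WriteFile(dest, "public/index.html",
            []byte("<h1>Hello</h1>"), 0o644)
        if err != nil {
            panic(err)
        }

        // Publish step: a remote publisher could walk this FS and
        // stream only the files that changed.
        data, err := afero.ReadFile(dest, "public/index.html")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(data))
    }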

This issue has been automatically marked as stale because it has not had recent activity. The resources of the Hugo team are limited, and so we are asking for your help.
If this is a bug and you can still reproduce this error on the master branch, please reply with all of the information you have about it in order to keep the issue open.
If this is a feature request, and you feel that it is still relevant and valuable, please tell us why.
This issue will automatically be closed in the near future if no further activity occurs. Thank you for all your contributions.
