https://www.bountysource.com/issues/59506367-feature-native-cloud-saving
You could just add the RetroArch save directory to Dropbox, or something similar. Outside of Google Drive, are there any other services that you recommend?
One of the main issues stopping this is TLS support. While we have actually had support for it for a while now, it is not compiled in by default and at the request of @twinaphex it has stayed that way. Until that stance changes I think there is no point in even working on such a feature.
Rob Loach
'Cause then you can't use it on other things like Wii U and so on, just PC really, which kills the point of a cloud save if you can't use it on other things, other than for PC users.
bparker06
Oh OK then, well that sucks. Thanks for the info. I really hope you can get some kind of TLS support one day, 'cause that would be perfect for on-the-go play on Switch and then back on your PC or whatever, and a very hype new thing to try and advertise once you get it.
Even with cloud saving...
Save states are more often than not architecture dependent (i.e. PS3 or Wii U save states may not work on PC and vice versa).
Still would be nice to have
It is obvious that any solution is going to have to be mindful of save state portability; said portability might need to be addressed as a separate issue, or management of different save state usage needs to be thought through.
Considering this has been kicking around for four years, I just dropped $100 on the bounty to kick-start development of it. After all, what good is a cross-platform emulation tool if the data isn't cross-platform?
Best of luck.
FWIW, I'd like something more than this: remote file system support.
I'd like to be able to speak to a webdav share or sshfs or something like that and store data there.
That way everyone can implement their own solution to the problem in any way they see fit.
For webdav or other network-mountable folders, I think a better method would be to mount the drive through the OS and point RA's save dir to the mount. Adding webdav to RA seems like reinventing the wheel since nearly every OS already has a robust implementation. I could provide a script that sets up a webdav drive automatically for most major OSes.
SSL is enabled by default now, so that solves one cloud hurdle. I am thinking of implementing this through Google Drive. I'd like some feedback before beginning, or some coding help if anyone else is up to working on this now.
For native cloud saving, here is my plan:
1) It is possible to skip using the local drive, but I think it is best to maintain a local copy so states are available when playing offline.
2) The save path in Google Drive would be /retroarch/[architecture]/states/[core]/[save file name] (a path-building sketch follows after this list).
3) Some issues could cause local and cloud to go out of sync with different save states, so options for handling out-of-sync states need to be worked out.
Feedback before I begin would be awesome.
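To make the layout concrete, here is a minimal sketch of the proposed path scheme; build_cloud_state_path is a hypothetical helper, and the architecture and core strings are illustrative values only:

#include <stdio.h>

static void build_cloud_state_path(char *buf, size_t len,
      const char *arch, const char *core, const char *file)
{
   /* Mirrors the proposed layout: /retroarch/[architecture]/states/[core]/[file] */
   snprintf(buf, len, "/retroarch/%s/states/%s/%s", arch, core, file);
}

int main(void)
{
   char remote_path[512];

   /* "x86_64" and "snes9x" are placeholder values */
   build_cloud_state_path(remote_path, sizeof(remote_path),
         "x86_64", "snes9x", "Super Mario World.state0");
   puts(remote_path); /* /retroarch/x86_64/states/snes9x/Super Mario World.state0 */
   return 0;
}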
My only suggestion would be to split the functions that RA performs when dealing with cloud saves from the actual provider API itself, so as to allow extensibility to other cloud providers and reduce the technical debt incurred when providers change their APIs.
Aping the libretro design goal of "rethink what code belongs to 'core land' and what should belong to 'frontend land'. Libretro cores ideally should have minimal to no dependencies on any system/OS-specific APIs so that the same libretro cores can work on any libretro-compatible frontend" could, I think, be appropriate in this scenario.
Otherwise it all looks like a rational minimum viable product to me.
Unlike those other services, webdav is a standard protocol, and it just takes a little more HTTP support than what RA already has.
You can't "mount" the webdav share on every supported OS.
Unlike gdrive, you can roll your own webdav implementation or use ownCloud, Nextcloud, or Seafile; all of those have webdav support.
If not authorized, then open the default web browser and prompt for authorization. Google lets you log in through a secondary device if you don't have a browser or keyboard.
If you can do this (have a web browser and all), you most likely can just sync by pointing the save folder at a synced location, making built-in syncing redundant and pointless.
Not trying to stop you from using gdrive; if you want to do it, more power to you. Your game plan would only work on platforms that can already run official Google clients, though.
@twinaphex (and many users) have stated that they want cloud saves on mobile and consoles/handhelds too, so relying on the operating system features and/or having a browser is simply not an option.
Same goes for requiring external dependencies; everything needs to be baked in instead, using the existing net_http code, which will most likely need to be extended to support this. For example, it doesn't support redirects, authentication, or alternative methods like PUT as used by webdav.
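For a feel of what that extension might look like once PUT and auth land, here's a sketch. Everything in it is hypothetical: task_push_http_put_transfer does not exist, embedding user:pass in the URL assumes net_http learns to emit an Authorization header, and dav.example.com is a placeholder.

#include <stdio.h>
#include <file/file_path.h>

static void webdav_upload_state(const char *state_path,
      const char *body, size_t body_len)
{
   char url[1024];

   /* The file name would still need net_http_urlencode, as in the
    * upload snippet later in this thread. */
   snprintf(url, sizeof(url),
         "https://user:pass@dav.example.com/retroarch/states/%s",
         path_basename(state_path));

   /* Hypothetical PUT counterpart to task_push_http_post_transfer(),
    * sending a raw request body instead of form data. */
   task_push_http_put_transfer(url, body, body_len, false,
         NULL, upload_cb, NULL);
}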
Is it possible to use JSON to get information on the saves, use POST with data to send saves to the cloud, and download any files whose MD5 has changed?
I know that there are no C APIs for GDrive, AWS, OneDrive, etc., so making an API might have to be done.
There are examples of gdrive wrappers for FUSE (FUSE itself is written in C):
https://github.com/dsoprea/GDriveFS
Might be useful for reference
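On the "changed MD5" part of the question above, the sync check could be as simple as comparing digests. A sketch, where md5_file_hex is a hypothetical helper (libretro-common ships an MD5 implementation that could back it) and remote_save_t stands in for whatever the server's JSON listing deserializes to:

#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: hash a local file to a 32-char hex digest. */
bool md5_file_hex(const char *path, char out[33]);

typedef struct
{
   char name[256];
   char md5[33]; /* hex digest taken from the server's JSON listing */
} remote_save_t;

static bool needs_download(const remote_save_t *remote,
      const char *local_path)
{
   char local_md5[33];

   if (!md5_file_hex(local_path, local_md5))
      return true; /* no local copy yet, so fetch it */
   return strcmp(local_md5, remote->md5) != 0;
}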
@CammKelly gdrive, according to that code, requires an external browser for oauth authentication, so that's not acceptable for us.
Is this a good solution? Base64-encode the file and upload it using the existing net_http POST function; the server then decodes it and saves it to a file.
I made a rough demo of this to test it and it is working. I was able to upload and download save states.
Once the file is uploaded to a web server, you can then send it to any backend you want - webdav, Dropbox, Google, etc.
I agree, GDrive is probably not a good solution but for completeness, here is more info on it.
The external browser for oauth can be on an external device; the device playing the game doesn't have to have a browser. You can trigger the request from any device and authorize it from your phone.
For the Google API there is this https://google.github.io/google-api-cpp-client/latest/
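For reference, the flow described above is Google's OAuth device flow: the client POSTs to the device-code endpoint, shows the user a short code to enter in a browser on any other device, then polls for a token. A rough sketch of step one using the existing POST task; CLIENT_ID is a placeholder, and device_code_cb is a hypothetical callback:

#include <stdio.h>
#include "tasks/tasks_internal.h"

/* Placeholder -- a real build would ship its own OAuth client ID. */
#define CLIENT_ID "your-app.apps.googleusercontent.com"

static void request_device_code(void)
{
   char post_data[512];

   /* The scope value should be URL-encoded in a real implementation. */
   snprintf(post_data, sizeof(post_data),
         "client_id=%s&scope=https://www.googleapis.com/auth/drive.file",
         CLIENT_ID);

   /* device_code_cb (hypothetical) would parse the JSON reply for
    * user_code, verification_url and device_code, show the code to
    * the user, then poll Google's token endpoint until authorized. */
   task_push_http_post_transfer(
         "https://oauth2.googleapis.com/device/code",
         post_data, false, NULL, device_code_cb, NULL);
}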
Not everyone may have access to a browser, and I don't think it would be right to require one. Also, we cannot forward files from our server to some other service, because we don't want the responsibility of storing people's private account info. There are already people begging for a self-run server solution so that we never see anything they upload, but at the same time most people just want it to be easy to use and may not care about the security of a save state and whatnot.
For rolling your own server, this is the PHP I used for testing. It works with the existing task_push_http_post_transfer function after adding base64.c for the encoding.
<?php
// Prevent caching
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Cache-Control: post-check=0, pre-check=0', FALSE);
header('Pragma: no-cache');

// Simple security to keep strangers out
if (!isset($_POST['password']) || $_POST['password'] != 'password')
{
    header("HTTP/1.1 404 Not Found");
    exit();
}

// $_POST['upload'] is the file name to upload
// $_POST['data'] is the base64-encoded file
// NOTE: file names are used as-is; a real server must sanitize them
if (isset($_POST['upload']) && isset($_POST['data']))
{
    // '+' arrives as ' ' in a urlencoded body, so restore it before decoding
    $fp = fopen($_POST['upload'], 'wb');
    fwrite($fp, base64_decode(str_replace(" ", "+", $_POST['data'])));
    fclose($fp);
    // Respond with the file size for success checking
    echo filesize($_POST['upload']);
}
// $_POST['download'] is the file name to download
else if (isset($_POST['download']))
{
    if (file_exists($_POST['download']))
    {
        header('Content-Length: ' . filesize($_POST['download']));
        header("Content-type: application/octet-stream");
        readfile($_POST['download']);
    }
    else
        echo 'Error: File Not Found';
}
else
    echo 'Error: Unknown Request';
?>
This is the code added to save_state_cb to initiate the upload. It needs some work; I'm just seeing if this is the correct direction.
#include <libretro-common/include/utils/base64.c>
#include <net/net_http.h>

/* Base64-encode the save state so it can travel in the urlencoded POST
 * body. The '+' characters are not escaped here, which is why the PHP
 * above converts spaces back to '+' before decoding. */
int encoded_data_length = Base64encode_len(state->size);
char *base64_string     = (char*)malloc(encoded_data_length);
Base64encode(base64_string, state->data, state->size);

/* URL-encode the file name for the upload field. */
char *file_name = NULL;
net_http_urlencode(&file_name, path_basename(state->path));

/* Build the form body and fire off the async upload; the task copies
 * the body, so the buffers can be freed right away. */
char post_data[encoded_data_length + 512];
snprintf(post_data, sizeof(post_data), "data=%s&upload=%s",
      base64_string, file_name);
task_push_http_post_transfer(url, post_data, false, NULL, upload_cb, NULL);

free(base64_string);
free(file_name);
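For the download side of the PHP above, a matching task callback might look roughly like this. A sketch only: the http_transfer_data_t field names (data/len/status) are my assumption and should be checked against RetroArch's task headers.

#include <stdio.h>
#include "tasks/tasks_internal.h"

static void download_cb(retro_task_t *task, void *task_data,
      void *user_data, const char *error)
{
   http_transfer_data_t *data = (http_transfer_data_t*)task_data;
   const char *out_path       = (const char*)user_data;
   FILE *fp;

   if (error || !data || data->status != 200)
      return; /* also covers the PHP's 404 "wrong password" response */

   /* The server replies with the raw file bytes, so just write them out. */
   fp = fopen(out_path, "wb");
   if (!fp)
      return;
   fwrite(data->data, 1, data->len, fp);
   fclose(fp);
}

/* Kicked off with something like:
 * task_push_http_post_transfer(url,
 *       "password=password&download=save.state0",
 *       false, NULL, download_cb, (void*)"states/save.state0"); */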
Duplicate of https://github.com/libretro/RetroArch/issues/2028
Re-opened with the linked bounty over at https://www.bountysource.com/issues/59506367-feature-native-cloud-saving
I have started tackling this bounty.
Here's an overview of what I WAS planning on doing:
1) RetroArch server software to be developed in several different languages.
2) Create two new files (cloud.c and cloud.h) that build on the existing base64.c code and the upload/download functions.
3) Saves to the cloud will happen when the save state and game save files are created. Downloads will happen before the ROM loading process.
1) is definitely a pipe dream, though. There are bandwidth caps on most dedicated servers, and we already pay over $300 per month on servers. We cannot rack up even more bills, especially as our Patreon has gotten smaller as of late, not bigger.
Also, we cannot depend on somebody else's personal server, for various reasons.
Some guy's LLC presiding over a cloud service being offered through RetroArch is completely unacceptable to us. Besides, this is again a pipe dream: Article 13 is about to be instated in the EU soon, and you will need mandatory Google-style content ID filtering put in place to screen every possible upload for potentially copyright-infringing content uploaded by an EU citizen. Every copyright violation that didn't get immediately dealt with, by the way, would be a massive fine.
We are only going to accept this bounty if it's going to interface with the existing predominant cloud services. Having it go through some guy's private server is a complete non-starter and will simply not be accepted.
I'm going to try to get @bparker06 involved in this conversation, so that we can come to some kind of consensus and game plan on what we are going to do.
I discussed this with @bparker06 and @fr500. I think all three of us are in agreement that the most pragmatic way to go about this, is to actually implement additional VFS backend drivers. We could have VFS backend drivers for say webdav, ssh, ftp, any other cloud-based thing, etc., etc. This would neatly fit into our existing system and would also be usable by libretro cores as well.
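To make the VFS-driver idea concrete: a backend would essentially be a table of file operations that the frontend dispatches to based on the path's scheme. The struct below is purely illustrative (the real retro_vfs interface in libretro.h differs in detail); it only shows the shape of the idea.

#include <stdint.h>
#include <stddef.h>

typedef struct vfs_backend
{
   const char *scheme; /* e.g. "webdav", "sftp", "ftp" */
   void   *(*open)(const char *path, unsigned mode);
   int64_t (*read)(void *handle, void *buf, uint64_t len);
   int64_t (*write)(void *handle, const void *buf, uint64_t len);
   int64_t (*seek)(void *handle, int64_t offset, int whence);
   int     (*close)(void *handle);
} vfs_backend_t;

/* The frontend keeps a registry and routes by URL scheme, so
 * webdav://host/path and plain local paths go through the same
 * calls, and cores using the VFS get cloud saves for free. */
static const vfs_backend_t *vfs_backends[8];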
Tip: on PCs you can use the rclone mount command. It supports many backends and works pretty well for me.
Here's what I was working on https://github.com/erfg12/RetroArch-network-transfer
The project managers want to take this in a different direction, but at least here's a working concept that people can use right now.
Dropbox or Google Drive would be great so nobody needs to host anything themselves. I pledged $5 to the bounty.
Any update on this?
Is this dead, or is it worth adding to the bounty?
This is the feature that RetroArch lacks the most.
Agreed. These days I honestly don't know anyone personally who strictly plays on one system, except for my console friends who only know and want PlayStation no matter what.
Willing to pitch in as well, but I won't if the concept is buried and simply not talked about.
I am all for this as well. This is probably the single greatest thing that holds me back from using RetroArch more than I do. I managed to create a workaround for my phone and PC, but I would install this on almost everything if it had cloud saves.
Hey, may I ask how you did it between PC and Android?
Most reliable way is probably FolderSync.
https://www.youtube.com/watch?v=d6qPLFVhbZA
Here is a video of a guy basically doing what I figured out: Google Drive, setting the file source to be a location that Google Drive syncs from.
Since this is open source and on F-Droid, do you guys think you can implement it into Lakka, RetroArch, and so on? https://syncthing.net/downloads/
It's been working, keeping my Android saves in sync with my Windows and Linux PCs.
It always sees the new save on my Android and gets the updated save.
Yes, it's not a cloud save backup, but I mainly care about having saves on everything at once rather than a cloud. I just have to leave my Android on at least, which is easy and free.
The version I mentioned was easy enough for me to set up. It is free, and they do save to the cloud, so you have that cloud save backup should the worst happen. It might be worth looking into.
I've added another $30 to the $100 I donated to this bounty back in 2018 to make it a round $150.
In October 2018 some discussion occurred on what would constitute an MVP; this might need to be revisited to ensure the best approach. Handling of saves across different architectures likely also needs to be discussed in some form.
Pitched in 15 bucks as well to ruin the rounding. :P
Don't tempt me to toss in something random.
I don't tempt you.... I dare you :D
You have left me no choice.
Is anyone actually working on this?
I'm asking because after my next personal project is done, I might get some time to do this.
After @erfg12's idea was scuttled, I believe this is currently orphaned. I would also look at #4774, as it dovetails with the MVP discussed by @twinaphex earlier in this thread.
Bounty is now $225.
You can also use REST APIs and POST file data to a storage provider. No C programming necessary:
GDrive: https://developers.google.com/drive/api/v3/manage-uploads
OneDrive: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/upload?view=odsp-graph-online
DropBox: https://www.dropbox.com/developers/documentation/http/overview
AWS: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
But I would recommend maybe programming an RA data server in something like Node so it can run on Mac, Windows, or Linux and keep the files in sync there. Then on your server you can choose what you want to do with it: FTP, GDrive, Dropbox, whatever.
Here are some concept photos I made: https://imgur.com/a/uy09HMP
Bounty is now $250.
Don't want to steal @Peter Woodman's thunder, but he just yolo'd another $300 at this bounty, which is a massive confirmation of the idea of native cloud saving. Thanks, Pete!