I was just looking at OpenDrive as a potential storage provider. They offer pretty competitive prices already, but they also claim to do competitor price matching, so it may be a viable alternative to ACD's unlimited storage.
Their API documentation is linked here: https://www.opendrive.com/api
They also claim to have (only beta so far) support for WebDAV, so WebDAV support (#580) may avoid the need for native support.
Looks interesting!
I've been going over the API docs and double-checked with a contact at Opendrive support, and so far it appears that OpenDrive:
I will update this comment with any future clarifications to avoid spreading the details out, making them harder to find.
@eharris very useful list - thank you
This guy is working on a fork that is able to use the OpenDrive API (download and listing are working)
Well, shall we try to kill off the next "unlimited" storage provider? 🗑 :)
@vampywiz17 Have you actually pulled the code and confirmed the features referenced (download and listing)? I've forked Oli's code and will start banging on it, but I was curious if I can get confirmation that the baseline is ready to go.
@noahleemiller Sorry, I can't test it yet (but I plan to!), I only checked the commits.
I have now also implemented experimental support for upload: https://github.com/olihey/rclone/commits/opendrive
But I will not continue, as the upload speed is only 2-3 MByte/s. I am going with Google Drive now, which is on average 5 times faster.
Good luck
Hey, if anybody is still interested, I picked up where olihey left off:
https://github.com/Khouba/rclone
It's passing tests, except server side Copy and Move.
Awesome work. What kind of upload/download speeds are you getting?
Not sure yet, I'm on a slow connection right now. I'll let you know later. I think I was getting similar speeds to yours, around 2-3 MB/s.
@olihey @Khouba - great work! I'd love to have this in rclone when you are done.
Make sure you check out the steps in the CONTRIBUTING guide for writing a new backend if you haven't seen them already.
Let me know if I can help - I'm quite happy to finish stuff off if there is stuff left to do.
Sorry for the delay, I've been busy at work... Did some testing and it looks like speeds are limited per transfer. Right now I'm using 20 transfers at once and I'm getting 7 MB/s upload. I didn't test download yet, but I suppose it'll be better; I'll let you know as soon as I do.
@ncw I'll write the docs as soon as I've got some spare time. I also plan to write to you about server-side copy and move; I implemented those too, but the tests for them are not passing.
@olihey thanks for the great foundation, it was much easier for me to learn Go when I had basic things working.
Hey, just a quick update.
I'm in talks with support about adding FileHash to the folder content request.
I also found a bug in the API where a file isn't overwritten; I filed a bug report about that.
In the meantime I'm testing and have also added docs. There's still something wrong; maybe they have some kind of limit on the number of requests per unit of time... I need to ask them about it.
@Khouba how are you doing with this? Ready to make a PR?
@Khouba I finally have some time in my life to provide some support, even if it's just testing. My environment is up and all that good stuff. Really great job so far. Let me know how I can help.
Also, did OpenDrive support ever get back to you RE the API issues you mention above?
Hope all is well on your end.
@ncw Not sure, I've been using it for some time and the only issue I have is that I'm getting `dial tcp: lookup dev.opendrive.com: no such host` after 110 or so files uploaded. I checked with support and they say there's no limit on the number of uploads per session. There's also one more thing missing, and that's paging when getting the list of files, so at the moment it returns only the first 100 files or so.
@noahleemiller great to hear 🙂. Can you please test the thing above for me? Just upload a folder with 140 files, so we're sure it's not just my account.
I didn't get any answer about the error with file overwrite; it happens only when the account is set to store a maximum of 1 version of every file. It was redirected to the developers though.
@Khouba All good with just a quick and dirty test.
Setup is as follows: Linux Mint 18.1 in a VBox. Approx. 100mbit down / 10mbit up residential connection.
Average upload speed was 1 MB/s during actual file transfer.
Here is my console output (`rclone copy -v --stats 1s ...paths`):

```
2017/09/05 22:20:52 INFO :
Transferred:   482.358 MBytes (706.916 kBytes/s)
Errors:        0
Checks:        0
Transferred:   123
Elapsed time:  11m38.7s
```
Noah Lee Miller - Test Build
@Khouba One other question for you. What setup are you running with? Did you mount your drive using FUSE? Trying to narrow down some of the variables so that I can recreate exactly how you were conducting your test. Drop me a note :)
@Khouba Had time for another run today. Note that I put a bandwidth limit on, so the upload speed is obviously reflective of that param.
(`rclone copy -v --stats 5s --bwlimit 500 --include '*.jpg' ...paths`):

```
2017/09/06 22:17:49 INFO :
Transferred:   47.547 MBytes (60.170 kBytes/s)
Errors:        0
Checks:        1
Transferred:   327
Elapsed time:  13m29.1s
```
Noah Lee Miller - Test Build
@Khouba wrote:
> Not sure, I'm using it for some time and only issue I have is that I'm getting `dial tcp: lookup dev.opendrive.com: no such host` after like 110 or so files uploaded. I checked with support and they say that there's no limit for number of uploads in session. There's also one more thing missing and that's paging when getting list of files, so atm it returns only first 100 files or so.
If you fix the paging thing, it sounds like you are ready for a PR. Do the integration tests pass?
@ncw they didn't all pass for me, but I'm going to defer to @Khouba
Hopefully I can take a closer look this weekend and tell you what was failing last time I ran the tests. If I recall correctly, only 3 or so of the tests reported failures.
@Khouba @ncw
These are the tests that failed for me:
```
--- FAIL: TestFsCopy (78.18s)
--- FAIL: TestFsMove (185.88s)
--- FAIL: TestFsDirMove (1.28s)
--- FAIL: TestObjectString (12.55s)
--- FAIL: TestObjectFs (8.59s)
--- FAIL: TestObjectRemote (8.63s)
--- FAIL: TestObjectHashes (8.66s)
--- FAIL: TestObjectModTime (8.54s)
--- FAIL: TestObjectMimeType (8.48s)
--- FAIL: TestObjectSetModTime (11.59s)
--- FAIL: TestObjectSize (8.62s)
--- FAIL: TestObjectOpen (8.53s)
--- FAIL: TestObjectOpenSeek (8.65s)
--- FAIL: TestObjectPartialRead (9.56s)
--- FAIL: TestObjectUpdate (9.59s)
--- FAIL: TestObjectStorable (8.63s)
--- FAIL: TestFsIsFile (67.88s)
--- FAIL: TestObjectRemove (8.53s)
--- FAIL: TestObjectPurge (0.38s)
```
I'll keep debugging as time permits.
@noahleemiller amazing, thanks. So it's either just my account or something with the OS environment. I'm running macOS Sierra. I'll try Linux later, when everything is fixed.
@ncw the tests apparently evolved a bit from what I tested with 😄 Will fix ASAP
OK, well I look forward to seeing the result as a PR. If you want me to help with the finishing touches then I can do that too.
@Khouba @ncw
Hi there, not much of a Go coder but I downloaded Go and compiled Khouba's fork to test out OpenDrive on Ubuntu 17.04. Setting up the remote drive worked fine. Downloading via "rclone copy remote:path local/path" worked well. I then tried uploading/backing up a folder to the remote drive via "rclone copy local/path remote:path/to/folder" and that's where I ran into problems. It appears that the OpenDrive interface does not recognize the "path/to/folder" (on upload) but puts the files in the root folder. It also ignores any folder structure in "local/path". When a file upload completes I get the message:
```
2017/09/14 21:06:20 ERROR : Filename: corrupted on transfer: sizes differ 190 vs 0
```
I suspect that it does check the complete, correct path to the uploaded file, but as the file has been copied to the root directory it cannot find it, hence size 0. I might be wrong. Unfortunately, I don't know Go at all so I can't be of much help.
@emsfeld are you sure that you have the correct branch/repo? It sounds like the old version, from before I started working on it.
@Khouba I guess that's possible. I think I just downloaded master.zip from GitHub; I assumed that's the most recent version? I'll try cloning the repo.
Edit: I have to admit I am not too familiar with GitHub either... I see that there is a branches dropdown where I can select "opendrive". Would that be the correct branch to clone/download? Thanks!
@emsfeld Yeah, that's it. You need to download the opendrive branch.
@Khouba got it now. Thanks. Very useful as I find opendrive to be the best value for the time being!
@Khouba
Hi again. I have two OpenDrive accounts. Everything works fine with the account of which I am the admin. On the other account a friend of mine is the admin and I am a user, and with that one I am having problems. For example, running the command "rclone ls remote:somefolder" I receive the error:

```
Failed to create file system for "remote:folder": failed to create session: json: cannot unmarshal string into Go struct field UserSessionInfo.IsAccountUser of type int
```
I then opened the project with LiteIDE and tried to debug (btw, the debuggers don't seem to do well with Go-compiled code in LiteIDE... a very poor debugging experience) and found that the offending code is in the function NewFs:
```go
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
	account := Account{Username: username, Password: password}
	opts := rest.Opts{
		Method: "POST",
		Path:   "/session/login.json",
	}
	resp, err = f.srv.CallJSON(&opts, &account, &f.session)
	return f.shouldRetry(resp, err)
})
if err != nil {
	return nil, errors.Wrap(err, "failed to create session")
}
```
There seems to be a non-nil error, so I just commented out the line
`return nil, errors.Wrap(err, "failed to create session")`
and it works fine for me. Unfortunately the debugger behaves pretty badly, so I cannot inspect the "err" object properly. Hope this helps.
@emsfeld oh, in that piece of code you found there's a JSON parser with the struct UserSessionInfo. The problem is that the API returns two different types for one attribute inside that struct, either int or string.
I'm trying to find a decent solution now. I'll write you as soon as I've got something.
Update: Try now, should be fixed
@Khouba I'm no longer encountering that error, so that's great. I think this has been discussed before in this thread, but there seems to be an issue when you're uploading a lot of files. I get a lot of these:
```
Failed to copy: failed to create file: Post https://dev.opendrive.com/api/v1/upload/create_file.json: dial tcp: lookup dev.opendrive.com: too many open files
```
I understand that this is server side, but I suppose it means that OpenDrive restricts the number of concurrent uploads. Would it be possible to specify the degree of concurrency client side via some setting? Maybe that would help?
Update: I just noticed the --transfers option so I set that to 2 (default is 4). Trying out if that works.
Update2: Did not help. The upload job terminated prematurely with the above error.
@emsfeld
> Failed to copy: failed to create file: Post https://dev.opendrive.com/api/v1/upload/create_file.json: dial tcp: lookup dev.opendrive.com: too many open files
That sounds like rclone leaking sockets... Check lsof while the sync is running to see if the number of open sockets rises. Also check the number of goroutines when rclone finishes (it prints it out when you use -vv).
@Khouba yes, leaking sockets seems to be the case. I have the debugger hooked up and hit the breakpoint in opendrive.go (line 1001). I then checked "lsof | grep rclone" and there are 4000+ socket handles. I suspected that to be the case, but thought the socket would be closed server side after making the API call to close_file_upload.json.
Update: Also reran the same job using the -vv flag and I get:
2017/09/17 15:43:32 DEBUG : Go routines at exit 2041
I suppose a Go routine is the same as a user-level thread?
@Khouba the mistake is here I think: https://github.com/Khouba/rclone/blob/opendrive/opendrive/opendrive.go#L976
If you use Call, then you either need to close the Body or set NoResponse in the opts.
I see the same problem here: https://github.com/Khouba/rclone/blob/opendrive/opendrive/opendrive.go#L864
There may be others!
@emsfeld wrote:
> I suppose a Go routine is the same as a user-level thread?
Yes exactly. They are so-called green threads
@ncw, @emsfeld thanks, should be fixed everywhere. This issue bothered me a lot.
Thanks Khouba for the prompt update. I was trying to figure out your code, as I'm not versed in Go, but I'm glad that it took you such a short time to update.
No worries, I'm also not a Go programmer; I learned a bit only because of this, so any advice or feedback is appreciated. It's also not only my code, and it's a bit of a mess right now. I plan to clean it up, but need to finish it first...
@Khouba I am testing your recent update. I just kicked it off and I receive very frequent pacer-related errors in the log. I suppose that is some sort of scheduler that retries when the server is not responding? At any rate, this is an example of what I am currently seeing:
```
2017/09/18 14:22:10 DEBUG : PreOpen: opendrive.openUpload{SessionID:"1505737182634089000", FileID:"", Size:4517163}
2017/09/18 14:22:11 DEBUG : pacer: Rate limited, increasing sleep to 2m43.84s
2017/09/18 14:22:11 DEBUG : pacer: low level retry 6/10 (error
2017/09/18 14:22:43 INFO :
Transferred:   0 Bytes (0 Bytes/s)
Errors:        0
Checks:        15
Transferred:   0
Elapsed time:  3m2s
Transferring:
```
@emsfeld I'll take a look in the afternoon or tomorrow morning, but it's weird; I've been running that build for some time without issues. I understand the pacer the same way as you do: whenever there's a connection problem, the pacer puts a sleep before the next attempt. So there might be some issue with your network or the OpenDrive servers.
@Khouba fairly certain this is a problem server side. Every now and again an upload goes through, but at pretty slow speeds. In the OpenDrive support forum I found a thread from earlier today where someone is complaining about current API upload speeds, so I guess that's related.
@emsfeld now I'm experiencing this too. So definitely API issue
@Khouba have you noticed any improvements in the upload speeds? It's a bit faster for me, at a "whopping" 30 KB/s.
@emsfeld not really; it looked promising during the day, but now I'm back to a few B/s.
Update: actually, I let it run and now I'm at 500 KB/s, so definitely much better here.
Well, at least both of you are able to get something transferred; I'm still getting 0 bytes transferred and the pacer keeps increasing the sleep time (from ACD to OpenDrive).
I tried downloading the Windows app and syncing local to OpenDrive and managed to get some upload, but still encountered errors. The Windows app is using API version 2.0 (based on the log file), so I'm not sure whether, if we used Wireshark and replicated the calls in rclone, it would work.
That's interesting. I wasn't aware that the OD app uses the REST API as well; I would have thought they'd be using some custom protocol on top of TCP. I realize my numbers may be a bit distorted, as I am uploading a lot of small files with the largest file size being 15 MB. So I am seeing a few files (the larger ones) uploading at 300-400 KB/s.
@huffiewuffie it's not a good idea to reverse engineer the API; hopefully they'll release v2 soon. Still an interesting discovery.
@emsfeld it looks like the speed is fine in the morning.
I rebased onto @Khouba's latest. I can confirm some of the same behaviors that everyone else has been seeing. I don't have time to dig into it more tonight, but I'll try to spend some real time on it later this week.
lsof wasn't telling me the full story. I ended up using netstat and it looked pretty obvious that we are still opening up way too many connections. I'll debug more soon...
Yeah, I noticed it too. I'm working on it. I tried the tests, and when I commented out Copy, Move and DirMove then everything was fine. Not sure what's wrong with those three.
I suspect there might be some issues with the server as well. Even when using the Windows app, I'm getting numerous "Can't create directory for file." errors.
@Khouba
I'm just wrapping up the 1.38 release at the moment but after it is out the door (at the weekend hopefully) I'll have some time to help with this.
I'd like to get it merged early in the release cycle for 1.39 so in the next couple of weeks which will then give us 4 weeks to sort the bugs out before 1.39 is released.
I'd like to get it merged sooner, even if it has some problems - then I can help fixing them!
> Uses md5 for file hashes

True
> Supports modtime for files but not directories.

It's supported; the field is named "DirUpdateTime", but it is modified only by the backend once the folder content has changed.
> File and directory names are case insensitive

True
> Has a file/directory name length limit of 255 characters (I assume this is bytes, not unicode). This limit appears to be per path/file segment, it is unclear if there is a whole path length limit but so far it appears there is not.

Please keep in mind that many filesystems have a hardcoded limit of 260 Unicode characters for the whole path, so if a sync works on, for example, Linux hosts, it may not work on Windows PCs when you want to share content between different hosts.
> There are currently no facilities for storing arbitrary per file/directory metadata. However there is a "Description" field for both files and directories that could potentially be used to store some limited metadata.

Yes, that's true; the description field should be used just for descriptions and nothing else.
> May support file-level deduplication, as if you supply a file size and hash when uploading a file, it will return you info that a file with the same size and hash already exists. At this point it is unclear how to resolve where a file exists or how to retrieve it purely by hash.

File-level deduplication is supported: precompute the MD5 for the file and call _upload/open_file_upload_ for already-created files or _upload/create_file_ for new files (please supply a non-empty MD5 and the real file size in bytes). If the response has the field "RequireHashOnly" set to true, it means a file with the same hash and size has already been uploaded by someone. In that case, please don't upload any chunks :) and just call upload/close_file_upload with the same file_size and MD5. This way the whole bandwidth is saved.
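That dedup flow can be sketched in Go. The `createFileResponse` field name comes from the description above; the helper function signatures are purely illustrative stand-ins for the real API calls:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// createFileResponse mirrors the relevant part of the create_file /
// open_file_upload reply described above.
type createFileResponse struct {
	RequireHashOnly bool
}

// uploadFile sketches the dedup flow: hash first, then skip the chunk
// upload entirely when the server already has an identical file.
func uploadFile(data []byte, create func(md5sum string, size int) createFileResponse,
	uploadChunks func([]byte), closeUpload func(md5sum string, size int)) {

	sum := md5.Sum(data)
	md5sum := hex.EncodeToString(sum[:])

	resp := create(md5sum, len(data))
	if !resp.RequireHashOnly {
		uploadChunks(data) // server doesn't have it: send the bytes
	}
	// Either way, close the upload with the same size and MD5.
	closeUpload(md5sum, len(data))
}

func main() {
	data := []byte("hello opendrive")
	// Simulate a server that already has an identical file.
	uploadFile(data,
		func(md5sum string, size int) createFileResponse {
			return createFileResponse{RequireHashOnly: true}
		},
		func(b []byte) { fmt.Println("uploading", len(b), "bytes") },
		func(md5sum string, size int) { fmt.Println("closed upload", md5sum, size) },
	)
}
```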
> Failed to copy: failed to create file: Post https://dev.opendrive.com/api/v1/upload/create_file.json: dial tcp: lookup dev.opendrive.com: too many open files
>
> I understand that this is server side but I suppose this means that OpenDrive restrict the number of concurrent uploads.

It's a leak in the client code; please check that all opened file/socket handles are closed properly after use.
@devbazilio, regarding your comment about:

> Supports modtime for files but not directories.

If the value for directories isn't settable by the client (and your comment seems to confirm that, as you say it can only be set by their backend), then it doesn't support storing modtime as meant here, because it can't be used to preserve the modtime that was set on the source filesystem.
@devbazilio got it; please make sure to check the whole thread before duplicating the same comments, as it leads to clutter. Either way, I appreciate the time if you plan to help us debug/fix this code.
Just seeking advice: I was searching online for how to count the files opened (based on lsof) and came upon the command `ls /proc/$PID/fd/ | wc -l` (which is supposed to track files opened by PID?).
With the following command: `rclone copy --transfers=10`
Does that mean the maximum number of files opened concurrently is 24?
@huffiewuffie Excellent find. I'm unfamiliar with that command, but at first glance, it looks legit. Can you check something for us? What happens if you increase the number of transfers (e.g. --transfers=20), does the number of open file handles increase proportionally with the "ls /proc..." command you found? Just a thought. I'll also do some more tests this weekend with the same command you suggested.
@Khouba @ncw
Apologies I can't devote more time to this worthy project. I have 4 very young children that I can't miss out on. Regardless, I've been spending some of the "free" time that I do have looking through the golang spec and suggested material to actually learn go properly. Adding another language to the toolbox will be worth it in the long term for me.
Cheers and ttys.
I'm not sure if the command is legit for this purpose. With --transfers=10 the initial count I got was 24, and after a few hours it jumped up to 150-170. With --transfers=15 the initial value was 33-34, but after a few hours it went up to 130 (the max I got was 700, and the transfers started slowing down).
I have a hunch that there is some leak in the checking, but I'm still monitoring.
I have not thoroughly tested this, but OpenDrive supports WebDAV (https://www.opendrive.com/webdav) which is now in the new rclone milestone.
@olanmatt works. See my comment in https://github.com/ncw/rclone/issues/580
@Khouba do you want me to have a go at merging this (for 1.42 now)? Or do you think the webdav support is "good enough" for opendrive?
I pulled it into a branch and rebased and fixed the bit rot. It compiles and quite a lot of the integration tests work which is very good :-)
I've merged it into the opendrive branch: https://github.com/ncw/rclone/tree/opendrive on the rclone main repo.
You'll see from the travis tests that there is some work to do...
Let me know what you want to do
Thanks
Nick
@ncw hey, it'd be awesome if you finished this. I honestly didn't try WebDAV, since I've been using this without issues for some time. I didn't have the spare time to finish it, though.
Thank you
@Khouba great! The backend looks nearly done. I'll hold it until the 1.42 release as it won't have enough time to get tested. Thank you to @Khouba and @olihey for your work so far :-)
I've been working on the opendrive backend. It is now passing all the tests except one - the server side directory move.
If any interested parties would like to give it a go, here it is:
https://beta.rclone.org/v1.41-016-g6fe0fce3-opendrive/ (uploaded in 15-30 mins)
I've given it some testing with larger amounts of files and it is looking good.
You can find the code in the opendrive branch if anyone wants to send pull requests against it!
I've finished off the opendrive backend and merged it to master
https://beta.rclone.org/v1.41-047-gcdde8fa7/ (uploaded in 15-30 mins)
It will be in the latest beta from now on and in v1.42.
Please test and report bugs!
Thank you @olihey and @Khouba for getting nearly all the way there :-) I've merged it as 3 commits one by each of you and one by me (I normally merge backends as 1 commit but I wanted to show where credit was due!).
Amazing, thanks. Will test straight away 👍
Hi there!
After a long time without using rclone, today I downloaded the latest beta to use it with my new OpenDrive account.
The thing is that when I try to mount OpenDrive on my Synology through ssh using:
rclone mount OpenDrive: --allow-non-empty --allow-other --default-permissions --read-only /volume1/TimeMachine/OpenDrive &
I receive:
Failed to create file system for "OpenDrive:": didn't find filing system for "opendrive".
I've tried Rclone Browser on my Mac and I had no trouble doing some test uploading some files to OpenDrive.
Anyone can help?
Thanks!
Solved: I'm silly; I installed the beta on my computer but not on my Synology, and that's why I couldn't mount the drive.
@curro88 the type field in the config file has an extra space in it, by the look of it.
I'm going to close this as it is done - hooray!