SickChill: Dead-slow shutil.copyfile and shutil.move in helpers.py

Created on 2 Mar 2017  ·  43 comments  ·  Source: SickChill/SickChill

SickRage/sickbeard/helpers.py

The methods defined in helpers.py for moving and copying files use shutil.move and shutil.copyfile, and they are dead-slow on a Mac (running SickRage, downloading to a local volume) when copying/moving files to an smb share (to which the Mac is connected and which holds the final repository).

Typically I reach 1-3MB/s, where if I drag'n'drop a file from the Mac to the smb share, I reach well above 50MB/s.

This seems to be a known issue with python, see for example http://stackoverflow.com/questions/22078621/python-how-to-copy-files-fast.

I've seen other methods documented, including in the link above, that take into account whether the platform is OSX or Windows and use faster (OS-specific) methods to copy/move the files, resulting in way faster copying/moving during post-processing.

Is fixing this on the roadmap?

Bug / Issue Confirmed


All 43 comments

Thanks for the issue report! Before a real human comes by, please make sure your report has all the below criteria checked

  • [ ] Include basic information: Branch/Commit, OS, What you did, What happened, What you expected
  • [ ] Enable debug logging (be sure to disable after the bug is fixed)
  • [ ] Post debug logs, either inline (for smaller logs) or using gist

Please make sure you also read how to create an issue and followed all of the steps.

The title should describe your issue. Having "SR not working" or "I get this bug" as the title of 100 issues isn't really helpful. We will close issues if there isn't enough information.

Sometimes the devs may seem like grunts and respond with short answers. This isn't (always) because the dev hates you, but because he's on mobile or busy fixing bugs. If something isn't clear, please let us know, and this bot may get updated to automatically answer you.

Thanks!

This is a report of slow shutil.copy command on Windows. Seems to be buffer-related. Increasing the buffer reportedly improves performance a lot.
http://stackoverflow.com/questions/21799210/python-copy-larger-file-too-slow
This fixes shutil.copy on Windows. My request is to include shutil.move as well and be platform-specific when running on either Mac, Win or *nix.
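For reference, the buffer fix from that answer amounts to passing a larger length to shutil.copyfileobj instead of its 16 KB default. A minimal sketch (the function name is mine, not SickRage's):

```python
import shutil

def copy_large_buffer(src, dst, buffer_size=1024 * 1024):
    """Copy src to dst using a 1 MB buffer instead of shutil's 16 KB default."""
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
        shutil.copyfileobj(fsrc, fdst, buffer_size)
    shutil.copystat(src, dst)  # preserve timestamps/permissions, like shutil.copy2
```

On local disks the difference is small, but over a network share the per-call round-trip overhead makes the buffer size matter far more.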

Can anyone spend some time on this, I have very limited programming skills myself...

Confirmed on Windows 7 too.

Another option is to use the solution suggested here, for the copy function at least.
(This is slightly modified, haven't tested though)
I might test this later today.

# Adapted from http://blogs.blumetech.com/blumetechs-tech-blog/2011/05/faster-python-file-copy.html
import os
import shutil
import stat

def copy_file(src, dst, buffer_size=10485760, preserve_file_date=True):
    """
    Copies a file to a new location.
    Much faster performance than Apache Commons due to use of larger buffer.

    :param src:    Source file path
    :param dst:    Destination file path
    :param buffer_size:    Buffer size to use during copy
    :param preserve_file_date:    Preserve the original file date
    """
    # Make sure the destination directory exists; create it if it doesn't
    dst_parent, dst_file_name = os.path.split(dst)
    if not os.path.exists(dst_parent):
        os.makedirs(dst_parent)

    # Shrink the buffer for small files (but never below 1 KB)
    buffer_size = min(buffer_size, os.path.getsize(src))
    if buffer_size == 0:
        buffer_size = 1024

    if shutil._samefile(src, dst):  # private helper; os.path.samefile is the public API
        raise shutil.Error("`{0}` and `{1}` are the same file".format(src, dst))
    for fn in [src, dst]:
        try:
            st = os.stat(fn)
        except OSError:  # file most likely does not exist
            pass
        else:  # XXX What about other special files? (sockets, devices...)
            if stat.S_ISFIFO(st.st_mode):
                raise shutil.SpecialFileError("`{}` is a named pipe".format(fn))
    with open(src, 'rb') as fsrc:
        with open(dst, 'wb') as fdst:
            shutil.copyfileobj(fsrc, fdst, buffer_size)

    if preserve_file_date:
        shutil.copystat(src, dst)

Great to see this issue open. Good luck!

@sharkykh we used to have a custom copy that was monkey patched into shutil.
There is a better way to monkey patch it in, I guess. The issue is that the default buffer size is 16k.
I think this will work

import shutil

# Keep a reference to the original, then replace shutil.copyfileobj with a
# wrapper that defaults to a 16 MB buffer instead of the built-in 16 KB.
copyfileobj_orig = shutil.copyfileobj

def copyfileobj(fsrc, fdst, length=16*1024*1024):
    return copyfileobj_orig(fsrc, fdst, length)

shutil.copyfileobj = copyfileobj

There is this option also, we don't really have to stick with shutil
http://stackoverflow.com/a/28129677
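The linked answer drops shutil in favor of a raw read/write loop with a preallocated buffer. Roughly (a sketch along those lines, not the exact code from the answer):

```python
def raw_copy(src, dst, buffer_size=512 * 1024):
    """Copy src to dst with a preallocated buffer and readinto(),
    avoiding a new bytes object per chunk."""
    buf = bytearray(buffer_size)
    view = memoryview(buf)
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
        while True:
            n = fsrc.readinto(buf)
            if not n:
                break
            fdst.write(view[:n])  # write only the bytes actually read
```

Since readinto() reuses the same bytearray on every iteration, the buffer size becomes the only tuning knob.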

I am very interested to see what comes of this.
On OS X, a file move operation using the shutil.move in helpers.py takes hours on a 5GB file going from SSD to an external USB 3.0 drive.

Copy-and-paste or cut-and-paste in OS X, however, takes around 3-5 minutes.

@roelanddewindt @haqthat
I would appreciate if you could switch branches in order to test this.

Enter your Github username and password in the General Config -> Advanced Settings and save it.
Shut down SickRage, edit the Data/config.ini file, look for developer = 0 and set it to 1.
Start SickRage, go back to General Config -> Advanced Settings -> Github -> Branch version, and checkout the branch named faster-file-functions.
After SickRage restarts, test processing files - with move or copy process method.

Just switch to develop and try it

Unfortunately, after switching to the develop branch and restarting, I saw the transfer speeds from local volume to NAS remain mainly unchanged. Was about 3.3MB/s before, drops to about 3.0MB/s after. Same with copy and move...

Drag-n-drop the same file between same source/dest volumes gives a transfer speed of about 30MB/s, 10x increase.

I do see the changes to code in helpers.py (increasing the buffer), so that's accounted for, but doesn't seem to help.

Can you change the code to use the OS-native move and copy mechanisms? See http://stackoverflow.com/questions/22078621/python-how-to-copy-files-fast for a hint on this.

Thanks for the effort you put into this.

I don't really like the idea of using OS-specific commands for such a simple operation as copying a file.
It doesn't feel right to me...
I'm pretty sure the issue is the buffer size, but I don't think I can just raise it to something like 64MB without consequences, e.g. for users with hard drives that have a 32MB cache.

But perhaps I'm wrong, or I'm looking at it the wrong way...

As it turns out, I can't actually see if my code makes any difference.
I am copying a 1.11 GB file over from an HDD to an SSD, same system:
Copying using Ctrl+C, Ctrl+V: 18s~19s
Copying using shutil.copy:

Copy method | Result
--- | ---
Default buffer | 18s
16 MB buffer | 19s
64 MB buffer | 24s

I'm actually seeing a decrease in performance... :confused:

Also, I guess I don't have that issue on Windows, as I now realize that the reason it was slow for me is because I was copying within the same hard drive..

So copying from and to local volumes is speedy whatever the buffer size.
Have you tried copying from a local volume to an smb share? That's where
the problem of slow speeds crops up.
PS: I should try copying local2local myself. On the road now, so it will have
to wait. I suspect local2local will be no problem on my system either.


SSD to SSD on SMB share:

File size: 1.11 GB

Method | Result
--- | ---
Ctrl-C, Ctrl-V | ~11s
Default shutil.copy | 51s
10 MB shutil.copy | 12s
64 MB shutil.copy | 12s

Maybe it's the SSD...

SSD to HDD on SMB share:

File size: 1.11 GB

Method | Result
--- | ---
Ctrl-C, Ctrl-V | ~20s
Default shutil.copy | 50s
10 MB shutil.copy | 24s
64 MB shutil.copy | 25s

So in both cases I'm seeing a big improvement.
So maybe you were right and this only helps Windows users.

@roelanddewindt Have you had a chance to test local-to-local?

@miigotu I'm getting very good results on Windows now that I'm copying from an SSD to an HDD on same system.
How do _you_ feel about using native commands based on platform (copy / cp) instead of shutil? Maybe Python itself is the cause of the slowness.
Should be pretty straightforward to implement.
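A platform-aware native copy along the lines suggested here might look like the sketch below (hedged: the helper name, the command choices and the shutil fallback are mine; Python 3 shown, where the Python 2 used in this thread would pass open(os.devnull, 'wb') instead of subprocess.DEVNULL). Passing the arguments as a list, rather than one quoted string, lets subprocess handle spaces and backslashes without shell escaping:

```python
import os
import shutil
import subprocess

def native_copy(src, dst):
    """Copy via the OS-native tool; fall back to shutil.copy on any failure."""
    if os.name == 'nt':
        cmd = ['cmd', '/c', 'copy', '/y', src, dst]  # Windows built-in copy
    else:
        cmd = ['cp', src, dst]  # macOS / Linux / *BSD
    try:
        subprocess.check_call(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
    except (OSError, subprocess.CalledProcessError):
        shutil.copy(src, dst)
```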

@sharkykh can you give the commands you're using for this? I'd like to test on my Macbook to a SMB share on my Unraid server as well as Docker.

@OmgImAlexis I am simply running a manual post-processing task.
I'm tweaking the buffer set in the patch (4b5f21e3ede977389aee30e5a15d7c67273b9cad) between tests and restarting SR.

A simpler way is to put the patch code in a py script and call the methods however you like.

@sharkykh Sorry for my late input.

I tested on both the master and develop branches, local-local. A collection of 12 media files (6.66GB) copied in 4 minutes at an average speed of 26MB/s. I repeat: the same on both master and develop.
Note that with SR copying, a pause occurs between files while SR fetches artwork etc. I did see spikes of around 60-70MB/s when actual copying took place. So compared with the speeds I got with native copying in OSX (below), SR reached the same speeds on both master and develop.

Copying the file set with OSX native local-local took 1.5 minutes at 70MB/s.
Copying the file set with OSX native local-share took 3.5 minutes at 32MB/s.

I won't even try waiting for the local-share copying to complete. On both develop and master, I see my speed drop to about 3.5MB/s.

In a nutshell: python local-local: OK, but local-share NOT OK, with or without increased buffers.

As to your _feeling_ :) about using native copy/move commands. I hope I can persuade you to branch this so we can test at least.

I just debated with my brother and he suggested:

  1. 10MB buffer doesn't do us any good, and it should be 128KB/256KB/512KB max.
  2. That the issue might be that the copy function isn't using non-blocking I/O.

I'll make a small script to test these ideas and upload it here for you to test with.

I have to go now so I'll keep it short.

This is NOT by any means a final version.

Should only be for testing...
https://pastebin.com/bDk9vQMy
Save this as a python script testfilecopy.py

You can select a buffer by changing BUFFER['256kb'] to one of these:

BUFFER['128kb']
BUFFER['512kb']
BUFFER['10mb']

or any number you'd like:

KB = Bytes * 1024
MB = Bytes * (1024 ** 2)

Running the script:

# Syntax:
python testfilecopy.py <source-file> <destination-file>

# Mac:
python testfilecopy.py "/path/to/source/file.mkv" "/Volume/destination/file.mkv"
# Windows
python testfilecopy.py "C:\Source\file.mkv" "\\ip-or-hostname\Destination\file.mkv"

Timing the script:

On Mac

time python testfilecopy.py "/path/to/source/file.mkv" "/Volume/destination/file.mkv"

Powershell on Windows

Measure-Command {python testfilecopy.py "C:\Source\file.mkv" "\\ip-or-hostname\Destination\file.mkv"}

```
time python testfilecopy.py "/Users/xo/Desktop/The Great Dictator (1940).mkv" "/Volumes/dev/The Great Dictator (1940).mkv"

2017-04-13 19:57:49 :: START :: source: /Users/xo/Desktop/The Great Dictator (1940).mkv :: destination: /Volumes/dev/The Great Dictator (1940).mkv
Traceback (most recent call last):
  File "testfilecopy.py", line 65, in <module>
    shutil.copy(src_file, dst_file)
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 119, in copy
    copyfile(src, dst)
  File "testfilecopy.py", line 41, in copyfile
    with open(dst, os.O_CREAT | os.O_WRONLY | os.O_NONBLOCK) as fdst:
TypeError: file() argument 2 must be string, not int
python testfilecopy.py "/Users/xo/Desktop/The Great Dictator (1940).mkv" 0.05s user 0.02s system 79% cpu 0.090 total
```

Exactly the same errors here... on Python 2.7.12

Forgot to put mode, sorry.
Try this: https://pastebin.com/ud7G0JtJ

It seems I was completely wrong. I used open instead of os.open.
When I change it to with os.open(dst, os.O_CREAT | os.O_WRONLY | os.O_NONBLOCK) as fdst
I get AttributeError: __exit__ (os.open returns a raw file descriptor, not a file object, so it can't be used as a context manager).

Maybe we should use sendfile. (Has no effect for Windows)

2017-04-13 20:37:03 :: START :: source: /Users/xo/Desktop/The Great Dictator (1940).mkv :: destination: /Volumes/dev/The Great Dictator (1940).mkv
2017-04-13 20:37:49 :: DONE
python testfilecopy.py "/Users/xo/Desktop/The Great Dictator (1940).mkv"   0.76s user 13.31s system 30% cpu 45.485 total
➜  Desktop time cp "/Users/xo/Desktop/The Great Dictator (1940).mkv" "/Volumes/dev/The Great Dictator (1940).mkv"
cp "/Users/xo/Desktop/The Great Dictator (1940).mkv"   0.01s user 5.92s system 16% cpu 37.017 total
➜  Desktop time rsync -zvhW "/Users/xo/Desktop/The Great Dictator (1940).mkv" "/Volumes/dev/The Great Dictator (1940).mkv"
The Great Dictator (1940).mkv

sent 1.70G bytes  received 35 bytes  13.42M bytes/sec
total size is 1.81G  speedup is 1.06
rsync -zvhW "/Users/xo/Desktop/The Great Dictator (1940).mkv"   91.89s user 14.70s system 84% cpu 2:05.89 total

Maybe we should use sendfile.

No Windows support.
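For completeness, a file-to-file sendfile copy could be sketched as below. Caveats: os.sendfile exists only on Python 3.3+, and copying between two regular files this way works on Linux but not on macOS, where sendfile requires a socket as the output. (Python 3.8 later built exactly this kind of platform fast path into shutil itself.)

```python
import os

def sendfile_copy(src, dst):
    """Copy src to dst in-kernel via os.sendfile (Linux, Python 3.3+)."""
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
        remaining = os.fstat(fsrc.fileno()).st_size
        offset = 0
        while remaining > 0:
            # sendfile returns the number of bytes actually transferred
            sent = os.sendfile(fdst.fileno(), fsrc.fileno(), offset, remaining)
            if sent == 0:
                break
            offset += sent
            remaining -= sent
```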

I think the debate with your brother ended with the wrong conclusion. It's a well-known fact that the small buffer in shutil is a very big problem. Changing it to a large buffer avoids thrashing.

The buffer size is only one part of a 2 or 3-issue solution.

Just look back in SR history and find the patch that used to be in there, I removed it like almost 2 years ago.

The brother here :)

I suggested checking various buffer sizes, as on my rMBP (connected over 802.11ac to a Ubiquiti AP) we tested with a 10MB buffer and a 512KB buffer - and the 512KB one performed better (by around 20-25 seconds, if I remember correctly).

Regarding buffer sizes: Mac/Linux/Unix machines using Samba as their connection to the Windows world are limited by the Samba implementation, whose configuration tends to be on the conservative side (https://www.arm-blog.com/samba-finetuning-for-better-transfer-speeds).

Are the timings consistent in all possible cases?

  • [ ] Reading from local -> Writing on SMB Share
  • [ ] Reading from SMB Share -> Writing on local
  • [ ] Reading from SMB Share -> Writing on SMB Share

To the brother @ShaharHD,
In my humble opinion, the SMB side - on either OSX or the NAS - is not the problem. The Python method is. Copying files local to NAS (smb://) using OSX is speedy; Python crawls. Also, I tried mounting the same shares using afp:// instead of smb:// and the same speed (differences) occurred.

@roelanddewindt can you test with an nfs mount (asynchronous; can be either TCP or UDP)?
And if you use a cp command rather than drag & drop, are the timings better than Python's?

@ShaharHD the problem is that the built-in buffer size is 16k, IIRC, and that is for sure not enough; it should be at least 1MB, but that all depends on things like the hard drive's hardware cache, etc. In the SMB case, it also depends on the version/implementation of Samba and how it was compiled. We did a lot of work figuring this stuff out back in the day with WiiFlow and WiiXplorer to maximize transfer speeds to the Wii =P

Really, anything less than 1MB is silly. (I think the most common sizes we used were 2-3MB)

@dimok789 what size did WiiXplorer end up using in the end for transfer buffer over smb?

@miigotu This is why I suggested that @sharkykh allow setting the buffer size, so end users can see which one performs best for them - maybe giving better insight into the issue.

Another suggestion for a test (if you have enough RAM): load the file in big chunks (500MB) into RAM and copy it from there, timing the chunk writes - it might give you another insight.
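That chunk-timing experiment could be sketched like this (the function name and chunk size are illustrative, not anything from SR):

```python
import time

def timed_chunk_copy(src, dst, chunk_size=500 * 1024 * 1024):
    """Copy src to dst in large chunks, returning each write's wall time."""
    write_times = []
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
        while True:
            chunk = fsrc.read(chunk_size)  # pull the whole chunk into RAM
            if not chunk:
                break
            start = time.time()
            fdst.write(chunk)
            fdst.flush()
            write_times.append(time.time() - start)
    return write_times
```

If the per-chunk write times stay flat while overall throughput is low, the bottleneck is on the write (share) side rather than in how Python reads.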

I've looked at http://stackoverflow.com/questions/22078621/python-how-to-copy-files-fast (mentioned at the top), and it seems there's already a solution there (a bit complex, though).

@ShaharHD: "Wow" is the only word I have... I mounted the shares using nfs:// (instead of afp:// or smb://) and now things fly...

Recap of my test speeds of the 6.66GB file set:

  • OSX-native, local to local: 1.5 minutes, 70MB/s
  • OSX-native, local to _nfs_: 2 minutes, 52MB/s
  • OSX-native, local to smb/afp: 4 minutes, 26MB/s

  • SR-develop, local to local: 4 minutes, 26MB/s (including artwork fetching)

  • SR-develop, local to _nfs_: 4 minutes, 26MB/s (including artwork fetching)
  • SR-develop, local to smb/afp: won't even start holding my breath

So from what I see, NFS solves my problem. It doubles my OSX native copying speeds, and multiplies SR copying speeds by a zillion. SR copying to an nfs share is now as fast as SR copying to a local volume. I think I must have reached a file _read_ bottleneck there somewhere...

New testfilecopy.py

Implements the custom file copy suggested here: http://stackoverflow.com/a/28129677

It also implements the native functions too,
however copy (Windows) didn't work for me.
I can't figure out how to strip the excess backslashes Python is adding (escaping)
before calling subprocess.Popen.

New syntax is:

python testfilecopy.py <method> "<source-files-separated-by-pipe>" "<destination-folder>"
# <method> = custom / native
# <source files> = /path/to/file.ext|/path/to/another/file.ext
# <destination folder> = /path/to/destination/  # folder, not file

Buffer is set to 512KB, you can change it on line 6.

Can't get it to work; unsure what I'm doing wrong... I'm copying a file from the current working directory (where the script resides as well) to a directory on another volume where user admin has equal write privileges. I thought it was the space in the destination path, but I tried another destination path without spaces and the same FAILED message crops up.

admin$ python testfilecopy.py custom 1GB.dmg /Volumes/Stripe\ 3-4/Users/admin/Desktop
[2017-04-13 22:24:06] method: custom , buffer: 512 KB
[2017-04-13 22:24:06] src_files[1]: ['1GB.dmg']
[2017-04-13 22:24:06] dst_path: /Volumes/Stripe 3-4/Users/admin/Desktop
[2017-04-13 22:24:06] START :: custom :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Stripe 3-4/Users/admin/Desktop/1GB.dmg
[2017-04-13 22:24:06] FAILED
[2017-04-13 22:24:06] FINISH
[2017-04-13 22:24:06] DONE IN 0

admin$ python testfilecopy.py native 1GB.dmg /Volumes/Stripe\ 3-4/Users/admin/Desktop
[2017-04-13 22:24:32] method: native
[2017-04-13 22:24:32] src_files[1]: ['1GB.dmg']
[2017-04-13 22:24:32] dst_path: /Volumes/Stripe 3-4/Users/admin/Desktop
[2017-04-13 22:24:32] START :: native :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Stripe 3-4/Users/admin/Desktop/1GB.dmg
[2017-04-13 22:24:32] FAILED
[2017-04-13 22:24:32] FINISH
[2017-04-13 22:24:32] DONE IN 0

Updated the gist because I did something wrong, and to log more information about failures.
Please download the file again.

Custom method is fine...

admin$ python testfilecopy.py custom 1GB.dmg "/Volumes/Stripe 3-4/Users/admin/Desktop"
[2017-04-13 22:46:57] method: custom , buffer: 512 KB
[2017-04-13 22:46:57] src_files[1]: ['1GB.dmg']
[2017-04-13 22:46:57] dst_path: /Volumes/Stripe 3-4/Users/admin/Desktop
[2017-04-13 22:46:57] START :: custom :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Stripe 3-4/Users/admin/Desktop/1GB.dmg
[2017-04-13 22:47:16] FINISH
[2017-04-13 22:47:16] DONE IN 18

Native method throws errors

admin$ python testfilecopy.py native 1GB.dmg "/Volumes/Stripe 3-4/Users/admin/Desktop"
[2017-04-13 22:47:35] method: native
[2017-04-13 22:47:35] src_files[1]: ['1GB.dmg']
[2017-04-13 22:47:35] dst_path: /Volumes/Stripe 3-4/Users/admin/Desktop
[2017-04-13 22:47:35] START :: native :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Stripe 3-4/Users/admin/Desktop/1GB.dmg
Traceback (most recent call last):
  File "testfilecopy.py", line 129, in <module>
    if not call_method(method, src_file, dst_file):
  File "testfilecopy.py", line 72, in call_method
    return copy_with_subprocess(src_file, dst_file)
  File "testfilecopy.py", line 63, in copy_with_subprocess
    docopy = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

OK, tested with my brother's MBP, should work for you now:
https://gist.github.com/sharkykh/e727198b4c8b85f631706e163f52722a

The native method no longer throws an error, but no actual copying takes place...

Mac-Pro:Desktop admin$ python testfilecopy.py native 1GB.dmg "/Volumes/Multimedia"
[2017-04-13 23:31:41] method: native
[2017-04-13 23:31:41] src_files[1]: ['1GB.dmg']
[2017-04-13 23:31:41] dst_path: /Volumes/Multimedia
[2017-04-13 23:31:41] START :: native :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Multimedia/1GB.dmg
[2017-04-13 23:31:41] cmd: cp "/Users/admin/Desktop/1GB.dmg" "/Volumes/Multimedia/1GB.dmg"
[2017-04-13 23:31:41] FINISH
[2017-04-13 23:31:41] DONE IN 0:00:00.002317

Then tested with custom method and various buffer sizes... (cut a bit in the output). All this with the 1GB test file and to a nfs:// share.

[2017-04-13 23:32:19] DONE IN 0:00:16.730297

[2017-04-13 23:32:30] method: custom , buffer: 16 KB
[2017-04-13 23:32:47] DONE IN 0:00:16.724298

[2017-04-13 23:32:58] method: custom , buffer: 32 KB
[2017-04-13 23:33:15] DONE IN 0:00:17.222858

[2017-04-13 23:33:32] method: custom , buffer: 64 KB
[2017-04-13 23:33:49] DONE IN 0:00:16.753438

[2017-04-13 23:34:04] method: custom , buffer: 128 KB
[2017-04-13 23:34:23] DONE IN 0:00:19.561199

[2017-04-13 23:34:38] method: custom , buffer: 256 KB
[2017-04-13 23:34:55] DONE IN 0:00:16.662577

[2017-04-13 23:35:12] method: custom , buffer: 512 KB
[2017-04-13 23:35:28] DONE IN 0:00:16.307206

[2017-04-13 23:35:44] method: custom , buffer: 1 MB
[2017-04-13 23:36:01] DONE IN 0:00:16.807435

[2017-04-13 23:36:17] method: custom , buffer: 2 MB
[2017-04-13 23:36:34] DONE IN 0:00:16.931678

[2017-04-13 23:37:28] method: custom , buffer: 32 MB
[2017-04-13 23:37:45] DONE IN 0:00:16.867575

[2017-04-13 23:37:00] method: custom , buffer: 128 MB
[2017-04-13 23:37:17] DONE IN 0:00:17.526401

My conclusion is that (to an nfs:// share at least), the buffer size has little or no influence on transfer speed.

I'm not eager to move back to smb:// mounts. The move to nfs:// has proven beneficial for my CouchPotato post-processing speeds as well.

If you @sharkykh want to, I'll test on smb:// as well (and native if you can fix it), but what interests me more is whether your brother @ShaharHD sees the same speed increase when he connects over nfs:// instead of smb://... If so, I'm afraid all buffer tweaking or fiddling with native rather than Python copy commands will have a much smaller effect than the move to nfs://, as was my case.

@roelanddewindt we know nfs is fast, it's fast on everything. The protocol itself is just better in general. All of these tests should be for smb lol.

The native method no longer throws an error, but no actual copying takes place...

Mac-Pro:Desktop admin$ python testfilecopy.py native 1GB.dmg "/Volumes/Multimedia"
[2017-04-13 23:31:41] method: native
[2017-04-13 23:31:41] src_files[1]: ['1GB.dmg']
[2017-04-13 23:31:41] dst_path: /Volumes/Multimedia
[2017-04-13 23:31:41] START :: native :: source: /Users/admin/Desktop/1GB.dmg :: destination: /Volumes/Multimedia/1GB.dmg
[2017-04-13 23:31:41] cmd: cp "/Users/admin/Desktop/1GB.dmg" "/Volumes/Multimedia/1GB.dmg"
[2017-04-13 23:31:41] FINISH
[2017-04-13 23:31:41] DONE IN 0:00:00.002317

That's why I don't want to use shell commands.. You have to manually debug each error/exception.
And in this example I don't even have one, and obviously (from the timedelta) it didn't do anything.

What happens if you manually run the command native copy used?

cp "/Users/admin/Desktop/1GB.dmg" "/Volumes/Multimedia/1GB.dmg"

Maybe your system is doing some caching? Or maybe because the file is already there and it's the same, it doesn't really copy it?

@sharkykh I understand your reluctance to rely on external commands. It complicates things.

To be honest, ever since I've switched from smb:// to nfs:// I'm more than OK with performance of post-processing. I have to thank you for picking this up and trying to solve the initial speed issue. Maybe you should put the nfs:// tip in a FAQ "performance when post-processing to smb:// shares" of some kind... it solved my problem on Mac OSX at least. I guess a lot of people will not have the option for nfs://, but then again, the majority may not post-process from local to a share, but keep everything local.

Thanks again all!

