Restic: Slow, although it is just checking timestamps

Created on 17 Feb 2016 · 5 comments · Source: restic/restic

I have 164809 files to back up regularly (about 60 GB). Every time I run "restic backup", the reported speed never goes above 33 MB/s, and checking with strace shows it is only doing lstat() calls.

This works out to about 20 minutes per backup. I wonder what restic is doing: almost all files are unmodified, yet it still shows a steady 33 MB/s. As I understand it, restic only needs to lstat() them, which is exactly what it already does in the first step of the backup just to show the total size, and that takes only 6 or 7 seconds.

Is it just the CPU time spent checking whether the contents of a file with the same path and timestamp are already present in a previous restic snapshot?
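
For reference, the way I checked the syscall pattern was to let strace print a per-syscall count summary for a whole run; the repository and target paths below are just placeholders for my setup:

    # Count syscalls (including child processes) for one backup run; paths are placeholders
    strace -f -c -o restic-syscalls.txt restic -r /srv/restic-repo backup /home/user/data
    # restic-syscalls.txt then shows how many lstat() calls were made vs. read/network calls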

feature enhancement

Most helpful comment

Yes, this will most likely be the reason. For this particular use case there's a workaround: use the -f (--force) option for the backup command, which will read all files locally again and not load metadata from the repo. That should be fast.

All 5 comments

At the moment, the metadata for the files and directories is not cached, but loaded (and decrypted) from the repository. This is done once per directory. I'm planning to cache metadata locally, which is not yet implemented but should speed up "incremental" backups a lot.

Hi

Could this also cause poor performance for incremental backups over a slow WAN connection?

I just backed up a folder with a bit over 9000 files and 250 MB to a remote S3 server. Both computers are connected via an asymmetrical internet connection with 50 Mbit/s down and 5 Mbit/s up.

The initial backup took about 5 minutes and seemed pretty reasonable. But a second backup shortly after that took almost twice as long! A folder with fewer files seems to be much faster.
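
For completeness, this is roughly how I run it; the S3 endpoint, bucket, and folder are placeholders, not my real setup:

    # Placeholder endpoint and bucket; credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    export RESTIC_REPOSITORY=s3:https://s3.example.com/my-bucket
    export RESTIC_PASSWORD='repo-password'
    restic backup ~/my-folder    # ~9000 files, ~250 MB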

Yes, this will most likely be the reason. For this particular use case there's a workaround: use the -f (--force) option for the backup command, which will read all files locally again and not load metadata from the repo. That should be fast.
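
A minimal sketch of the workaround, with placeholder repository and target paths:

    # --force re-reads all files locally instead of loading parent snapshot metadata from the repo
    restic -r /srv/restic-repo backup --force /home/user/data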

Thank you very much! Works like a charm!

We've added a local metadata cache (see #1040) in the master branch. I think this issue is resolved, so I'm closing it. Thanks!
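
If you're on a master build that includes #1040, the cache can be tuned with the global flags below; paths are placeholders and exact availability depends on your build:

    # Override the default cache location (normally under the user's cache directory)
    restic -r /srv/restic-repo backup --cache-dir /var/cache/restic /home/user/data
    # Or disable the local cache entirely to get the old behaviour
    restic -r /srv/restic-repo backup --no-cache /home/user/data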
