Dear Developers,
While syncing, the ETA is shown, and I find it totally unreliable. It seems the estimate is "total transfer to be made / current transfer speed". In reality there is a "constant cost" for transferring a single file, which is easy to estimate (it is around 0.8 sec in my case). Actually, in my case I am pretty sure the sync will be over in approximately two hours, simply because the remainder of the files consists of larger files.
The "transfer time per file" is given by:
a constant value (in my case 0.8 sec) for small files,
filesize / transfer rate for large files.
There is a smooth crossover between the "small" and "large" file regimes. Currently OC wrongly assumes all files are large.
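As an illustration of the model described above, a minimal sketch (not the actual client code; the 0.8 s overhead and 500 kB/s rate are just the numbers mentioned in this thread) could look like this:

```cpp
#include <cstdint>

// Per-file transfer time as a fixed per-file overhead plus a size-dependent
// term. Small files are dominated by the overhead, large files by size/rate,
// and the sum gives the smooth crossover between the two regimes.
double estimatedFileTimeSec(std::int64_t sizeBytes,
                            double perFileOverheadSec = 0.8,        // ~constant cost per file
                            double transferRateBytesPerSec = 500e3) // ~500 kB/s upload
{
    return perFileOverheadSec
         + static_cast<double>(sizeBytes) / transferRateBytesPerSec;
}
```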
Though I guess it would require reordering the synchronization (first send a few small files, and then a large one, to get estimates of both the constant component and the actual transfer speed). Or am I wrong?

Remaining transfer time = remaining filesize / current transfer speed = 283470s = 3.28 days
Constant cost = 1654 files * 0.8s/file = 22 min.
The constant cost of each file transfer is fairly negligible at this point. There are more effects (larger files often start slowly and get faster), and the overall performance for a large set of files is hard to predict. Do you know a data transfer tool that does a good ETA prediction?
Nope, but I do not really use them. Still, the data transfer took me approximately 3 hours in the end (upload speed approx. 500 kB/s). The estimate for the "remaining time" was changing every second between 1 and 7 days (depending on whether the current "small file" was, say, a few or a few dozen kB).
Of course if transfer speeds are not stable, then it is not possible to predict the total time at all.
But:
The copy time depends mainly on filesize.
So after the program has copied, say, a few hundred files of various sizes, it should be possible to reliably estimate the ETA (by reliably I mean under these assumptions); see the sketch below.
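As a rough illustration of that idea (a sketch only, with hypothetical names, assuming the transfer times of finished files have been recorded): a least-squares fit of time = overhead + size / rate over the observed (size, time) pairs recovers both parameters, which can then be applied to the files that are still queued.

```cpp
#include <utility>
#include <vector>

struct TransferModel {
    double perFileOverheadSec; // intercept of the fit
    double bytesPerSec;        // 1 / slope of the fit
};

// Ordinary least-squares fit of "time = a + b * size" over finished transfers;
// assumes the sample contains files of sufficiently different sizes.
TransferModel fitTransferModel(const std::vector<std::pair<double, double>>& sizeTimeSamples)
{
    const double n = static_cast<double>(sizeTimeSamples.size());
    double sumS = 0, sumT = 0, sumSS = 0, sumST = 0;
    for (const auto& [size, time] : sizeTimeSamples) {
        sumS += size;
        sumT += time;
        sumSS += size * size;
        sumST += size * time;
    }
    const double slope = (n * sumST - sumS * sumT) / (n * sumSS - sumS * sumS);
    const double intercept = (sumT - slope * sumS) / n;
    return { intercept, 1.0 / slope };
}
```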
On a given connection with a given client/server combination, you can probably figure out something more sophisticated. But does this easily apply to the other cases?
You could as well group the small files, put them in a zip archive and transfer this archive (and unpack it on the server side). This way, the predictions are better (all files are about the same size, because big files get chunked) and the upload is faster.
Actually I think on a stable connection there are essentially two parameters which should yield a much more accurate prediction: the large-file transfer speed, and the total per-file processing cost of a small file. If more accuracy is required, one may measure the "average transfer speed of files <50 kB, 50-100 kB, 100-200 kB, 200-400 kB, 400-800 kB, etc." and use that to make an estimate.
This is a heuristic that should be fine-tuned and devised in such a way that it gives an exact ETA when the transfer behaviour is perfectly predictable.
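A sketch of that bucket heuristic (illustrative names only; the bucket boundaries follow the ranges listed above, and the 500 kB/s fallback is the upload speed mentioned earlier in the thread) might look like this:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Bucket { double totalBytes = 0; double totalSeconds = 0; };

class BucketedEta {
public:
    explicit BucketedEta(std::size_t bucketCount = 12) : m_buckets(bucketCount) {}

    // Record a finished transfer in the bucket matching its size.
    void recordFinished(std::int64_t sizeBytes, double seconds) {
        Bucket& b = m_buckets[bucketFor(sizeBytes)];
        b.totalBytes += static_cast<double>(sizeBytes);
        b.totalSeconds += seconds;
    }

    // Estimate the remaining time from the per-bucket average speeds.
    double estimateRemainingSeconds(const std::vector<std::int64_t>& remainingSizes) const {
        double eta = 0;
        for (std::int64_t size : remainingSizes) {
            const Bucket& b = m_buckets[bucketFor(size)];
            const double bytesPerSec = b.totalSeconds > 0 ? b.totalBytes / b.totalSeconds
                                                          : fallbackBytesPerSec();
            eta += static_cast<double>(size) / bytesPerSec;
        }
        return eta;
    }

private:
    // <50 kB -> bucket 0, 50-100 kB -> 1, 100-200 kB -> 2, ... (doubling ranges).
    std::size_t bucketFor(std::int64_t sizeBytes) const {
        if (sizeBytes < 50'000) return 0;
        const std::size_t i = 1 + static_cast<std::size_t>(std::log2(sizeBytes / 50'000.0));
        return i < m_buckets.size() ? i : m_buckets.size() - 1;
    }

    // Overall average speed, or ~500 kB/s before anything has finished.
    double fallbackBytesPerSec() const {
        double bytes = 0, secs = 0;
        for (const Bucket& b : m_buckets) { bytes += b.totalBytes; secs += b.totalSeconds; }
        return secs > 0 ? bytes / secs : 500e3;
    }

    std::vector<Bucket> m_buckets;
};
```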
Indeed, I ended up doing the zip trick.
And this is actually my complaint: even in a completely predictable environment, the ETA reported by the client may be way off. It is impossible to make it always right without psychic powers, but there are many cases where the estimate can be made well. Currently it is accurate only when large files are copied; however, copying small files can also be predictable to a large extent, and that is exactly the case where the client fails completely.
I agree that there is room for improvement, e.g. by taking into account the constant per-file overhead that each small file has. Also interesting to take into account are buffer sizes (in Qt, in the OS, on your network card, etc.).
When improving this issue, it might make sense to keep https://github.com/owncloud/client/issues/3382 and https://github.com/owncloud/client/issues/4354 in mind.
Thanks for bringing this up. I made some changes in the area a few releases back but acknowledge that it's far from ideal. The current ETA model actually attempts switching between small-file and large-file regimes, but doesn't do it well.
I believe we should model the remaining time as
remaining_file_sizes / transfer_speed
+ remaining_file_count * per_file_overhead
+ remaining_chunked_file_sizes / chunked_reassembly_speed
and estimate the three parameters as jobs progress and finish.
Unfortunately we can't estimate transfer_speed well unless there's a big file upload. And we can't estimate per_file_overhead well unless there are many small files. So the ETA would still be horribly wrong if the first hour of a synchronization is big files and the second hour is small files. But at least the estimate would get better over time.
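Written out as code, the proposed model could look roughly like this (a sketch with illustrative names, not a patch; the three parameters in EtaParams would be re-estimated as jobs progress and finish):

```cpp
#include <cstdint>

struct EtaParams {
    double transferBytesPerSec;        // best estimated while a big file is uploading
    double perFileOverheadSec;         // best estimated once many small files have finished
    double chunkReassemblyBytesPerSec; // estimated from finished chunked uploads
};

// remaining_file_sizes / transfer_speed
//   + remaining_file_count * per_file_overhead
//   + remaining_chunked_file_sizes / chunked_reassembly_speed
double estimateRemainingSeconds(std::int64_t remainingFileSizes,
                                std::int64_t remainingFileCount,
                                std::int64_t remainingChunkedFileSizes,
                                const EtaParams& p)
{
    return static_cast<double>(remainingFileSizes) / p.transferBytesPerSec
         + static_cast<double>(remainingFileCount) * p.perFileOverheadSec
         + static_cast<double>(remainingChunkedFileSizes) / p.chunkReassemblyBytesPerSec;
}
```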
@ckamm somewhat related is the speed meter when it displays > 1 MB/s, notice the switch:
Could a couple of decimals be used, instead of truncating/rounding the figure, to make it a bit more accurate?
@SamuAlfageme Would be easy to do, the current behavior is due to https://github.com/owncloud/client/issues/3403#issuecomment-134064781 and 4915bbf8f342b7225f215fd9b0d3fb252c843b9e
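Just to illustrate the suggestion (a sketch, not the client's actual formatting code): keeping two decimals above 1 MB/s instead of truncating could be as small as this:

```cpp
#include <cstdio>
#include <string>

// Format a transfer rate; keep two decimals once the value switches to MB/s so
// the displayed figure does not jump as coarsely around the unit boundary.
std::string formatSpeed(double bytesPerSec)
{
    char buf[32];
    if (bytesPerSec >= 1e6)
        std::snprintf(buf, sizeof(buf), "%.2f MB/s", bytesPerSec / 1e6);
    else
        std::snprintf(buf, sizeof(buf), "%.0f kB/s", bytesPerSec / 1e3);
    return buf;
}
```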
@ckamm I'd love to have this change! I think it makes more sense; we're not quite at the era of GB-speed connections yet, so the .XX is not unimportant. 😄
@SamuAlfageme I put it into 2.3 :)