Client: Sync error loop on Windows 10, after reinstalling app crashed during sync and starts to delete files after automatic restart

Created on 13 Dec 2020 · 32 comments · Source: owncloud/client

Expected behaviour

File sync using virtual files on Windows 10 keeps working after an automatic app upgrade.

Actual behaviour

A group of regular PDF files is ignored during the sync for an unknown reason and displayed in the ignore window of the desktop client. The client keeps trying to resync the same group of files but hits the same error each time, so it gets caught in a loop and never completes the sync; other files pending sync are put on hold and never uploaded/downloaded. After reinstalling the app, on some computers the app crashes and restarts automatically, or is restarted manually. After the restart, the app starts to remove from the server all files that have not yet been downloaded, rather than picking up the sync where it left off and downloading them.

Steps to reproduce

  1. Automatic upgrade to the latest version of the Windows 10 ownCloud client, without changing or reinstalling the sync account.
  2. Wait for the sync loop error.
  3. Reinstall the app by downloading the latest version from owncloud.com.
  4. Reconfigure the sync account and link an empty folder where the files should download.
  5. Files start to download; wait for the crash.
  6. After an automatic or manual app restart, files are removed from the server rather than downloaded.

Server configuration

Operating system: CentOS 7.8
Web server: Apache/2.4.6 (CentOS)
Database: mysql Ver 15.1 Distrib 5.5.65-MariaDB
PHP version: PHP 7.2.33
ownCloud version: ownCloud 10.5.0
Storage backend (external storage): No external storage

Client configuration

Client version: 2.7.3.2877 and 2.7.2.2626
Operating system: Windows 10 Home 2004 19041.685
OS language: Spanish
Installation path of client: C:\Program Files (x86)\ownCloud\

Logs

No error log file was configured on any of the 12 machines with the same issue. To avoid getting more files deleted (we already have to manually restore over 230,000 files from different users' trash bins), we can't run the app again to reproduce the error: it sometimes takes several hours for the app to crash, and then within a few minutes it starts to remove thousands of files. We do have the server log file, the activity output in the client window, and the .owncloudsync.log file from the ownCloud folder on 2 of the machines.


All 32 comments

Please send .owncloudsync.log to apps-<at>-owncloud.com

Thank you. Log file and more information sent

Update

We did more tests using an account with a limited number of files that could easily be restored. The desktop app does not seem to crash/restart, but rather freezes/stalls a bit, then goes into _Looking for changes in folder_, and right after that the deletion process starts.
The reason seems to be that the virtual file links are never created, which is probably why the desktop app thinks it should remove those files from the server (since the local folders are empty). When virtual files mode is switched off, the actual files are created without any problem. We tested this on the C:/ drive as well as on a secondary partition of the same hard drive - same result.
We enabled the comprehensive log on the desktop app. This segment with errors repeats throughout the log file (the Windows errors are in Spanish: "La operación de nube no es válida." means "The cloud operation is not valid."; "La operación se completó correctamente." means "The operation completed successfully."):

12-13 21:55:22:756 [ info sync.propagator ]: Completed propagation of "clientest/Word Test/TEST TEST TEST TEST - Copy.docx" by OCC::PropagateDownloadFile(0x24cf1338) with status OCC::SyncFileItem::Success
12-13 21:55:22:756 [ debug sync.statustracker ] [ OCC::SyncFileStatusTracker::slotItemCompleted ]: Item completed "clientest/Word Test/TEST TEST TEST TEST - Copy.docx" OCC::SyncFileItem::Success CSyncEnums::CSYNC_INSTRUCTION_NEW
12-13 21:55:22:763 [ info sync.propagator ]: Starting INSTRUCTION_NEW propagation of "clientest/Word Test/TEST TEST TEST TEST.docx" by OCC::PropagateDownloadFile(0x24cf12b8)
12-13 21:55:22:763 [ debug sync.propagator.download ] [ OCC::PropagateDownloadFile::start ]: "clientest/Word Test/TEST TEST TEST TEST.docx" 0
12-13 21:55:22:763 [ debug sync.propagator.download ] [ OCC::PropagateDownloadFile::start ]: creating virtual file "clientest/Word Test/TEST TEST TEST TEST.docx"
12-13 21:55:22:763 [ warning sync.vfs ]: CfCreatePlaceholders error "La operación de nube no es válida." "La operación se completó correctamente." "clientest\Word Test\TEST TEST TEST TEST.docx"
12-13 21:55:22:763 [ warning sync.vfs ]: convertToPlaceholder: could not get handle for "C:/octest/clientest/Word Test/TEST TEST TEST TEST.docx"
12-13 21:55:22:763 [ critical sync.csync.vio_local ]: CreateFileW failed on \\?\C:\octest\clientest\Word Test\TEST TEST TEST TEST.docx
12-13 21:55:22:763 [ warning sync.fileitem ]: Failed to query the 'inode' for file "C:/octest/clientest/Word Test/TEST TEST TEST TEST.docx"

The complete log file has been sent via email.
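Conceptually, this matches how a state-based sync client reconciles its journal with the local folder: the journal says the files were synced, but because the placeholders were never created the folder is empty, which the client reads as the user having deleted the files. A simplified illustration of that logic (not the actual ownCloud client code):

```python
def reconcile(journal, local_files, server_files):
    """Decide a per-file action the way a state-based sync client would.

    journal: names the client believes were synced previously.
    """
    actions = {}
    for name in server_files:
        if name in journal and name not in local_files:
            # The file was synced before but is gone locally, so a
            # state-based client concludes the user deleted it and
            # propagates the deletion to the server.
            actions[name] = "DELETE_ON_SERVER"
        elif name not in journal:
            actions[name] = "DOWNLOAD"
        else:
            actions[name] = "KEEP"
    return actions

# Placeholders were never created, so the local folder is empty:
print(reconcile(journal={"a.pdf", "b.pdf"}, local_files=set(),
                server_files={"a.pdf", "b.pdf"}))
```

With an empty local folder and a populated journal, every server file is marked for deletion, which is exactly the mass-delete behavior reported here.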

Looks like this is the issue: https://central.owncloud.org/t/virtual-file-sync-no-longer-working-properly/29388/8

We will be doing more tests tomorrow to confirm.


The error message you posted is from a 2.6.3 log, which is also affected by https://owncloud.com/news/important-windows-client-issue-fix-available/. Please upgrade to 2.7.3.

Important Recovery Request

We are trying to recover some 380,000 files that were deleted in bulk due to the Windows desktop client issue, an impossible task to do through the Web interface. This issue has already caused us a tremendous amount of work.

For some users whose desktop client deleted a lot of files, I can't even access the removed files through the Web interface; the following error message is returned: _The directory is unavailable, please check the logs or contact the administrator._ I have, however, checked the trash bin folder on the server for these users: the folder and all deleted files are there. All the files also show up in the oc_files_trash table.

  1. Is there a command I can run using CLI to automatically restore all files that were deleted by a specific user?
  2. If not, please provide me with the actual PHP code that would execute the recovery sequence as if it were to be executed through the Web interface. I would create a loop around your code to go through the oc_files_trash table and restore files based on the auto_id or the user column.

Happened to me exactly as described by @netzimon, after at least one Windows client was automatically updated to 2.7.3, on two entirely separate networks with their own ownCloud instances. Would really appreciate a command or code snippet to bulk restore files, as this seems to be an impossible task right now.

Restoring from the Web front end is an impossible task. We have around 25,000 files. While restoring, the MySQL database is eating up my CPU (Xeon CPU E5-2620 v4). The VM already has all 8 cores.

Please provide a solution, like an occ restore command or something more powerful.

Our support and server team is working on a solution.


Thank you. Do you have an ETA?

Also, may I suggest providing a standalone code snippet or CLI script that could easily be uploaded and executed directly on the server? Given the emergency most of us are in due to this situation, having to upgrade the ownCloud core itself to a new version is not feasible right now.

Related issue in the owncloud/core repo: https://github.com/owncloud/core/issues/38208

We provided a first idea for mass restore here: https://github.com/owncloud/client/issues/8295#issuecomment-746734689

So is this an issue related to the client?

I am also experiencing this issue. Working through the web client is fine, but for anyone who syncs their files via the client software, all files shared with them are deleted upon sync.


That is correct. It affects the Windows desktop app when running under the latest version of Windows 10. All ownCloud client versions prior to 2.7.3 are affected. You need to upgrade to 2.7.3 and then restore the deleted files using the Web interface or the mass restore script mentioned earlier today.


Thanks for the information, netzimon. I will wait patiently for an update from the Owncloud team. Hopefully there will be an update soon. My company is not too happy at the moment.

Actually, I'm a bit confused. Five hours ago someone mentioned they were working on a resolution. So is the fix just to upgrade to 2.7.3 and then restore the files, or is this still being worked on by ownCloud?


Yes, the fix is to upgrade to 2.7.3 and then restore the files.
They were working on a mass restore script, which has now been released.

Noted. Thank you!

The issue with the mass delete was fixed on Friday: https://owncloud.com/news/important-windows-client-issue-fix-available/
If the issue was discovered during a sync and the sync was aborted, and the client was then updated, the new client might continue to move files to the trash.
So in case the sync was aborted, the sync_*.db needs to be removed.
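As a rough sketch of that cleanup step (assuming the journal is the hidden `.sync_*.db` file in the root of each synced folder, plus its SQLite `-wal`/`-shm` companions; stop the client first and verify the exact filenames on your machines before deleting anything):

```python
import glob
import os

def remove_sync_journal(sync_folder):
    """Delete the client's sync journal files so the next sync starts
    from a fresh discovery instead of replaying an aborted one."""
    removed = []
    # Hidden journal file plus the SQLite write-ahead-log companions.
    for pattern in (".sync_*.db", ".sync_*.db-wal", ".sync_*.db-shm"):
        for path in glob.glob(os.path.join(sync_folder, pattern)):
            os.remove(path)
            removed.append(path)
    return removed
```

On the next start, the client rebuilds the journal from a full remote/local discovery instead of continuing the aborted run.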

As mentioned in https://github.com/owncloud/client/issues/8300#issuecomment-746957855
we have a script that helps restore the files: https://github.com/owncloud/client/issues/8295#issuecomment-746734689 .
This is not the final script, and we will come up with a simpler solution.

As @TheOneRing mentioned, please update the client and use the script mentioned to restore the deleted data: https://github.com/owncloud/client/issues/8300#issuecomment-746957855.

Tomorrow I'll open a Git repository instead of just providing a ZIP file.

In fact the script does the following:

  1. Scan the trash bin.
  2. Select the data that was removed since this issue occurred.
  3. Restore each item.

Depending on the amount of data this might take a while. Killing/restarting the script does not harm your data and also keeps its state.
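In outline, each restore over the trash-bin WebDAV API is one PROPFIND to list items plus one MOVE per item, which is where the per-file overhead comes from. A minimal sketch of the request construction (the endpoint layout here is an assumption based on the remote.php/dav/trash-bin URLs discussed in this thread; see the script repository for the real calls):

```python
def trash_list_request(base_url, user):
    """PROPFIND that lists a user's trash bin items."""
    return ("PROPFIND", f"{base_url}/remote.php/dav/trash-bin/{user}/")

def trash_restore_request(base_url, user, item_id, target_name):
    """MOVE that restores one trash bin item. One HTTP round trip
    per file, which is why large trash bins take so long."""
    source = f"{base_url}/remote.php/dav/trash-bin/{user}/{item_id}"
    dest = f"{base_url}/remote.php/dav/trash-bin/{user}/restore/{target_name}"
    return ("MOVE", source, {"Destination": dest})
```

An HTTP client would issue these requests with the user's credentials; killing and restarting a loop over them is safe because each MOVE either completes or leaves the item in the trash.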

Git Repo is here: https://github.com/JanAckermann/owncloud-restore-trash

Using the automated script, each file restoration takes about 45 seconds, which in my case translates into running the script around the clock for 197 days to restore all files. Compare this with doing the restoration from the trash bin in the Web interface, where each restoration takes under 1 second. I can see a through-the-roof load on MySQL every time a new file is requested by the script.

Looking closer at the script, it seems to be doing a _move_ rather than an _undelete_. Is this because you are using DAV access, which I assume does not support an undelete command?

Will the next version of the script use a different approach, making the file restore faster?
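The 197-day figure is consistent with the roughly 380,000 files mentioned in the recovery request above; at 45 seconds per file the arithmetic is:

```python
files = 380_000         # files to restore (figure from the recovery request)
secs_per_file = 45      # observed time per scripted restore
days = files * secs_per_file / 86_400   # 86,400 seconds in a day
print(round(days, 1))   # → 197.9
```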


Thanks for the input. 45 seconds for one file is indeed too much; we are working on making it faster.

Please let me know if there is any solution in sight so that the restoration will be quicker. My users can't wait over 6 months to get their files back.

As far as we can see, there is an option for the user in the "Deleted files" UI: press the button to "Select all", then "Restore all". That should be faster, but has only been tested with around 1,000 files so far, and it will generate database load. Of course this might restore more than intended, but that is better than missing files.
The script will not be faster; ideas welcome.

The select-all option wouldn't work in my case because of the number of files that were removed. The user who deleted the fewest files has 66,000 files in the trash bin; the one who deleted the most has over 180,000 files in the trash bin (which doesn't even open, I'm guessing because of the number of files).

Given that clicking an individual file in the trash bin list executes a restore that is much faster (about 1 second per file) than the DAV approach in the recovery script published yesterday, is there a way to create a script that executes those same commands as a click in the trash bin list? If so, that script could be uploaded to the OC installation and executed through the CLI. By avoiding DAV and executing the code directly on the server, I assume the process would be as fast as clicking the restore button for each file in the trash bin.

The script could be fed either a user name (to recover all of that user's trash bin files) or a file ID. In my case I would prefer the file ID and would create a loop around the script to feed it each individual file ID to be recovered; I already have that information available. If the script is fed a user name, a second parameter could be a start date, so that it only recovers files deleted from that date onward.

On an ownCloud installation running version 10.0.10, I am getting an error when requesting the trash bin folder list using the restore script. The error message is:

Sabre\DAV\Exception\NotFound
File not found: trash-bin in 'root'

I tried a couple of users on the same installation (all of whom have files in the trash bin that show up when logging in through the Web interface), all with the same issue. Please advise what tweaking is needed.

Maybe 10.0.10 does not have this trash bin API yet. 10.6.0 was just released.

@netzimon
Sharing my own personal case here, as it seems to match yours:
You need to be able to browse to https:///remote.php/dav/trash-bin on your domain and get a page saying "This is the WebDAV interface. It can only be accessed by WebDAV clients such as the ownCloud desktop sync client."
== the same page you see simply browsing to https:///remote.php/dav

If you do not see this very same page, you are facing a server configuration issue. I had the same issue, which I solved by upgrading from ownCloud 10.2.1 to 10.5.0 and to PHP 7.2, while disabling an older PHP installation that was taking over some requests in my config:
sudo a2dismod php7.1

We are in a similar situation with hundreds of thousands of files.
The restore script is going to take months to run, and trying to access the trash produces an error:
_This directory is unavailable, please check the logs or contact the administrator_

That error is due to the number of files. Other users don't have an issue accessing the trash.

I came up with a solution that works pretty well and that recovers between 10,000 and 12,000 files per hour, though it requires making some edits to the undelete.php file and locking the new code to the IP address you are executing it from. You also need to be logged on with the same user on the Web interface as well as running a code snippet that feeds the undelete.php file with a list of files from the oc_files_trash table. It's not pretty and it needs human intervention/supervision, but it works and the recovery is very fast. I will try to clean up the code a little and then post the solution here.

would love to get the solution you came up with @netzimon .. TIA

So here is the solution we came up with on our end.

Please note that the code we came up with is not an official solution provided by ownCloud. It requires tampering with code in the ownCloud installation, which could cause unexpected and undesired behavior. If you do choose to use this idea, it is at your own risk, so make sure you back up everything first and, if possible, shut down access to the ownCloud installation to prevent other users from making changes.

I have only tested and applied this solution to accounts that can display the trash bin properly. It has also only been tested on ownCloud version 10.0.10. And again, it requires coding knowledge and changing code in the ownCloud installation, so it is not a secure and stable solution. My recommendation is to use only the official solution provided by ownCloud. This post and code are really meant as a community contribution toward a better, faster, and more stable way to recover files in bulk.

Important: I have noticed that some of the files I tried to recover using this method came back with an error, and I am unsure whether or not those files were removed from the system. So please back up everything first, so you can roll back or recover files manually from the backup if something goes awry on your particular ownCloud installation.

The idea is based on the "restore all" function in the trash bin, but instead of recovering all files at once, this code recovers files in bulk up to a limit you set yourself. There are 3 parts: 1. generate a list of files to recover; 2. trigger the recover function from the trash bin; 3. wait for the recovery process to finish and check the results.

First, I located the AJAX trash bin folder within the ownCloud installation at the relative path /apps/files_trashbin/ajax/ and made the addition below to the undelete.php file found in that folder, _after_ the line $files = $_POST['files']:

if ($_SERVER["REMOTE_ADDR"] == "X.Y.Z.Q") {
    $files = file_get_contents("/absolute/path/to/your/oc/installation/apps/files_trashbin/ajax/undelete_file.txt");
}

X.Y.Z.Q is of course your IPv4 address, and the path must be the absolute path to the AJAX folder.
Then I generate undelete_file.txt, which holds the list of files I have decided to restore. I named the generator script undelete_file.php; it contains this code:

<?php
// Build the list of trash bin entries to restore and write it to
// undelete_file.txt, which the patched undelete.php reads.
$link = mysqli_connect("yourhost", "dbusername", "dbpassword", "dbname");

// Most recently deleted files for one user, 800 per batch.
$query = "
SELECT id, timestamp FROM `oc_files_trash`
WHERE FROM_UNIXTIME(timestamp) LIKE \"2020-12-%\"
AND user=\"YYYYYZZZZZ\" ORDER BY auto_id DESC LIMIT 800
";
$result = mysqli_query($link, $query);
$nrows = mysqli_num_rows($result);

// Trash bin entries are addressed as "<file id>.d<deletion timestamp>".
$outp = '[';
for ($i = 0; $i < $nrows; $i++) {
    $row = mysqli_fetch_array($result, MYSQLI_BOTH);
    $outp .= '"' . utf8_encode($row["id"]) . '.d' . $row["timestamp"] . '"';
    if (($i + 1) < $nrows) {
        $outp .= ",";
    }
}
$outp .= ']';

file_put_contents("/absolute/path/to/your/oc/installation/apps/files_trashbin/ajax/undelete_file.txt", $outp);
echo("Done\n");
?>

The undelete_file.php script can be placed anywhere on the server. I always execute it using the PHP CLI to avoid undesired Web access. Obviously, the script needs to be populated with real database credentials, as well as the name (in the SQL query) of the user whose trash bin you are trying to restore.
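For illustration, the list format the script writes can be sketched in Python (the rows here are hypothetical; the `"<id>.d<timestamp>"` naming follows the oc_files_trash convention used above):

```python
import json

def build_undelete_list(rows):
    """Build the JSON array the patched undelete.php reads,
    from (id, timestamp) pairs as returned by oc_files_trash."""
    # Each trash bin entry is addressed as "<file id>.d<deletion timestamp>".
    return json.dumps([f"{file_id}.d{ts}" for file_id, ts in rows])

# Hypothetical rows:
rows = [("report.pdf", 1607900000), ("invoice.pdf", 1607900050)]
print(build_undelete_list(rows))
# → ["report.pdf.d1607900000", "invoice.pdf.d1607900050"]
```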

Next, after executing undelete_file.php via the PHP CLI, the undelete_file.txt file exists and undelete.php is ready to be fed the list of files from it. For that to work, I logged on through the Web interface as the same user mentioned in the SQL query (in my example the user name is YYYYYZZZZZ). I then click the "recover" link on any of the files in the trash bin, and the recovery process begins for the 800 files listed in undelete_file.txt.

Each bulk recovery of 800 files took less than 4 minutes to complete in my case. When there were errors (i.e. files that could not be restored), they were displayed in a popup in the Web interface; if no errors occurred, nothing was returned at all. So, to make sure the process had completed before executing a new batch, I always double-checked the number of files (before and after the process) in the oc_files_trash table, via either the MySQL CLI or phpMyAdmin. The command is simple:

SELECT COUNT(*) FROM `oc_files_trash` WHERE FROM_UNIXTIME(timestamp) LIKE "2020-12-%" AND user="YYYYYZZZZZ";
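The before/after comparison amounts to simple bookkeeping; a sketch with hypothetical counts:

```python
def batch_progress(count_before, count_after, batch_size=800):
    """Given trash bin row counts before/after a batch, report progress."""
    restored = count_before - count_after
    # A full batch restores batch_size files; fewer means errors
    # occurred or this was the final, partial batch.
    complete = restored == min(batch_size, count_before)
    # Batches still needed at the current batch size (ceiling division).
    remaining_batches = -(-count_after // batch_size)
    return restored, complete, remaining_batches

print(batch_progress(66000, 65200))  # → (800, True, 82)
```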

This is very basic PHP code, and of course many things can be improved or tested, such as raising the number of files per batch above 800, or restoring the trash bin of a different user than the one logged on through the Web interface.

Again, this is _not_ a pretty solution and it may not work properly, so if you decide to use it, it is at your own risk. It has worked for us and we are moving rapidly through recovering files, but that doesn't mean it will work for you. My goal with this post is to have the code improved, tested, and hopefully some day picked up by ownCloud to be packaged as an official standalone solution.
