Expected behaviour:
the changed file should be uploaded; no additional files should be generated locally
Actual behaviour:
locally: the new version gets renamed to ..._conflict... and the original version of the file gets restored
server: nothing changes, an error about the file being locked
Operating system:
debian testing
Web server:
apache 2.4.23-4
Database:
mysql Ver 14.14 Distrib 5.6.30, for debian-linux-gnu (x86_64) using EditLine wrapper
PHP version:
PHP 7.0.10-1 (cli) ( NTS )
Nextcloud version: (see Nextcloud admin page)
Nextcloud 11.0.0 (stable)
Updated from an older Nextcloud/ownCloud or fresh install:
updated from RC1 (problems since RC1)
Where did you install Nextcloud from:
from tar then updated via updater
Signing status:
No errors have been found.
List of activated apps:
Enabled:
The content of config/config.php:
{
"system": {
"instanceid": "oc563d2ba690",
"passwordsalt": "REMOVED SENSITIVE VALUE",
"trusted_domains": [
"domain.tld",
"server.domain.tld",
"1.2.3.4",
"5.6.7.8"
],
"datadirectory": "\/var\/www\/owncloud_data",
"overwritewebroot": "\/owncloud",
"overwrite.cli.url": "\/owncloud",
"dbtype": "mysql",
"dbname": "owncloud",
"dbuser": "REMOVED SENSITIVE VALUE",
"dbpassword": "REMOVED SENSITIVE VALUE",
"dbhost": "localhost",
"dbtableprefix": "oc_",
"version": "11.0.0.10",
"installed": true,
"theme": "",
"forcessl": true,
"mail_from_address": "root",
"mail_smtpmode": "smtp",
"mail_domain": "domain.tld",
"loglevel": 2,
"mail_smtphost": "127.0.0.1",
"mail_smtpname": "REMOVED SENSITIVE VALUE",
"mail_smtppassword": "REMOVED SENSITIVE VALUE",
"mail_smtpport": "25",
"log_rotate_size": "100 MiB",
"maintenance": false,
"secret": "REMOVED SENSITIVE VALUE",
"preview_libreoffice_path": "\/usr\/bin\/libreoffice",
"memcache.local": "\\OC\\Memcache\\APCu",
"trashbin_retention_obligation": "auto",
"updatechecker": false,
"htaccess.RewriteBase": "\/owncloud",
"updater.release.channel": "stable"
}
}
Are you using external storage, if yes which one: local/smb/sftp/...
3 local folders
Are you using encryption: yes/no
no
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
no
Browser:
Firefox
Operating system:
devuan testing
Nextcloud log
{"reqId":"0OC6eWHD0eY6TVAAxdv0","remoteAddr":"","app":"PHP","message":"file_exists(): connect() failed: No route to host at \/var\/www\/nextcloud\/apps\/files_external\/lib\/Lib\/Storage\/StreamWrapper.php#74","level":3,"time":"2016-12-23T00:16:22+00:00","method":"--","url":"\/owncloud\/cron.php","user":"--","version":"11.0.0.6"}
{"reqId":"RQhT\/YpPg0oD0mVRTts8","remoteAddr":"::1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"foldername\\\/subfolder\\\/filename.ods\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/nextcloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1106): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #31)\\n#1 \\\/var\\\/www\\\/nextcloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(513): Sabre\\\\DAV\\\\Server->updateFile('foldername\\\/subfolder\\\/G...', Resource id #31, NULL)\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/nextcloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/nextcloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(479): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#5 \\\/var\\\/www\\\/nextcloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/nextcloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(60): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/nextcloud\\\/remote.php(165): require_once('\\\/var\\\/www\\\/nextcl...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/nextcloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":175,\"User\":\"florz\"}","level":4,"time":"2016-12-23T00:16:24+00:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/foldername\/subfolder\/filename.ods","user":"florz","version":"11.0.0.10"}
{"reqId":"0OC6eWHD0eY6TVAAxdv0","remoteAddr":"","app":"PHP","message":"file_exists(): connect() failed: No route to host at \/var\/www\/nextcloud\/apps\/files_external\/lib\/Lib\/Storage\/StreamWrapper.php#74","level":3,"time":"2016-12-23T00:16:25+00:00","method":"--","url":"\/owncloud\/cron.php","user":"--","version":"11.0.0.6"}
{"reqId":"0OC6eWHD0eY6TVAAxdv0","remoteAddr":"","app":"PHP","message":"file_exists(): connect() failed: No route to host at \/var\/www\/nextcloud\/apps\/files_external\/lib\/Lib\/Storage\/StreamWrapper.php#74","level":3,"time":"2016-12-23T00:16:28+00:00","method":"--","url":"\/owncloud\/cron.php","user":"--","version":"11.0.0.6"}
{"reqId":"0OC6eWHD0eY6TVAAxdv0","remoteAddr":"","app":"OC\\Files\\Cache\\Scanner","message":"!!! Path '' is not accessible or present !!!","level":0,"time":"2016-12-23T00:16:28+00:00","method":"--","url":"\/owncloud\/cron.php","user":"--","version":"11.0.0.6"}
@MorrisJobke: as talked about in IRC
I may have fixed the problem by rebooting my server. I will test it again tomorrow.
Interesting. So this seems to be a caching issue. @rullzer @LukasReschke Have you ever seen something like this?
Reading the log, it seems like your cron job ran at the same time and might have held a lock on the file. But I would guess that should not cause a conflict file, just delay the syncing?
file_exists(): connect() failed: No route to host at \/var\/www\/nextcloud\/apps\/files_external\/lib\/Lib\/Storage\/StreamWrapper.php
Not sure, but this might be the real issue here.
@icewind1991 I believe this is related to another issue and the fix will make its way into Nextcloud 11.0.2
Yupp, seems rebooting didn't fix it. I saw the (same?) problem again yesterday on my notebook. I used a program that writes a database file I want to sync. After I finished my work, the database ended up as a conflict file and an old version was restored from my cloud. Moving the conflict file over the restored version fixed it, and the file was uploaded without further conflicts. (There were several other files that had conflicts. [mostly logging -> simply deleted those])
I have the same issue with 11.0.2 and similar setup (PHP 7, no encryption, no external storage).
My file is pretty big, 4.1GiB.
I can trigger this in 100% of cases when syncing backups. While backups are being created (7z archives), the Nextcloud client tries to upload the file immediately, while the archive is still being written and updated on disk. After some time the client fails to upload, and each time it tries to resume the upload it fails again.
The only workaround I found in this situation is to move the files somewhere else and then move them back. In that case Nextcloud uploads them from the start and everything works fine. Another workaround is to pause synchronization while creating the archives and resume afterwards, which avoids the issue altogether.
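The two workarounds above boil down to one pattern: never let the client see a half-written file. A minimal sketch of staging the archive outside the synced tree and moving it in only once complete (the directories here are `mktemp` stand-ins for your real data and sync folders, and `tar` stands in for the reporter's 7z):

```shell
# Stand-in directories: in practice SRC is the data being backed up,
# SYNCED is your local Nextcloud sync folder, STAGE is anywhere the
# sync client does not watch (same filesystem as SYNCED).
SRC=$(mktemp -d)
SYNCED=$(mktemp -d)
STAGE=$(mktemp -d)

echo "example data" > "$SRC/db.sqlite"

# Build the archive in the staging directory, so the sync client
# never observes a partially written file.
tar -cf "$STAGE/backup.tar" -C "$SRC" .

# mv within one filesystem is a rename(2): the finished archive
# appears in the synced folder in a single atomic step.
mv "$STAGE/backup.tar" "$SYNCED/backup.tar"
```

If STAGE and SYNCED are on different filesystems, `mv` degrades to copy-and-delete and loses the atomicity, so keep both on the same device.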
This is then more a client issue.
cc @rullzer
Having the same problem here.
Can confirm, this problem has plagued me for months. It often happens with partially sync'd git repos that I delete locally before or just after the sync completes; for a few weeks after that, this message plagues me every 10 seconds or so, and it's highly annoying.
Using NextCloud v12.0.3
It could be similar to the causes of #7009. The fix for that issue also solves this one here.
I got a Nextcloud client upgrade relatively recently (2.3.3 right now) and it seems to work properly now. Can anyone confirm this too?
Thanks for the feedback. In more recent versions (13.0.4 and 12.0.9) we also drastically reduced the number of lock statements, which should avoid most of the problems in here. Thus I will close this. If there is still something that breaks, have a look at #9305 and its linked issues.
Another workaround to make those errors even less likely: use Redis as the locking backend.
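For reference, switching the locking backend to Redis is configured in config/config.php. A sketch, assuming a Redis server on localhost:6379 (host and port are assumptions; adjust to your setup):

```php
// config/config.php (fragment) — use Redis for transactional file locking.
// 'localhost' and 6379 are placeholder values for a local Redis instance.
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'localhost',
  'port' => 6379,
],
```

This moves the lock bookkeeping out of the database (the oc_file_locks table) into memory, which also reduces the DB I/O discussed further down.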
Just now I moved a big folder locally with the sync client active and connected. This resulted in the whole folder being locked; I had to manually delete entries from the file_locks database table.
Nextcloud 13.0.1, Desktop Sync Client 2.3.3
This is happening to me too.
I still had this problem with all versions until recently. I found out that my system resources were not sufficient: for me the problem occurred under high load. File locking is a big contributor to I/O, especially if your database runs on the same machine. Setting up separate in-memory caching seemed overkill for home use, so the first thing I did was increase caching on my MySQL database:
in _/etc/mysql/conf.d/mysql.cnf_ add or edit the block:
[mysqld]
innodb_buffer_pool_size=1073741824
innodb_io_capacity=4000
According to another thread, _innodb_buffer_pool_size_ should be at least as big as your DB; for me 1 GB seems fine. After that I increased my RAM to 4 GB to keep swapping to a minimum. For debugging this I found the glances tool most helpful.
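To check that rule of thumb, you can ask MySQL how big each schema actually is and compare that to the buffer pool size. A sketch (assumes a mysql client with working credentials; not runnable without a live server):

```shell
# Sum data + index size per schema, in MiB, from information_schema.
# Compare the Nextcloud schema's size_mb to your innodb_buffer_pool_size.
mysql -e "SELECT table_schema,
                 ROUND(SUM(data_length + index_length) / 1048576) AS size_mb
          FROM information_schema.tables
          GROUP BY table_schema;"
```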
The next thing I found was that having your DB inside a VM image file (KVM) is a real bottleneck. Better to put your data and your DB on a separate block device and mount that directly.
As a last problem, I found that I still had my php-fpm settings at their default values. This means the server will only handle a limited number of simultaneous requests, which might be another reason for unlock requests to time out. It also seems to lead to frequent sync conflicts, even if you only change a file on the same machine twice and the sync only partially goes through.
I am currently happy with following settings in _/etc/php/7.2/fpm/pool.d/www.conf_:
pm = dynamic
pm.max_children = 120
pm.start_servers = 12
pm.min_spare_servers = 6
pm.max_spare_servers = 18
pm.process_idle_timeout = 60s
@Florz Thanks for this nice summary. We should maybe invest a little into handling overloaded systems.
@florz Would you mind putting this in the documentation, with the disclaimer that it is an example but may help others find potential bottlenecks?
@MorrisJobke pull request nextcloud/documentation#891 is on its way