Hi there,
I have problems uploading some files via the Windows 7 client (version 2.0.2, build 5569) connected to an ownCloud 8.2 stable server.
The files exist on the client, not on the server. The log file on the client says:
06.11.2015 23:07:35 folder1/xxx.MDB F:\Cloud1 Error downloading http://xxx/owncloud/remote.php/webdav/folder1/xxx.MDB - server replied: Locked ("folder1/xxx.MDB" is locked)4,3 MB
1)
I wonder why the client has problems downloading - it should be trying to upload.
2)
At first I thought the file on the client might be in use by another program. But it is the server that says the file is locked, not the client.
Can anyone help me please?
Regards,
klausguenter
cc @icewind1991 for the locking topic
I believe the error message in the client says "download" even when uploading; that's a separate issue.
The question here is why the file is locked in the first place. Are there other users accessing that folder?
I suspect a stray lock.
It's possible that the files to upload were in use by another program when the sync-client tried to upload them for the first time.
When the problem occurred I restarted the client PC to make sure these files were no longer in use by another program, but they still could not be uploaded.
I have exactly the same problem. It suddenly occurred for one file, and for the first time. I'm the only one syncing to this directory (3 PCs, 2 mobile devices). I cannot overwrite or delete it.
Came here from https://forum.owncloud.org/viewtopic.php?t=31270&p=100790 and tried this procedure:
Operating system:
Raspbian 8
Web server:
Nginx
Database:
MySQL
PHP version:
5.6.14
ownCloud version: (see ownCloud admin page)
8.2.0.12
List of activated apps:
The content of config/config.php:
"system": {
"instanceid": "oc788abd2781",
"passwordsalt": "***REMOVED SENSITIVE VALUE***",
"datadirectory": "\/var\/ocdata",
"dbtype": "mysql",
"version": "8.2.0.12",
"installed": true,
"config_is_read_only": false,
"forcessl": true,
"loglevel": 2,
"theme": "",
"maintenance": false,
"trashbin_retention_obligation": "30, auto",
"trusted_domains": [
"***REMOVED SENSITIVE VALUE***"
],
"mail_smtpmode": "php",
"dbname": "owncloud",
"dbhost": "localhost",
"dbuser": "***REMOVED SENSITIVE VALUE***",
"dbpassword": "***REMOVED SENSITIVE VALUE***",
"secret": "***REMOVED SENSITIVE VALUE***",
"forceSSLforSubdomains": true,
"memcache.local": "\\OC\\Memcache\\APCu"
}
Error message from logfile:
{"reqId":"5h4sJPhlw0mjlWNp5wdl","remoteAddr":"94.87.129.34","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"safe.kdbx\\\" is locked\",\"Exception\":\"OC\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(179): OC\\\\Connector\\\\Sabre\\\\File->delete()\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(287): Sabre\\\\DAV\\\\Tree->delete('safe.kdbx')\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpDelete(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(469): Sabre\\\\Event\\\\EventEmitter->emit('method:DELETE', Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/apps\\\/files\\\/appinfo\\\/remote.php(55): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/owncloud\\\/remote.php(137): require_once('\\\/var\\\/www\\\/ownclo...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/lib\\\/private\\\/connector\\\/sabre\\\/file.php\",\"Line\":300}","level":4,"time":"2015-11-09T22:34:35+00:00","method":"DELETE","url":"\/remote.php\/webdav\/safe.kdbx"}
Server configuration
Operating system:
Debian 7 stable
Web server:
Apache 2.2.22
Database:
MySQL 5.5.46
PHP version:
5.4.45
ownCloud version:
8.2.0.12 (stable)
List of activated apps:
activity: 2.1.3
deleted files: 0.7.0
first run wizard: 1.1
Gallery: 14.2.0
Mail Template Editor: 0.1
Notifications: 0.1.0
Provisioning API: 0.3.0
Share Files: 0.7.0
Text Editor: 2.0
Updater: 0.6
Versions: 1.1.0
Video Viewer: 0.1.3
The content of config/config.php:
$CONFIG = array (
'instanceid' => '_',
'passwordsalt' => '_',
'secret' => '_',
'trusted_domains' =>
array (
0 => '_',
1 => '_',
),
'datadirectory' => '_',
'overwrite.cli.url' => '_',
'dbtype' => 'mysql',
'version' => '8.2.0.12',
'dbname' => 'owncloud1',
'dbhost' => 'localhost',
'dbtableprefix' => 'oc_',
'dbuser' => '_',
'dbpassword' => '*',
'logtimezone' => 'UTC',
'installed' => true,
'filelocking.enabled' => 'true',
'memcache.locking' => '\OC\Memcache\Redis',
'memcache.local' => '\OC\Memcache\Redis',
'redis' =>
array (
'host' => 'localhost',
'port' => 6379,
'timeout' => 0,
),
);
Do you also get file locked errors when trying to upload through the web interface?
Yes
I had the same problem. My workaround:
Enable maintenance mode
Delete every entry in the "oc_file_locks" table in the database
Disable maintenance mode
Dirty, but it solved the problem ... for now
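For anyone wanting to script the same workaround, a minimal sketch (the paths, web server user, and database credentials are assumptions; adjust them to your installation):
# put the server into maintenance mode so no new locks are taken meanwhile
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --on
# remove every row from the lock table (default "oc_" table prefix assumed)
mysql -u owncloud -p owncloud -e "DELETE FROM oc_file_locks;"
# back to normal operation
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --off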
I've found some additional files which cannot be deleted because they are locked. If you need additional debug data, let me know...
Are there any errors in the logs _before_ the locking error shows up?
I see no other errors before the locking error. It occurs just at the moment I try to modify or delete a file.
All these "problem" files were present before the update to ownCloud 8.2. Maybe the error came with that version.
Here is my owncloud.log
https://gist.github.com/unclejamal3000/2aba05cd32cc53771256
I do have the same problem with a fresh installation of 8.2
I did not have this problem on older version on the same server.
This is happening to me (on both 8.2 and 8.2.1, with MySQL), particularly (I think) since I added Dropbox external storage to one of my users (another user already had Dropbox set up previously with no problems).
Possibly of note: I just tried cleaning things up, by turning on maintenance mode, deleting everything from oc_file_locks, then running occ files:scan --all. After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected? I assumed there would only be locks if something was still using the files (which no clients would be, since it's in maintenance mode, and since the files:scan process finished, it wouldn't still be holding onto locks, would it?).
After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected?
For performance reasons (since 8.2.1) rows are not cleaned up directly but re-used in further requests
Fair enough, so that's probably not related to the issue, then. For what it's worth, I've removed the Dropbox external storage from this particular user, and haven't had any file locking problems since then. That may be coincidence, of course, or just that the particular files being synced with the Dropbox folder were the ones likely to cause the locking issue.
All of our S3 files are locked. We cannot delete or rename any files that were there prior to the 8.2 update.
Ugh, is this fixable? We have thousands of files on S3.
Same on OC v8.2.1 with TFL and memcaching via Redis, as recommended. Anyway, there are a few entries in oc_file_locks (although when using Redis there shouldn't be any locks there?). No idea how to fix this. Only one specific file is affected, driving me and the never-ending, logfile-filling desktop clients crazy.
Thankful for every tip or workaround! No idea how to "unlock" the file...
@icewind1991 are you able to reproduce this issue?
For DB-based locking it might be possible to remove the locks by cleaning the "oc_file_locks" table.
If you're using redis exclusively for ownCloud, you might be able to clear it using the command flushall using redis-cli.
Are you guys using php-fpm? I suspect that if the PHP process gets killed due to timeouts, the locks might not get cleared properly. However, I thought that locks now have a TTL, @icewind1991?
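For the redis-based setups mentioned above, a sketch of that cleanup (host and port taken from the config posted earlier; flushall wipes every key, so only run it if redis serves nothing but ownCloud):
redis-cli -h localhost -p 6379 flushall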
Yes, php-fpm is in the game too. @PVince81 perfect! That was what I was looking for (at http://redis.io/commands). For the moment, syncing works fine again.
Do you know the redis-cli command for listing all keys/locked files too?
And I still don't get why oc_file_locks has entries although I'm using redis...
I've been experiencing the same issue.
Operating system: Ubuntu 14.04.3 LTS
Web server: Apache 2.4.7
Database: MySQL 5.5.46
PHP version: 5.5.9 (running as Apache Module)
ownCloud version: 8.2.1-1.1
Memcache: APCu 4.0.7
After entering maintenance mode, I saw that the oc_file_locks table has lots of entries with lock > 0 (even > 10) and about 150 entries with a future ttl value.
Solved by deleting all rows and leaving maintenance mode.
Same issue here.
all-inkl.com shared hosting
PHP 5.6.13
MySQL 5.6.27
ownCloud 8.2.1 stable
Flushing oc_file_locks resolves all issues.
I was hit by this bug too. My system:
PHP 5.6.14
MariaDB 10.0.21
Nginx 1.9.5 (thus using php-fpm)
FreeBSD 10.2-RELEASE-p8
OwnCloud 8.2.1 stable
The flushing of oc_file_locks indeed seems to fix this issue. So I wrote a little script to remove all the stale locks from the file_locks table:
#!/usr/bin/env bash
##########
# CONFIG #
##########
# CentOS 6: /usr/bin/mysql
# FreeBSD: /usr/local/bin/mysql
mysqlbin='/usr/local/bin/mysql'
# The location where OwnCloud is installed
ownclouddir='/var/www/owncloud'
#################
# ACTUAL SCRIPT #
#################
dbhost=$(grep dbhost "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbuser=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbpass=$(grep dbpassword "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbname=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbprefix=$(grep dbtableprefix "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
"${mysqlbin}" --silent --host="${dbhost}" --user="${dbuser}" --password="${dbpass}" --execute="DELETE FROM ${dbprefix}file_locks WHERE ttl < UNIX_TIMESTAMP();" "${dbname}"
Just configure where the mysql command can be found (hint: which mysql will tell you) and where ownCloud itself is installed.
The script needs this location in order to find the config.php inside your ownCloud install. It extracts the needed database information from it and uses that to connect to MySQL. This has the advantage that when you change the password of the ownCloud MySQL user, this script automatically uses the new information. And it saves you from having another file on your filesystem containing your password.
You don't need to edit anything below the "ACTUAL SCRIPT" comment.
Once it has a connection to MySQL, it removes all locks from the database that have already expired. It doesn't remove all locks, as suggested elsewhere in this issue, because there can be valid locks in the database that still extend into the future. This script leaves those alone, to prevent bad stuff from happening.
And of course you can run this script as a cronjob every night, so you don't have to think about these stale locks anymore.
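For example, a nightly crontab entry for that could look like this (the script path is an assumption):
# run the stale-lock cleanup every night at 03:00
0 3 * * * /usr/local/bin/remove_stale_owncloud_locks.sh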
Hopefully this workaround script is useful for someone else besides just me :)
Hi, recently I had the same problem (using the database as the locking system).
The file_locks table was full of stray locks (>10k). Most rows had the "lock" field set to 1, some hundreds to 2, and so on.
As I read in the post by @PVince81 here, the "ttl" was introduced for removing old or stray locks?
But...
The "ttl" of most of the entries in my table was more than 12 hours in the past.
So the locks should have expired, right?
Well, I tested the expiry mechanism and it does not seem to work as expected.
In the last case I would expect the file to be renamed successfully. But the file lock is respected although it has expired.
Looking into the code of the DBLockingProvider, I cannot find anything that checks the ttl of the locks - except the method cleanEmptyLocks().
But this method only removes expired entries having "lock"=0.
So I wonder whether this is the only purpose of the ttl: only to clean up old, fully released locks?
If the ttl is not checked anywhere else, this might be the cause of the bug.
In any case, it seems useful to introduce a timestamp like the ttl that is checked whenever a lock is about to be acquired - for example, let's call this timestamp "stray_timeout".
Well, I hope these thoughts are not total nonsense and may help ;-)
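To check this on your own instance, a query along these lines lists locks that are expired but still held (a sketch assuming the default "oc_" table prefix; lock and key are reserved words in MySQL, hence the backticks):
SELECT `key`, `lock`, ttl, FROM_UNIXTIME(ttl) AS expired_at
FROM oc_file_locks
WHERE `lock` != 0 AND ttl < UNIX_TIMESTAMP();
If the ttl were checked on acquisition, the rows returned here should never block a rename.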
ownCloud version: 8.2.1 (stable)
Operating system: Raspbian 8
Web server: Nginx
Database: MySQL 5.5.44
PHP version: 5.6.14 - Using PHP-FPM
The "ttl" of most of the entries in my table was more than 12 hours old.
So the locks should have been expired, right?
@icewind1991 can you have a look at why the expiration is not working?
Setting to 8.2.2 because stray locks are nasty
CC @cmonteroluque
I have the same problem: when trying to create a directory via WebDAV, I get the error 423 File Locked.
P.S. I'm using external storage
Partial fix is here https://github.com/owncloud/core/pull/21072 (only for the db locking backend)
And here for redis-based locking
Fix for DB is here https://github.com/owncloud/core/pull/21072 and redis here https://github.com/owncloud/core/pull/21073
Will be in 8.2.2 and 9.0.
I don't know if this is the only way stray locks get generated. A user of mine came to me 2 months ago, which is when I noticed the stray locks, and I only learned about the occ files:scan command 2 weeks ago (in one of my attempts to "fix" this problem).
In other words: if ownCloud itself doesn't run that command, then this issue is wider than just the occ files:scan command.
Stray locks could also happen if a PHP timeout happens or if the connection is closed/lost. I believe that some environments like php5-fpm will automatically kill the PHP process if the connection is lost, while others (mod_php?) will leave it running.
This is why the TTL is important: it seems it is not possible to catch a killed PHP process and run cleanup code at that moment.
I just had this happen, as well. Mine is a relatively new installation. Could I just delete the database and start over? If so, how would I do that?
@shorinjin I would just run the query that is stated in my workaround script. This solves the problem, without having to start over again.
But: this is an issue tracker, not a support forum. For support related questions I would suggest you open a topic on the forums (https://forum.owncloud.org).
Stray locks should not happen any more in 8.2.2 (which was released recently)
Deleting the contents of oc_file_locks table should be enough (do this in maintenance mode just to be sure)
Confirming - no more stray locks with 8.2.2. Thanks for the fix!!
@pdesanex: I get the file locked problem now that I updated to 8.2.2. Is there a fix or workaround, or do I have to wait for 9.0?
@stormsh try clearing your oc_file_locks table. Maybe you had stray locks from before the update.
@PVince81 Thanks for the quick reply. That worked. Although the stray locks never occurred for me before the update to 8.2.2.
I had the same problem with ownCloud 9, upgraded from 8.2.2. Only noticed it today. For now I solved it with TRUNCATE oc_file_locks.
@icewind1991 maybe we need a repair step to clear stray locks at update time, just in case?
@PVince81 I think that would be good! I only just started using ownCloud; the first version I installed was 9.0.1, and after upgrading to 9.0.2 I'm having this problem.
Raised https://github.com/owncloud/core/issues/24494 for a repair step
Seeing this issue with 9.0.2 as well.
Does not seem fixed.
Possibly related: https://github.com/owncloud/core/issues/24507
Would be good if you could add more info about your setup there because so far this is not reproducible. It could be a very specific use case (sharing/ext storage/other) that triggers a specific code path where the locks aren't cleared.
Could also be timeouts.
I'm getting 21034 lock records during one night.
The ttl is over, but I do think the records should be deleted from the database, otherwise this thing will just blow up.
It's just a basic setup.
Config is:
Ubuntu 16.04 LTS
PHP 7.0.4-7ubuntu2 (cli) ( NTS )
No external storage
Apps:
Enabled:
Disabled:
Note that the lock cleanup is done in a background job, so cron needs to be configured
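For reference, a typical system-cron setup looks like this (paths and the web server user are assumptions; adjust them to your distribution):
# edit the web user's crontab ...
crontab -u www-data -e
# ... and add a line that runs ownCloud's cron.php every 15 minutes
*/15 * * * * /usr/bin/php -f /var/www/owncloud/cron.php > /dev/null 2>&1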
cron is configured and runs every 15 minutes.
Why is this closed when the problem is still there?
@simsala do any of the expired locks have a lock value different from 0?
Also, can you check whether OCA\Files\BackgroundJob\CleanupFileLocks exists in oc_jobs?
There is this database entry in oc_jobs:
142883 OCA\Files\BackgroundJob\CleanupFileLocks null 1463062502
And yes, there are some expired lock records with values other than zero:
some (around 10) with 1, and one with -1.
@simsala so it means the TTL logic didn't work. @icewind1991 any idea?
@simsala assuming you're running 8.2.5 already?
I am running 9.0.2.
Currently the lock db has 11000 records.
Mostly 0. Seems to me they never get removed.
I am running 9.0.2.
"Good" to know I am not the only one facing this or a similar problem:
https://github.com/owncloud/core/issues/25232
(If #25232 is redundant, please delete / close.)
I have the same issue on a fresh install - two files can't be uploaded, with a FileLocked exception in the log.
Running a 9.0.1 server; the client is 2.1.1.
Cleaned the oc_file_locks table with no effect.
In fact I did the clean install because I was facing this problem on 8.0.3, so I deleted the install + DB (it's not a large one and I'm the only user...) and did a fresh install -- and ran into the same issue again on the same two files.
@zedug can you tell us more about these two files? Are they shared files? Received shares? Shared directly or through a folder?
No, they are not shared files. I use ownCloud to share files between different machines (at home and at work), but I'm the only user on all machines, using the ownCloud Windows client.
There's nothing specific about the files themselves - for example, one is a .pdf file, and there's a ton of other PDFs that synchronize with no issues.
Hi everybody,
same problems here. I updated one of my ownCloud instances last Sunday from 8.2.5 to 9.0.4 (physical Ubuntu 14.04 server, MySQL database). Everything seemed to go fine.
Now the Windows sync client (2.2.2) is showing these locked messages for all sorts of file types (pdf, xlsx, docx, etc.); I can't really reproduce the error.
After truncating the oc_file_locks table (~17000 rows) everything runs without error at the moment.
Two days after truncating, the oc_file_locks table is filled with about 2000 rows; 3 files cannot be synced because of the locking issue.
@Cybertinus re: https://github.com/owncloud/core/issues/20380#issuecomment-162300963
Your script works like a charm; there is one mistake in the code though:
The line
dbuser=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
should read
dbuser=$(grep dbuser "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
Cheers
the oc_file_locks table is filled with about 2000 rows
2000 rows are not a problem. The problem is the rows where the "lock" value is non-zero.
You might want to check the "ttl" value for these rows; you can compute the expiration time with this command:
date -d @timestamp, replacing timestamp with the value from the ttl column.
For example:
date -d @1465553223
This gives you the time after which the lock is supposed to expire. So if the files still can't be uploaded after that time, then there's a bug in the expiration.
But thinking of it, maybe it does expire properly and something else keeps re-creating the stray lock.
It would be good to observe the behavior of locks over the span of several hours by running select * from oc_file_locks where `lock` != 0;. The goal is to find out whether the "ttl" value changes after expiration for the same "key". The unique key will always point to the same file.
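To automate that observation, a small loop such as the following could take a snapshot of the held locks every 10 minutes (a sketch; database name and credentials are placeholders):
#!/usr/bin/env bash
# append a timestamped snapshot of all held locks every 10 minutes
while true; do
    date
    mysql -u owncloud -p'secret' owncloud \
        -e 'SELECT `key`, `lock`, ttl FROM oc_file_locks WHERE `lock` != 0;'
    sleep 600
done >> /tmp/lock-observation.log
Comparing snapshots then shows whether a given "key" keeps getting a fresh "ttl" after it should have expired.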
I have a file that I uploaded via browser to my ownCloud, shared it with another user, and now I cannot delete it. I found something in the logs, as shown below:
{"reqId":"V7f74ZT7smAAAGi06@IAAAAS","remoteAddr":"xxx.xxx.xxx.xxx","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"filename\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/home\\\/xxxxx\\\/public_html\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(179): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->delete()\\n#1 \\\/home\\\/xxxx\\\/public_html\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(285): Sabre\\\\DAV\\\\Tree->delete('filename')\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpDelete(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/home\\\/xxxx\\\/public_html\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/home\\\/xxxx\\\/public_html\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:DELETE', Array)\\n#5 \\\/home\\\/xxxx\\\/public_html\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/home\\\/xxxx\\\/public_html\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(55): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/home\\\/xxxx\\\/public_html\\\/remote.php(138): require_once('\\\/home\\\/xxxx\\\/...')\\n#8 {main}\",\"File\":\"\\\/home\\\/xxxx\\\/public_html\\\/apps\\\/dav\\\/lib\\\/connector\\\/sabre\\\/file.php\",\"Line\":342,\"User\":\"Username\"}","level":4,"time":"2016-08-20T06:42:41+00:00","method":"DELETE","url":"\/remote.php\/webdav\/filename","user":"xxxxxx"}
@typorian were you able to delete it after one hour? (the lock expiration)
@PVince81
It has been a couple of days and I still have not been able to delete it. I'm not really sure where to go from here; maybe I'll ask to have ownCloud restarted.
@typorian set owncloud to maintenance mode and have the oc_file_locks table cleared, then set back to normal mode. This would clear any stray locks.
But the thing that is strange is that you seem to be having a lock that doesn't go away. Either it doesn't go away on its own (TTL logic bug) or there is some other process that re-sets the lock on this file over and over again.
Has anyone here been able to test https://github.com/owncloud/core/issues/20380#issuecomment-238156998 in their environment?
@PVince81 from what I see I won't be able to do that, as I don't have shell access to the server; I just have administration access within ownCloud itself, as I rent the storage from a company. Or am I missing something?
@typorian hmm, indeed. In that situation you likely won't be able to do that.
Is this file in a shared folder? Are other people accessing it / downloading it?
@PVince81 it is a file that was shared, in a folder that was not shared. I removed the share to no avail. I tried deleting it with an account that it was shared to, but that did not work either. As far as I know, no one else should be able to access it at this time.
@typorian are you able to rename, delete, or move the file? I guess not.
@PVince81 No, neither rename nor delete. I can download it though, so it's still actually there. Not being able to delete it is the issue that led me here in the first place.
@typorian okay, so if you can download it, it means it has a shared lock on it, not an exclusive lock.
Now something comes to mind: the expiry of stray locks is happening when running cron.
How is cron run on your system?
(check the admin page in OC)
@PVince81 At the moment it is set to Ajax and claims cron ran successfully. I can set it to webcron or (system) cron as well.
@typorian hmm, when using ajax cron it doesn't always run all the jobs, just a few. The reason is that some jobs might take longer and PHP might run into a timeout when run from the web.
Now if you switch to system cron I'm not sure it will work, because your provider needs to be able to trigger OC cron from a system cron job. If they had set it up initially, OC would have automatically switched to that mode already. You could ask your provider if they could switch to system cron for your instance.
Question to all other reporters on this issue: which cron mode do you have?
Also AJAX
Does anyone here NOT have ajax cron and still see the issue?
@PVince81 You are correct, I cannot switch to php or system cron, apparently. BUT: I tried anyway, and after a while it told me that there is apparently a problem with running the cron jobs. I switched back to Ajax and loaded some random pages in ownCloud just to trigger it, and voila: I was just now able to delete the file. I don't generally use the website that much, so maybe the issue is that Ajax relies on the web frontend to trigger the jobs; they don't always run, as you said, and maybe the client does not trigger them at all?
@typorian okay, that's good to know.
So maybe we need to find a solution for environments that rely solely on ajax cron.
Some ideas:
CC @DeepDiver1975 @butonic
Don't know if this helps:
Getting: Fatal webdav Exception: {"Message":"HTTP\/1.1 423 ...
on ownCloud 9.0.2 (stable)
Cron activated (however, it says last execution 5 months ago)
crontab -u www-data -e
*/15 * * * * /usr/bin/php /var/www/owncloud/occ files:scan --all > /dev/null
Cronjob runs
Question to all other reporters on this issue: which cron mode do you have?
AJAX
Also using AJAX. I'll be trying CRON on both my installations.
Also using AJAX. Will try CRON early next week
Ajax doesn't clean properly, how ironic :wink:
If possible on your system, always try using system cron for more accurate results. There are more and more background jobs being added nowadays to do work that cannot be done within a single PHP request because it would risk timing out (for example, expiring versions or the trashbin).
There might still be a slight chance to have ajax cron run more often. So far I noticed that it only runs once per page load. Maybe this could be increased to be done once every 15 minutes within any already open ownCloud page.
Ajax doesn't clean properly, how ironic
Drat, you beat me to it.
There might still be a slight chance to have ajax cron run more often.
Well, since I don't really use the web interface, I think it runs perhaps once a month... I guess that's bad? I have a bunch of bookmarks that I check on a regular basis; I added the cron.php to them. (Is it sufficient to open www.abc.de/owncloud for Ajax to run or do I have to link to the cron.php?)
As soon as my webhoster's promotional "a euro a month" period has passed, I'll switch to another package that supports cronjobs. I was a bit surprised to see that it has SSL certificates but no cron jobs.
There might still be a slight chance to have ajax cron run more often.
What about integrating an AJAX-run-thingy in the clients? If a client detects AJAX-based cron, it could offer an option to run the script automatically.
Is it sufficient to open www.abc.de/owncloud for Ajax to run or do I have to link to the cron.php?
Opening any ownCloud web page as a logged-in user would run cron.php once, no need to call it directly. Now thinking of it, if you have a computer that is always online you might be able to have something ping the ownCloud server's "cron.php" URL regularly to force it to run (that might be what webcron is about, not sure).
Every cron.php run will execute only a few of the jobs in oc_jobs, not all of them, due to the risk of timeouts.
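If you do set up such a ping, a sketch of the cron entry on any always-online machine (the URL is an example taken from the earlier comment) could be:
# poll ownCloud's cron.php every 15 minutes from a remote machine
*/15 * * * * curl --silent https://www.abc.de/owncloud/cron.php > /dev/null 2>&1
Whether it actually executes jobs this way presumably depends on the "Webcron" mode being selected on the admin page, as discussed above.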
What about integrating an AJAX-run-thingy in the clients?
There used to be a ticket where this was discussed in the past and rejected. I don't remember the reasons, and I can't seem to find it. @DeepDiver1975 do you remember?
Opening any ownCloud web page as a logged-in user would run cron.php once
But if I'm not logged in, I have to call it directly, right?
that might be what webcron is about, not sure
At least that's what I thought it would do, too, after reading the description you guys placed there.
if you have a computer that is always online you might be able to have something ping the ownCloud server's "cron.php"
I'll botch up something like that with my NAS. Till then, I will use cURL and the Windows Task Scheduler to do the magic.
There used to be a ticket where this was discussed in the past and rejected.
I don't know how many are out there who rely solely on the clients. But if they are forced to use AJAX and don't have the proper means to tinker their way around it, some client-based help might circumvent a lot of problems as a second-best solution.
Nonetheless, thank you so far for your help!
I cleaned the oc_file_locks table - it is empty - but I still get error 423 with certain files:
{"reqId":"8TlQgrXtV+is28PODYIV","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"workspace\\\/.metadata\\\/.plugins\\\/org.eclipse.core.resources\\\/.projects\\\/sudoku\\\/org.eclipse.jdt.core\\\/state.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('state.dat', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('workspace\\\/.meta...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T12:55:53+00:00","method":"PUT","url":"\/remote.php\/webdav\/workspace\/.metadata\/.plugins\/org.eclipse.core.resources\/.projects\/sudoku\/org.eclipse.jdt.core\/state.dat","user":"user"}
I am running redis as a memcache.
cron.php is run every 15 minutes by a crontab entry; in the web interface it says it was run a few minutes ago.
@e-alfred are you using redis for locking too? If yes, then oc_file_locks is not used; redis is. You might want to clear the redis cache too, then.
Yes, Redis for both caching and locking. I flushed the Redis cache and will see what happens.
@PVince81 The problem still prevails; interestingly, I am getting a 423 response for certain files only for one user with synced hidden files (Git repositories and Eclipse configuration).
Here are two examples:
{"reqId":"EguF+W00ZGHGG+M\/L3+Y","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"workspace\\\/.metadata\\\/.plugins\\\/org.eclipse.core.resources\\\/.projects\\\/sudoku\\\/org.eclipse.jdt.core\\\/state.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('state.dat', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('workspace\\\/.meta...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T14:52:29+00:00","method":"PUT","url":"\/remote.php\/webdav\/workspace\/.metadata\/.plugins\/org.eclipse.core.resources\/.projects\/sudoku\/org.eclipse.jdt.core\/state.dat","user":"user"}
{"reqId":"la+uRl6ZkWB6uigOyEfH","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"test\\\/test\\\/test\\\/.git\\\/refs\\\/heads\\\/test\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('test', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('teaching\\\/test...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T14:52:28+00:00","method":"PUT","url":"\/remote.php\/webdav\/test\/test\/test\/.git\/refs\/heads\/test","user":"user"}
Okay, I ran occ files:scan --all and now I do not get any locking messages like the above anymore.
owncloud-9.1.1-1.fc24.noarch here. The command occ files:scan --all did not solve the problem.
We have a similar problem on a small auxiliary installation running ownCloud 9.1.3. Is this issue already understood and in the pipeline for fixing?
Files are sometimes locked, and neither cleaning up the oc_file_locks table nor occ files:scan --all solves the problem. On this particular server the cron jobs had not run for a long time due to misconfiguration. We corrected that and ran the cron job a few times by hand while trying to resolve the problem. It did not help.
The ttl entries in oc_file_locks are set to some insanely high values. What should they normally be? 3600s?
The problem appears for a folder which is an "External Mount" pointing to the local disk on the same server and then shared with a user by the administrator.
Transactional File Locking is not enabled -- should it be?
Here are the server error messages.
{"reqId":"anLv5s55dWcTgOOydVOV","remoteAddr":"ip.address","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"Data\\\/AutoGen\\\/GpsPhotoDb\\\/PhotoListByLatLng\\\/6.3 81.2.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1070): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #57)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(511): Sabre\\\\DAV\\\\Server->updateFile('Data\\\/AutoGen\\\/Gp...', Resource id #57, NULL)\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"service\"}","level":4,"time":"2017-01-17T18:11:10+00:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/Data\/AutoGen\/GpsPhotoDb\/PhotoListByLatLng\/6.3%2081.2.dat","user":"service"}
@moscicki this issue here is about people using ajax cron, where ajax cron doesn't run often enough to trigger the oc_file_locks cleaning background job.
If you say that even clearing that table doesn't solve the problem, then it's a problem that has not been reproduced and understood yet. The TTL defaults to 3600 here https://github.com/owncloud/core/blob/v9.1.3/lib/private/Lock/DBLockingProvider.php#L100. Note that the ttl column stores an absolute expiry timestamp (the script earlier in this thread compares it to UNIX_TIMESTAMP()), which is why the raw values look so high.
From my understanding, it's not that the lock isn't cleared when clearing the table. The problem is that the lock reappears after clearing and stays there.
The posted exception is about an upload (WebDAV PUT).
Usually the unlock is triggered by the fclose() after the upload. If that fails, it happens during PHP garbage collection.
However, it was observed that if the upload is aborted and the PHP connection is lost, it is likely that the fclose() code is never reached and the GC doesn't run any more, or can't run. This was discovered with PHP 7 here: https://github.com/owncloud/core/issues/22370.
See https://github.com/owncloud/core/issues/22370#issuecomment-273442712 for possible fixes.
If no connection abortion or timeouts were involved, then the problem might be somewhere else.
@PVince81: I created a new issue for this because it looks like the exception causing this is different: #26980
I have had the same problem since today. I already cleared the database table
mysql> DELETE FROM oc_file_locks WHERE 1;
Query OK, 0 rows affected (0.00 sec)
but when I try to move a file to another directory using the web interface, I get the lock error message and see this in my log:
OCA\DAV\Connector\Sabre\Exception\FileLocked: HTTP/1.1 423 "Atzumer Teich.mp4" is locked
Repeating the move procedure results in the same error; no move is possible.
We also had multiple folders locked for multiple users, which couldn't be used or deleted... This was on CentOS 6 with cPanel. The only thing that worked for us was configuring redis as explained here and here.
Closing in favor of a more generic ticket about ajax cron: https://github.com/owncloud/core/issues/27574
Please discuss possible approaches there
Should this issue really be closed in favor of #27574?
I have locked files even though I use "cron" as the cron mode.
Furthermore, a solution for how to unlock the files is still needed.
See also: https://help.nextcloud.com/t/file-is-locked-how-to-unlock/1883/10
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.