I'm seeing repeated attempts to upload modest-sized files for which the server generates 507 Insufficient Storage errors. In each case it's a small .DOCX or PDF file, perhaps a page or two long. The users all have 50 GB of quota (or more) and the organization as a whole has only 25 GB of files. They have over 6,000 files, the largest of which is only 2 GB.
The server is a relatively new install of OC 8.0.2 (upgraded from 8.0.0) running on CentOS 7.0 with a MariaDB 5.5 database and a Swift object store. Some of their users are running the 1.8 client, others are still on the 1.7 client. All users are generating similar errors.
The logs show repeated attempts to upload files, with the following result. Apparently no files have successfully synced (uploaded) for the past two weeks, so this problem predates the recent update from 8.0.0 to 8.0.2.
Apache log entries look like this:
12.34.56.78 - [email protected] [23/Mar/2015:09:47:29 -0300] "PUT /remote.php/webdav/InspireAA%20Desktop/Referral%20Letters/Person%20Name.pdf HTTP/1.1" 507 192
The corresponding ownCloud log entries look like this:
Mar 23 09:47:29 vcloud ownCloud[22695]: {webdav} Exception: {"Message":"","Code":0,"Trace":"#0 [internal function]: OC_Connector_Sabre_QuotaPlugin->checkQuota('InspireAA Deskt...', Object(OC_Connector_Sabre_File), Resource id #420)\n#1 \/var\/www\/html\/owncloud\/3rdparty\/sabre\/dav\/lib\/Sabre\/DAV\/Server.php(433): call_user_func_array(Array, Array)\n#2 \/var\/www\/html\/owncloud\/3rdparty\/sabre\/dav\/lib\/Sabre\/DAV\/Server.php(886): Sabre\\DAV\\Server->broadcastEvent('beforeWriteCont...', Array)\n#3 [internal function]: Sabre\\DAV\\Server->httpPut('InspireAA Deskt...')\n#4 \/var\/www\/html\/owncloud\/3rdparty\/sabre\/dav\/lib\/Sabre\/DAV\/Server.php(474): call_user_func(Array, 'InspireAA Deskt...')\n#5 \/var\/www\/html\/owncloud\/3rdparty\/sabre\/dav\/lib\/Sabre\/DAV\/Server.php(214): Sabre\\DAV\\Server->invokeMethod('PUT', 'InspireAA Deskt...')\n#6 \/var\/www\/html\/owncloud\/apps\/files\/appinfo\/remote.php(61): Sabre\\DAV\\Server->exec()\n#7 \/var\/www\/html\/owncloud\/remote.php(54): require_once('\/var\/www\/html\/o...')\n#8 {main}","File":"\/var\/www\/html\/owncloud\/lib\/private\/connector\/sabre\/quotaplugin.php","Line":79}
Can you check if the temp directory is full? Or some other partition on the server? Or is this being uploaded into a shared folder of a user who is out of quota?
The temp directory and the entire OS is on a single 80 GiB partition that is only 3% full. The Swift store has 8 TiB free.
There are 4 users on the system. Three of the users have 50 GiB quotas. The fourth has a 300 GiB quota.
As a total organization they have approximately 7700 files, the largest of which is 2 GiB. Their total used storage is 23 GiB, less than half of the smallest user quota.
So no user is even close to being out of quota and we’re seeing the 507 error even with the user with the 300 GiB quota.
Can you double-check the personal pages of the users who fail to upload? The quota consumption is displayed at the top.
The personal pages are reporting the following for all users.
"You have used 0 B of the available ?"
(Minor edit: all of the users see the above message, with one exception. My admin account on their server reports "You have used 1.7 MB of the available 5 GB".)
If "available" says "?" it might mean that the "disk_free_space" PHP function doesn't work or is disabled for some reason.
Or did you set an explicit quota for every user (especially the ones with the "?")?
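A quick way to check whether disk_free_space is usable from PHP (a sketch, assuming shell access; note that the web server's PHP configuration may differ from the CLI's, so check both if in doubt):

```bash
# Try disk_free_space() from the CLI (the web server's PHP config may differ)
php -r 'var_dump(disk_free_space("/tmp"));'

# Check whether it is listed in disable_functions for this PHP configuration
php -i | grep -i disable_functions
```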
All users have a quota.
Disk free space reporting is working for my admin account, but not for the other users, who are getting the 507 errors when they try to sync/upload.
Just to be clear, we are talking about ownCloud quota (configured on the users page), not filesystem-based quota?
Yes, there are no file system quotas enabled on the server.
I just went through the logs to see if I could determine the exact time/date and events surrounding the failures. I was hoping to find some significant package install or something else that triggered this. After all, they have successfully uploaded thousands of small files to the server.
This seems to fail on a user by user basis at different times.
It's worth pointing out again that the smallest quota of any user is 50 GB and that the total file system usage for the entire organization is only 23 GB (based on the size of the Swift container). Even if one of the users with the smallest quota were the owner of all of the organization's files, they would still be below 50%. In fact, most of the files are owned by the user with the 300 GB quota.
User1 has a 300 GB quota. Last successful upload Feb 27th; 507 errors begin Mar 2nd.
User2 has a 50 GB quota. Last successful upload Mar 6th; next upload is Mar 15th, when 507 errors begin.
User3 has a 50 GB quota. Last successful upload Mar 10th; next upload is Mar 17th, when 507 errors begin.
Hmmm... do these users have a lot of shared files?
There's a bug where the quota usage wasn't reported/propagated properly when shared files are updated: https://github.com/owncloud/core/issues/14596
In your case it could be that share recipients have deleted files inside the shared folder but the owner's quota usage wasn't updated, so OC believes it cannot upload further files there.
You could try checking the value of `SELECT size FROM oc_filecache WHERE path = 'files' AND storage IN (SELECT numeric_id FROM oc_storages WHERE id LIKE '%userid%')` (replace "userid" with the user's id; you might need to look up the correct entry in oc_storages first).
Compare the size you find with the actual size of the user's "data/$user/files" directory.
If the difference is big, it might indeed be the bug I mentioned.
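A runnable form of that check (a sketch only: the database name and credentials are assumptions, and "user1" is a placeholder for the real user id):

```bash
# Find the storage row(s) for the user (placeholder id "user1")
mysql -u owncloud -p owncloud \
  -e "SELECT numeric_id, id FROM oc_storages WHERE id LIKE '%user1%';"

# Read the cached size of that user's "files" directory
mysql -u owncloud -p owncloud \
  -e "SELECT storage, size FROM oc_filecache
      WHERE path = 'files'
        AND storage IN (SELECT numeric_id FROM oc_storages WHERE id LIKE '%user1%');"

# Compare against the size on disk (only meaningful for local storage)
du -sb /var/www/html/owncloud/data/user1/files
```

For an object-store setup the `du` comparison does not apply, since there is no data/$user/files directory on disk; a large gap between the cached size and the real usage would point at the propagation bug mentioned above.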
Yes, they do have a large number of shared files.
The owncloud/data/$user/files directories do not exist (other than for my admin account). We are using Swift as our object store.
The SQL query returns the following values:
User1 -> "-1"
User2 -> "-1"
User3 -> "153560191" (153 MB)
User4 -> "449795" (inactive user)
My admin account -> "1807468"
I just confirmed with User1 (the user with the 300 GB quota) that the local size of their ownCloud folder is 22 GB. This confirms what I'm seeing from the Swift container, knowing that User1 owns the lion's share of the organization's files.
I also confirmed that they have not done mass deletions or re-additions of content to their folders that would come anywhere close to adding up to the user's quota.
This seems to be very much like what is reported in #13975 and what I reported in https://github.com/owncloud/client/issues/2061 where it was decided this was a client issue.
As a temporary measure I removed the quota from all users. This is allowing new files to sync; however, the files which failed earlier are all marked with a nice big red X. The ownCloud folder itself shows that everything is synced.
How can my clients ensure that ALL of their files are correctly synced?
They have almost 7,000 files across many directories. I can't ask them to check every folder and delete and reload all the failed syncs.
If there is a database manipulation that I could run which would resolve this sync issue and make the files appear freshly added, I would be quite happy. Any solution that isn't labour-intensive for the busy executives who are using the shared folders would be acceptable.
So if I understand correctly, the following now works with quota disabled:
And the following fails:
Is this correct?
I still do not understand the issue properly, so I am not sure about the possible workaround.
AFAIK using "./occ files:scan --all" wouldn't fix it when using an object store (@butonic correct me if I'm wrong).
@PVince81 If config.php has 'objectstore' configured as the primary storage, then a file scan will not be able to scan anything; the metadata then resides only in ownCloud.
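A quick way to confirm whether the install uses an object store as primary storage (the path is the one seen in the stack trace above; adjust if your install lives elsewhere):

```bash
# Show the objectstore block from config.php, if present
grep -n -A 8 "objectstore" /var/www/html/owncloud/config/config.php
```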
@PVince81 I would agree with the "now works" scenarios, but would modify the "fails" scenario to include the case where files have never synced from when they were initially added client-side, not just when they are next modified. All files which failed with 507 errors now appear in the user's local ownCloud directory, but are not synced with the server.
So we have the initial bug which caused users to get quota errors, compounded by a bug preventing graceful recovery now that quota has been temporarily turned off. Should I open a new bug report for the latter?
Can you confirm that all the red crosses you see in the clients are due to 507 errors and not other errors?
@guruz @ogoffart does the sync client blacklist files from sync when a 507 error is met several times?
I can't confirm them all as they are scattered across dozens of directories and I'm not in the same time zone as the clients. Nor can I keep asking the client to dig around for more info. These are corporate execs, not IT people.
I can confirm that all of the files in the specific directory which the customer reported to me as not synched were files which had failed initial upload because of the 507 error. In each case there are multiple 507 errors and then that file name never shows up again in the server logs.
I can also confirm that random spot checks of files for which I see 507 errors in the server logs all follow the pattern above. None of the files appear in the logs again after the 507 errors. I checked a total of about 15 files.
The client will blacklist the files if the error 507 is repeated several times.
So I guess this means the files need to be unblacklisted. How can that be done (for non-tech people)?
Or is there a timeout on such blacklisting? (Might make sense when the blacklisting is related to free space.)
I'm guessing we can resolve this with a few mouse clicks through the "Edit Ignored Files" dialogue.
Looks like that was just wishful thinking on my part. The customer reports that only the stock file extension exemptions are in the Ignored Files list. The 507 blacklisted files are not there.
According to https://github.com/owncloud/client/issues/2247 the blacklist is supposed to be periodically cleared, so this would resolve itself. The decision was made not to offer any GUI component to manage the blacklist since it's supposedly auto-cleared.
Our case seems to show evidence to the contrary. This server has many files which were 507-blacklisted at the client several weeks ago and have never been retried.
First thing would be to check whether it is indeed a blacklisting issue.
If you have access to one of the clients, you could use a SQLite client to open the file ".csync_journal.db" inside the local ownCloud folder. Then look at the "blacklist" table, delete the entries if they exist, and see whether those files resync properly.
If yes, then this is indeed a blacklist problem and we need to find a "non-techie" solution for your other users.
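As a concrete sketch of that check with the sqlite3 command-line client (close the sync client first; the local folder path is an assumption):

```bash
cd ~/ownCloud                      # assumed location of the local sync folder

# How many entries are blacklisted, and what do they look like?
sqlite3 .csync_journal.db "SELECT count(*) FROM blacklist;"
sqlite3 .csync_journal.db "SELECT path, retrycount, errorstring FROM blacklist LIMIT 10;"

# If they are all 507 'Insufficient Storage' failures, clear them
# and restart the client so the files are retried
sqlite3 .csync_journal.db "DELETE FROM blacklist;"
```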
If it is blacklisted it should say so in the activity listing.
And the blacklist expires after 24 hours at worst.
I had User1 send me the .csync_journal.db file from her computer. This is the user with the 300 GB quota who owns most of the files. Just as an update, they now have a combined total of 7,899 files in the Swift store totaling 23 GB.
The blacklist table from her database has 3,691 entries, with the following column values:
path:
lastTryEtag: Null (Except two which have values 54f4871ca5736 and 54f07e6610b2b)
lastTryModtime: 1262876580 (ranging to 1427726334)
retrycount: 0
errorstring: Error downloading (URL) - server replied: Insufficient Storage
The last 507 error on the server was on March 30th when we turned quotas off.
Just to provide more info, the other tables have the following numbers of records:
downloadinfo -> 5
metadata -> 4511
uploadinfo -> 0
version -> 1
Can I safely delete all the entries from the blacklist table and send her back the database without causing any corruption or sync issues?
I also have the client log file from her computer. It begins with
It has approximately 3700 entries which look like the following.
|0|(FILENAME)|INST_NEW|Up|1426009981||12395||6|The item is not synced because of previous errors: Error downloading (URL) - server replied: Insufficient Storage|0|0|0|||INST_NONE|
From what I remember, you should be able to delete these entries manually and the client will try to resume them.
So if I send her a slightly out of date copy of the database, will the client sort everything out gracefully?
I can't guarantee it as I'm not familiar enough with the sync client workings.
If you want to be sure, the best approach would be the following:
I did as you advised. With the clean blacklist table, the files with the 507 errors are syncing up nicely. There are 3,700 of them, so it's going to take a while, but I'm seeing 201s streaming across the Apache log.
So that solves our immediate problem with the customer, leaving you guys to sort out the source of the 507 and the issue with the client blacklist.
Let me know if I can assist with testing in any way.
Good to know, thanks.
Not sure if it's related, but see https://github.com/owncloud/core/issues/15601, which I observed with S3.
Basically, if the database contains "-1" for the folder sizes for a long time (even when not justified), it could affect the quota calculation.
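A way to spot those suspect "-1" folder sizes (same assumptions about database name and credentials as in the query sketch above):

```bash
# List storages whose top-level "files" folder has a cached size of -1
mysql -u owncloud -p owncloud \
  -e "SELECT s.id AS storage_id, f.path, f.size
      FROM oc_filecache f
      JOIN oc_storages s ON s.numeric_id = f.storage
      WHERE f.path = 'files' AND f.size = -1;"
```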
Think this will be fixed in the 8.1 release?
I'll set it to 8.1 to look into it. No guarantee though. That's quite deep in the core.
I suspect that the file scanner is needlessly setting the size to -1 in some cases.
@razyr Which client version was it that had the blacklist columns path, lastTryEtag, lastTryModtime, retrycount, errorstring but not lastTryTime and ignoreDuration? These two have existed since 1.7 and ensure that blacklist entries get cleared regularly.
@ckamm It was the 1.7.1 client.

@razyr The blacklist will clear out automatically with client versions >=1.8.0, so that should solve that part of this issue.
@razyr any more luck with 8.1.3 and sync client 2.0.1 ?
@karlitschek @cmonteroluque
backlog or close or QA to verify - THX
assigned to @rperezb to distribute within QA to verify please
pinging @SergioBertolinSG @jvillafanez @davitol @rperezb as per our call this morning, just a reminder
Checked the following cases with a fresh 8.2 RC2 (uploads to check the quota restriction were done through the WebDAV interface with curl):
The behaviour is correct on the server side.
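For reference, an upload of the kind used for these checks might look like this with curl (server URL, credentials, and file name are placeholders):

```bash
# PUT a small test file over WebDAV as a quota-limited user
curl -s -u testuser:password -T test.pdf \
  -o /dev/null -w "HTTP status: %{http_code}\n" \
  "https://example.com/owncloud/remote.php/webdav/test.pdf"
# A 201 (or other 2xx) means the upload was accepted; a 507 means the quota check rejected it.
```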
ownCloud 8.1.3 (daily), Build 2015-10-14T03:40:36+00:00, 85c8af596d622e114292fe847a94eec71ff5c668, also presents the same behaviour described in the previous comment.
OC 9.1.0 and Client 2.2.2 (OS X) - problem still persists
Same here OC 9.1.0 and OS X Client 2.2.2
I can also confirm that this problem still persists.
OC 9.1.0 and Windows Client 2.2.2
Any solution? Same problem as the others.
Same here. OC 9.1. Windows Client 2.2.2, no quotas set.
Errors always look like this:
{"reqId":"OpwzdG+6P3cp46b02PZd","remoteAddr":"87.144.220.227","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 507 Insufficient Storage\",\"Exception\":\"Sabre\\\\DAV\\\\Exception\\\\InsufficientStorage\",\"Code\":0,\"Trace\":\"#0 [internal function]: OCA\\\\DAV\\\\Connector\\\\Sabre\\\\QuotaPlugin->checkQuota('\\\/Fund Research\\\/...', Resource id #163, Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), false)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1034): Sabre\\\\Event\\\\EventEmitter->emit('beforeCreateFil...', Array)\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('Fund Research\\\/E...', Resource id #163, NULL)\\n#4 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#8 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#9 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#10 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/QuotaPlugin.php\",\"Line\":108,\"User\":\"foo\"}","level":4,"time":"2016-07-28T16:45:36+00:00","method":"PUT","url":"\/remote.php\/webdav\/bar.pdf","user":"foo"}
I too am experiencing this problem, running ownCloud on Debian Linux with plenty of free space and quotas nowhere near full.
We increased MySQL connections from 100 to 1,000, changed the wait timeout from 600 seconds to 120 seconds, and changed the sizes of the MySQL caches, and it now looks fine.
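For what it's worth, the two settings named there map to the following MySQL variables (a sketch with the values from the comment above; the cache changes aren't specified, and whether any of this actually addresses the 507s is unverified):

```bash
# Raise the connection limit and lower the wait timeout at runtime
mysql -u root -p \
  -e "SET GLOBAL max_connections = 1000; SET GLOBAL wait_timeout = 120;"

# To make the change permanent, the equivalent lines would go in my.cnf
# under [mysqld]:  max_connections = 1000  /  wait_timeout = 120
```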
I'm probably dealing with this too:
server: owncloud 9.1.0 on ubuntu 16.04
client: windows 2.2
db: mysql
Here is a small dump from the log:
{"reqId":"9VG9qP504Ph7IqFNSq4z","remoteAddr":"192.168.1.200","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 507 Insufficient Storage\",\"Exception\":\"Sabre\\\\DAV\\\\Exception\\\\InsufficientStorage\",\"Code\":0,\"Trace\":\"#0 [internal function]: OCA\\\\DAV\\\\Connector\\\\Sabre\\\\QuotaPlugin->checkQuota('path\\\/20...', Resource id #135, Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), false)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1034): Sabre\\\\Event\\\\EventEmitter->emit('beforeCreateFil...', Array)\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('path\\\/20...', Resource id #135, NULL)\\n#4 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#8 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#9 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#10 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/QuotaPlugin.php\",\"Line\":108,\"User\":\"username\"}","level":4,"time":"2016-08-02T09:53:33+02:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/path\/path\/image2.PNG","user":"username"}
{"reqId":"kYEG0b31k\/aj96wHCF4J","remoteAddr":"192.168.1.200","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 507 Insufficient Storage\",\"Exception\":\"Sabre\\\\DAV\\\\Exception\\\\InsufficientStorage\",\"Code\":0,\"Trace\":\"#0 [internal function]: OCA\\\\DAV\\\\Connector\\\\Sabre\\\\QuotaPlugin->checkQuota('path\\\/20...', Resource id #135, Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), false)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1034): Sabre\\\\Event\\\\EventEmitter->emit('beforeCreateFil...', Array)\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('path\\\/20...', Resource id #135, NULL)\\n#4 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#8 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#9 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#10 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/QuotaPlugin.php\",\"Line\":108,\"User\":\"username\"}","level":4,"time":"2016-08-02T09:57:03+02:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/path\/path\/image2.PNG","user":"username"}
{"reqId":"RYfIKlsj1I3boaGdFG3q","remoteAddr":"192.168.1.200","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 507 Insufficient Storage\",\"Exception\":\"Sabre\\\\DAV\\\\Exception\\\\InsufficientStorage\",\"Code\":0,\"Trace\":\"#0 [internal function]: OCA\\\\DAV\\\\Connector\\\\Sabre\\\\QuotaPlugin->checkQuota('path\\\/20...', Resource id #135, Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), false)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1034): Sabre\\\\Event\\\\EventEmitter->emit('beforeCreateFil...', Array)\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('path\\\/20...', Resource id #135, NULL)\\n#4 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#8 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#9 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#10 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/QuotaPlugin.php\",\"Line\":108,\"User\":\"username\"}","level":4,"time":"2016-08-02T09:57:04+02:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/path\/path\/image1.PNG","user":"username"}
Same problem here. OS X 10.11.6, OC 9.1.0, Client Version 2.2.2 (build 3472).
The problem still persists on ownCloud 9.1. Our users and groups are successfully synchronized with LDAP (Active Directory) and the default storage quota is set to 20 GB in the ownCloud settings (no storage quota set in LDAP).
When a user shares a folder with a group or with other users, only the folder's sharer can add or modify files in the shared folder, while all other users get the "Insufficient Storage" error above.
All users are running client version 2.2.2 on multiple operating systems: Windows 10, Mac OS X 10.10 and 10.11, Ubuntu 14.04.
Setting the default storage quota to "Unlimited" prevents this from happening, but it is not a viable workaround because we have a fairly large setup with more than 4,000 users, so we need storage quotas.
Perhaps these reports of failures on 9.1.0 are the same problem as https://github.com/owncloud/core/issues/25582.
Please try the patch which solved it: https://github.com/owncloud/core/pull/25675
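One way to try that patch ahead of a package update (a sketch only: it assumes shell access, the install path seen in the traces above, and that the PR still applies cleanly to your version; back up first):

```bash
cd /var/www/owncloud                      # install path from the logs above
curl -sL https://github.com/owncloud/core/pull/25675.patch -o quota-fix.patch

# Check that it applies before touching anything
patch -p1 --dry-run < quota-fix.patch

# Apply it for real only if the dry run is clean
patch -p1 < quota-fix.patch
```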
Thanks Sergio, I'll try to apply the patch and report back here.
I deleted my previous comments because I realized I was commenting on a closed issue; I moved them here:
https://github.com/owncloud/core/issues/25582#issuecomment-237278833
New to ownCloud, so sorry if this is the wrong place to report, but I'm still experiencing this issue using the ownCloud 9.1 appliance VM. apt list owncloud reports that I'm on:
owncloud/unknown,now 9.1.0-1.1 all [installed]
And I've run apt-get update && apt-get upgrade, which had no effect.
The symptom is that I can place a single file in a shared directory and the file syncs with the server; however, if I drop in a directory with even ~30 files, the log reports that every file fails due to insufficient storage, similar to all the other reports.
I can't tell whether 9.1.0-1.1 is actually the 9.1.1 patch referenced here, or whether the 9.1.1 release is just not in the default channel for the 9.1 appliance.
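One way to see which version the appliance's repositories actually offer, and from which channel, is standard apt tooling (nothing ownCloud-specific assumed):

```bash
# Show installed vs. candidate versions and the repositories they come from
apt-cache policy owncloud

# Refresh the package lists first if the output looks stale
apt-get update && apt-cache policy owncloud
```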
@oucil this ticket is likely a very old, unrelated bug.
If you're on 9.1.0 you probably saw https://github.com/owncloud/core/issues/25582, which is fixed in 9.1.1. Please upgrade.
@PVince81 I mentioned I tried to upgrade via the repos and apt, but it reports I'm on the latest version as of the last time I tried it a couple of hours ago. How long does it normally take point releases to make it to the channels, and which channel would the appliance use by default?
@PVince81 Never mind, it wasn't related to either; apparently the host of the repos changed the PGP keys, which is why the appliance wasn't updating from the ownCloud repo but was updating the core packages. All fixed now that we're on 9.1.1.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.