The description is already very clear.
The client downloads all the files and the server does not become unstable.
The client says "waiting" until, after a very long time, it shows:
"An error occurred while opening a folder. Operation canceled"
On the server, htop shows that the php-fpm pool "Nextcloud" consumes all 20 GB of the server's memory, with all cores at 100% load, until the processes are finally killed.
Operating system:
Windows 10
Web server:
Hyper V VM , 4 vCores, 20GB memory
Database:
Postgres 10.10
PHP version:
7.2
Nextcloud version: (see Nextcloud admin page)
17.0.1
Updated from an older Nextcloud/ownCloud or fresh install:
No
Where did you install Nextcloud from:
https://shop.hanssonit.se/product/nextcloud-vm-microsoft-hyper-v-500gb/
Signing status:
No errors have been found.
List of activated apps:
Enabled:
Nextcloud configuration:
{
"system": {
"passwordsalt": "REMOVED SENSITIVE VALUE",
"secret": "REMOVED SENSITIVE VALUE",
"trusted_domains": [
"localhost",
"SENSITIVE VALUE",
"SENSITIVE VALUE",
"SENSITIVE VALUE"
],
"datadirectory": "REMOVED SENSITIVE VALUE",
"dbtype": "pgsql",
"version": "17.0.1.1",
"overwrite.cli.url": "https:\/\/SENSITIVE VALUE\/",
"dbname": "REMOVED SENSITIVE VALUE",
"dbhost": "REMOVED SENSITIVE VALUE",
"dbport": "",
"dbtableprefix": "oc_",
"dbuser": "REMOVED SENSITIVE VALUE",
"dbpassword": "REMOVED SENSITIVE VALUE",
"installed": true,
"instanceid": "REMOVED SENSITIVE VALUE",
"log_type": "file",
"logfile": "\/var\/log\/nextcloud\/nextcloud.log",
"loglevel": "0",
"mail_smtpmode": "smtp",
"remember_login_cookie_lifetime": "1800",
"log_rotate_size": "10485760",
"trashbin_retention_obligation": "auto, 180",
"versions_retention_obligation": "auto, 365",
"simpleSignUpLink.shown": "false",
"memcache.local": "\OC\Memcache\APCu",
"filelocking.enabled": true,
"memcache.distributed": "\OC\Memcache\Redis",
"memcache.locking": "\OC\Memcache\Redis",
"redis": {
"host": "REMOVED SENSITIVE VALUE",
"port": 0,
"timeout": 0.5,
"dbindex": 0,
"password": "REMOVED SENSITIVE VALUE"
},
"logtimezone": "Europe\/Berlin",
"htaccess.RewriteBase": "\/",
"maintenance": false
}
}
Are you using external storage, if yes which one: No
Are you using encryption: No
Are you using an external user-backend, if yes which one: No
Client
Windows 10 client 2.6.1stable-Win64 (build 20191105)
Browser:
Not applicable
Operating system:
Windows 10
Not applicable, synced with Windows client
Tried with:
/etc/php/7.2/apache/ : memory_limit = -1
/etc/php/7.2/fpm/ : memory_limit = -1
/etc/php/7.2/cli/ : memory_limit = -1
And with:
/etc/php/7.2/apache/ : memory_limit = 512M
/etc/php/7.2/fpm/ : memory_limit = 256M
/etc/php/7.2/cli/ : memory_limit = -1
Same result
Allowed memory size of 536870912 bytes exhausted (tried to allocate 33554440 bytes)
Probably php needs more memory
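For reference, the limit reported in that error converts to exactly 512 MiB, which matches the memory_limit tried for the apache SAPI rather than the 256M tried for fpm, so it may be worth double-checking which SAPI actually served the failing request:

```shell
# Convert the byte counts from the PHP error message to MiB (a quick sanity
# check; the numbers are taken verbatim from the error above)
echo "$((536870912 / 1024 / 1024))M limit, tried to allocate $((33554440 / 1024 / 1024))M more"
```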
Redis server /var/run/redis/redis-server.sock:0 went away
Please ensure that redis is available
Edited initial post:
Tried to increase the memory limit for Nextcloud.
(same memory_limit combinations as listed above)
Same result, and as already said, the server consumes ALL the memory until 20 GB / 20 GB are used and then ultimately fails.
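Since htop shows the php-fpm pool eating the memory, it may be worth confirming which limit the pool actually runs with. A minimal sketch, assuming the stock Ubuntu php7.2-fpm service name (adjust paths/names for your VM):

```shell
# A pool file can silently override php.ini via php_admin_value[memory_limit],
# so check everything under the fpm config tree, not just php.ini
grep -Rn "memory_limit" /etc/php/7.2/fpm/

# PHP-FPM only picks up ini changes after a reload
sudo systemctl reload php7.2-fpm
```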
Redis might have died due to the memory consumption (OOM). I would guess the sync client does not paginate and the server is building (not streaming) a response for the 1.2M files, which will kill it.
Well, that does not sound very healthy...
Whoah, syncing 1.2 million files. That's quite a thing. @go2sh might very well be right that the current design wasn't really put together with that in mind. Which probably means it needs quite some changes on server and client, though there might be some ways to decrease the server load if you profile the memory usage of this. Then again, you can simply re-create the problem by going to 2 or 3 million files :roller_coaster:
Which makes me wonder: why do you want to do that? I cannot really think of a scenario... Seems a bit like the 'poke-finger-in-eye' thing :roll_eyes: where the doctor tells you: "don't" :laughing:
If you are interested in digging: https://github.com/nextcloud/server/issues/8962 has some recommendations for indexes to improve the query performance. Unfortunately, they only work for pgsql.
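For illustration, this is the kind of Postgres index discussed in that issue; the exact definition there may differ, so verify against the issue before applying. The index name and database name here are assumptions:

```shell
# varchar_pattern_ops lets Postgres use the index for LIKE 'prefix%' queries
# on oc_filecache.path, which dominate folder listings (pgsql only)
sudo -u postgres psql nextcloud -c \
  "CREATE INDEX IF NOT EXISTS fs_storage_path_prefix
   ON oc_filecache (storage, path varchar_pattern_ops);"
```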
I'd run a more recent PHP and use Redis 5 with Unix sockets instead of TCP.
Still, it doesn't sound like a good thing with so many files. How is the file structure: a flat dir or subfolders?
What is eating the memory: SQL, Redis, Apache, or FPM?
Do you use Redis for the local cache or only for locking?
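Pointing Nextcloud at a Redis Unix socket can be done via occ; a sketch, assuming occ lives in /var/www/nextcloud and the socket path from the error log above:

```shell
# Switch the Redis connection to a Unix socket (port 0 means "use the socket")
sudo -u www-data php /var/www/nextcloud/occ \
  config:system:set redis host --value="/var/run/redis/redis-server.sock"
sudo -u www-data php /var/www/nextcloud/occ \
  config:system:set redis port --value=0 --type=integer
```

Note that the "redis-server.sock:0 went away" line in the log suggests a socket is already configured here, so in this case the socket itself may be dying under memory pressure rather than being misconfigured.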
1M files is not a terribly large collection for some machine learning scientists (it can even be quite modest in data volume, since the files are usually small), and it's too bad Nextcloud is not thinking about working with those. It could have been a nice tool for scientists to sync their collections for processing.
Not looking to hijack, but I think I have a similar problem on a much smaller scale.
Directory with ~6800 files, total 7GB.
For me Nextcloud client sits on "Checking for changes in remote [dirname]", and then fails with "Operation canceled". Client logs show:
[OCC::LsColJob::finished LSCOL of QUrl("https://_snip_/remote.php/dav/files/_snip_") FINISHED WITH STATUS "OperationCanceledError Connection timed out"
Doesn't look like a resource problem though, with RAM sitting steady. The folder also doesn't open in the web GUI, but uploading from the mobile app apparently does work for it. No other folder manifests this problem, but this error kills the entire sync, so the other folders aren't being synced either.
I can reach the folder over the filesystem and access the files with no issue.
I have the same problem: a folder with a lot of small files (2k files) doesn't synchronize in the desktop app and doesn't show in the web page.