Expected behaviour:
All S3 calls should complete successfully.
Actual behaviour:
Getting the following error:
{code}
Error executing "ListObjects" on "https://<bucket>.digitaloceanspaces.com/?delimiter=%2F&prefix=kube-event-exporter%2Fvendor%2Fk8s.io%2Fclient-go%2Fkubernetes%2Ftyped%2Fbatch%2Fv1%2F&encoding-type=url";
AWS HTTP error: Server error: `GET https://<bucket>.digitaloceanspaces.com/?delimiter=%2F&prefix=<File>%2F&encoding-type=url` resulted in a `503 Slow Down` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SlowDown</Code>
  <Message>Please reduce your request rate.</Message>
(truncated...)
SlowDown (server): Please reduce your request rate. -
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SlowDown</Code>
  <Message>Please reduce your request rate.</Message>
  <RequestId></RequestId>
</Error>
{code}
Operating system:
Docker on Ubuntu 18.04
Web server:
Nginx
Database:
MariaDB
PHP version:
Nextcloud version: (see Nextcloud admin page)
15.0.7
Updated from an older Nextcloud/ownCloud or fresh install:
Where did you install Nextcloud from:
Official docker image
Signing status:
No errors have been found.
List of activated apps:
Enabled:
- activity: 2.8.2
- bruteforcesettings: 1.3.0
- cloud_federation_api: 0.1.0
- dav: 1.8.1
- drawio: 0.9.2
- extract: 1.0.0
- federatedfilesharing: 1.5.0
- files: 1.10.0
- files_external: 1.6.0
- files_pdfviewer: 1.4.0
- files_trashbin: 1.5.0
- files_versions: 1.8.0
- files_videoplayer: 1.4.0
- gallery: 18.2.0
- logreader: 2.0.0
- lookup_server_connector: 1.3.0
- notifications: 2.3.0
- oauth2: 1.3.0
- onlyoffice: 2.1.6
- provisioning_api: 1.5.0
- ransomware_protection: 1.3.0
- theming: 1.6.0
- twofactor_backupcodes: 1.4.1
- updatenotification: 1.5.0
- workflowengine: 1.5.0
Nextcloud configuration:
{
    "system": {
        "htaccess.RewriteBase": "\/",
        "memcache.local": "\\OC\\Memcache\\APCu",
        "apps_paths": [
            {
                "path": "\/var\/www\/html\/apps",
                "url": "\/apps",
                "writable": false
            },
            {
                "path": "\/var\/www\/html\/custom_apps",
                "url": "\/custom_apps",
                "writable": true
            }
        ],
        "memcache.distributed": "\\OC\\Memcache\\Redis",
        "memcache.locking": "\\OC\\Memcache\\Redis",
        "redis": {
            "host": "***REMOVED SENSITIVE VALUE***",
            "port": 6379
        },
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": [
            "localhost",
            "cloud.via-justa.com"
        ],
        "datadirectory": "***REMOVED SENSITIVE VALUE***",
        "dbtype": "mysql",
        "version": "15.0.7.0",
        "overwrite.cli.url": "http:\/\/localhost",
        "dbname": "***REMOVED SENSITIVE VALUE***",
        "dbhost": "***REMOVED SENSITIVE VALUE***",
        "dbport": "",
        "dbtableprefix": "",
        "mysql.utf8mb4": true,
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "installed": true,
        "instanceid": "***REMOVED SENSITIVE VALUE***",
        "twofactor_enforced": "false",
        "twofactor_enforced_groups": [],
        "twofactor_enforced_excluded_groups": [],
        "loglevel": 2,
        "maintenance": false
    }
}
Are you using external storage, if yes which one: local/smb/sftp/...
S3
Are you using encryption: yes/no
No
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
No
You can drastically reduce the number of S3 API calls if you set the option "check for changes: never" for each mounted S3 bucket. This way, Nextcloud's file list is only populated from the database instead of requesting all file listings from the S3 service each time a request to the respective folder is made.
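For reference, the same option can also be set from the command line with occ; a minimal sketch, where the mount ID 1 is an assumption (look yours up with files_external:list):
{code}
# List the configured external storage mounts to find the mount ID
php occ files_external:list

# Set "check for changes" to Never (filesystem_check_changes = 0) for mount ID 1
php occ files_external:option 1 filesystem_check_changes 0
{code}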
For this setup to work properly, you should not upload files to the S3 bucket from outside of Nextcloud; otherwise, Nextcloud would not know that a new file exists.
In my setup, I run "$ php ./occ files:scan --all" once a day via a cron job to keep Nextcloud's database up to date, just in case a file is uploaded to S3 bypassing Nextcloud...
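For illustration, such a cron job could look like the following; the container name "nextcloud" is an assumption, based on the reporter running the official docker image:
{code}
# Hypothetical crontab entry on the docker host: run a full file scan
# daily at 03:00 as the www-data user inside the "nextcloud" container
0 3 * * * docker exec -u www-data nextcloud php occ files:scan --all
{code}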
I set the option as @mfridge suggested. I didn't see any 503 errors after that change...
Setting "check for changes: never" will reduce the number of requests; otherwise, Nextcloud fetches the data for every request. Unfortunately, an option like "cache changes for 30 minutes" is not available.
I tried @mfridge's fix, but I'm still getting this error. I'm moving several gigabytes of very small files from my Dropbox to my NC folder, so this is an unusual situation that doesn't happen very often. I guess one solution is to move them in slowly, but it would still be nice if there were a way to hard-limit the number of requests to an S3 backend.
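Nextcloud does not expose such a limit today, but the AWS SDK for PHP that produced the error above does support automatic retries with exponential backoff; a minimal sketch of how a client could be configured to tolerate throttling (the region and endpoint are placeholders, and Nextcloud does not currently pass this option through):
{code}
<?php
// Illustrative sketch only: configure the AWS SDK for PHP S3 client to
// retry throttled requests (such as 503 SlowDown) with exponential backoff.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'  => 'latest',
    'region'   => 'us-east-1',                           // placeholder region
    'endpoint' => 'https://nyc3.digitaloceanspaces.com', // example Spaces endpoint
    'retries'  => 10, // retry up to 10 times instead of failing on the first 503
]);
{code}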