From https://github.com/nextcloud/server/pull/17807#issuecomment-558101692:
30 seconds is a reasonable default. If we require a longer timeout on some requests, we should specify them on those requests individually
This may be correct and technically valid from the devs' point of view.
However, please make the timeout for HTTP requests one or more configurable items in config.php
(or elsewhere) to better meet the real-world requirements of server admins and users (a sketch of what such an option could look like follows below).
This would be a HUGE improvement for Federated Sharing.
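For illustration only, the kind of switch being asked for might look something like this in config/config.php. The key name below is hypothetical; it is not an option Nextcloud actually reads today, just a sketch of the request:

```php
<?php
// config/config.php (sketch only; 'http.client_timeout' is a hypothetical key,
// not a setting the server currently understands)
$CONFIG = array(
  'trusted_domains' => array('cloud.example.com'),

  // Per-instance override for outgoing HTTP requests
  // (app store downloads, federated sharing, ...), in seconds:
  'http.client_timeout' => 300,
);
```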
I'm getting a lot of cURL error 28 timeouts because files that are too large can't be fetched within 30 seconds.
Is it possible to change the timeout manually right now?
How is this STILL not addressed?? Hello?? I can't upload my files or install any decent app due to this!
As nickvergessen said in #17808:
this value here only affects your nextcloud connecting with other nextclouds, the appstore, .... Not the user-to-nextcloud connection
So, if you have issues with uploading files to your Nextcloud (without federated sharing), you are looking at a different issue, probably a configuration issue in your webserver and/or PHP.
How much bandwidth does your Nextcloud have that it gets a timeout during your app installation?
Just came here to say that this PR fixed a timeout that made me unable to download the larger apps (e.g. Community Document Server).
That is, using the docker nextcloud apache image.
It is crazy to not be able to update apps, like the Mail app, because of a hard-coded timeout while on a 30 Mbit/s fiber connection!
Why not add a "Cancel" button and set an unlimited timeout?
To provide another data point: my server here in Australia sits on a broadband connection with a reported speed (according to ozspeedtest.com) of 42.92 Mbps, and with each upgrade of Nc, i'm modifying lib/private/Http/Client/Client.php to a higher timeout value.
[…] 42.92 Mbps, and with each upgrade of Nc, i'm modifying lib/private/Http/Client/Client.php to a higher timeout value.
Why? With your connection you should be able to download apps with a size of up to 150MB. Nextcloud 18.0.1 also sets the default timeout value to 120 seconds, which means you're able to download up to 630MB, which is more than enough right now.
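(For anyone checking the arithmetic behind that figure: 42.92 Mbit/s × 120 s = 5150 Mbit ≈ 644 MB, so roughly 630 MB once you allow for some protocol overhead, and assuming the link actually sustains its advertised speed for the whole transfer.)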
@KopfKrieg: In theory, yes. In practice, in my experience, no. When i reinstalled 18.0.0 from scratch recently, and tried to install basic apps like Calendar and Contacts, the installations failed unless i set a higher timeout value. i just upgraded to 18.0.1 yesterday, so i can't remember what that timeout value was; but after the upgrade, i still needed to set the timeout to a higher value (i ended up setting it to 300s) to get an app update to work. Cf. also https://github.com/janis91/ocr/issues/249#issuecomment-578330599.
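For context on what that edit actually touches: Nextcloud's lib/private/Http/Client/Client.php wraps Guzzle, and the hard-coded value ends up as Guzzle's timeout request option. The snippet below is a standalone Guzzle sketch of that knob, not Nextcloud code; the numbers are just the values people in this thread report using:

```php
<?php
// Standalone illustration of the Guzzle option behind the hard-coded value.
// This is not Nextcloud code; it only shows the knob that Client.php sets.
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\RequestOptions;

// Client-wide default: 30 s in older releases, 120 s from 18.0.1 (per the comments
// above); people in this thread raise it locally to 300 s or even 1200 s.
$client = new Client([
    RequestOptions::TIMEOUT => 300,
]);

// Guzzle also accepts the option per request, which is what the maintainer comment
// quoted at the top suggests ("specify them on those requests individually"):
$response = $client->get('https://example.com/some-large-app.tar.gz', [
    RequestOptions::TIMEOUT => 600,
]);
```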
The theoretical bandwidth and the real bandwidth are two very different things in some parts of the world compared to others. Latency also plays a role.
I'm also in Australia and have problems with extreme network latency with *.nextcloud.com servers. Hacking core files to increase a hardcoded timeout should be considered a bug.
Hi there,
I have a 300 Mbps internet connection but I still face this issue. I had to set my cURL timeout as high as 1200 s to be able to install/update my apps. So I don't think internet speed is the only factor causing this issue.
Also, the cURL timeout resets after every update, so it's not practical to reconfigure it every time, especially for non-technical users.
Why not add a "Cancel" button and set an unlimited timeout?
This seems like a good solution 👍, at least temporarily until we get to the root(s) of this issue.
Two days ago I had to increase the curl timeout too because no apps were showing in the WebUI. Even the occ app:update --all
command wasn't able to update and showed no output.
I had to do it again today, as the upgrade overrode the changes, also on a 300 Mbps fiber connection.
Please also note that some users can only dream of speeds above 20Mbps. On a shared line, on an average day, it's more like 2Mbps where I live (and in many other places on the African continent).
Short timeout values that work in the better-connected parts of the world certainly don't work here.
lol, 2Mbps
I'm in the middle of nowhere with a Hughesnet connection... most of the time it's a dial up modem crap speed from the 90's
... most of the time it's a dial up modem crap speed from the 90's
Malconnected of the world, unite! :-) But on a serious note: the lack of good network connectivity is precisely why I, and I presume others in comparable settings, resort to on-premise solutions.
Yes, we can use the likes of Google Docs, but if 90% of users are in-house, while the 10% outside are on slow connections anyway, it makes perfect sense to invert the architecture: put the cloud in the house and open it to the outside. That's why Nextcloud is such a blessing.
Like @akoyaxd said, this is still a big problem when using federated shares. I had to manually increase the timeout in Client.php to make uploading/downloading large files on a federated share work. The correct fix there would be to make the federated connection between two servers use chunking, like the official clients do for regular uploads, but that's a big change. Making this timeout configurable would quickly help a lot of people (see all the issues about this problem!), while fixing federated shares in the background for a future release.
I am, like many, running my Nextcloud off low-end hardware, namely a Raspberry Pi. In my case the theoretical bandwidth I get from my ISP at my flat isn't as much of an issue, nor is the less-than-optimal wireless signal through the incredibly thick walls of 1860s properties in Edinburgh. I'm limited by a small amount of RAM (shared with ZFS), storage over USB 2, Docker running several containers, and not a great deal of processing power. I'm not running a server for a corporation with dedicated rackspace; like lots of people I'm getting sick of having a million online accounts and being space-limited on them unless I hand over a monthly subscription, so I am running a personal server where I have control over my data and access to my apps remotely. In this case, low power usage is a must for a home server, as energy costs, noise and heat all matter when it's on 24/7.
__TL;DR__: A debate on whether or not the default value should be enough is a bit of a moot point in my view, if it turns out that for multiple people it isn't. I am drawn to a private cloud so I can configure things on my hardware, with my data, my way. So by all means have a sensible recommended default, but I don't see any reason why it wouldn't be advantageous to make some of these options configurable (I have also been hit by the upload timeouts and size limits etc.), as we come here with quite different scenarios and hardware/connection speeds, and given the number of bugs filed it seems many people are hitting this issue without a root cause anywhere else.
I never had this problem before, but now I do: there are some apps I'm unable to install on a connection with a 10 MB/s download speed.
That's horrible, and there seems to be no activity on this issue; it should really be addressed in the near future :)