We are currently failing all builds that install any of the following packages. They fail at the pip install step because of OOM (out-of-memory) errors.
List of known packages that fail so far:
Unfortunately, we do not have a workaround currently. Increasing the memory limits did not help much when we tried it. For now, we suggest following our guide on reducing resource consumption: https://docs.readthedocs.io/en/stable/guides/build-using-too-many-resources.html
I'm creating this issue to have one single place to track this down, and also so that users can subscribe to it for news.
We have a branch on test-builds to try pytorch. A failing build: https://readthedocs.org/projects/test-builds/builds/10553088/
Reference: #6727 #6664 #6537
I believe the current plan is to roll out a set of large-memory builders behind the build:large queue in our backend. Builds routed to those builders would then get ~4GB of memory.
One way to avoid this issue entirely is to use stubs for the big packages instead of installing them. This is a suitable option assuming the only reason you need the package is to prevent imports from failing. I implemented such a solution for TensorFlow using https://github.com/faustomorales/keras-ocr/commit/e25f11be3e0fc8dc4422492f391d60c2a32dc1a2. Naturally, if you're actually using TensorFlow as part of your build, this will not be suitable for you.
The benefits of this approach are that it makes my builds much faster and also, hopefully, saves readthedocs.org some money by avoiding the download of a >400MB package (and use of a high memory instance) on each build that isn't actually needed to generate documentation (for me).
Hope this helps someone out there! :)
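For anyone who wants to try the stub idea without reading the linked commit, here is a minimal sketch of the same technique. This is not the exact code from the keras-ocr commit; the function name `make_stub` and the `stubs/` directory are illustrative. It creates an empty importable package so that `import tensorflow` succeeds during the docs build without downloading the real wheel:

```python
import os

def make_stub(package_name, target_dir="stubs"):
    """Create an importable empty package named `package_name` under target_dir."""
    pkg_dir = os.path.join(target_dir, package_name)
    os.makedirs(pkg_dir, exist_ok=True)
    # An empty __init__.py is enough for a plain `import tensorflow` to work;
    # attribute access (e.g. tensorflow.keras) would need additional stubbing.
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write("")
    return pkg_dir
```

You would then put the `stubs/` directory on `sys.path` before Sphinx imports your code, e.g. at the top of docs/conf.py. Note that Sphinx also ships a built-in alternative for the autodoc use case: listing the heavy packages in `autodoc_mock_imports` in conf.py avoids installing them at all.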
I just tested this on the new infra at https://readthedocs.org/projects/test-builds/builds/10881199/ and it was executed by the build:default queue. It succeeded without problems.
Besides, I haven't heard any more users reporting this kind of issue. We can probably close this issue now.
In case we hit this again, we could add a new rule to our router that checks whether any of these packages appear in the requirements.txt file and, if so, routes the build to build:large. I don't really like that idea, though.
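As a rough illustration of what such a router rule might look like (this is a hypothetical sketch, not Read the Docs code; `LARGE_PACKAGES` and `pick_queue` are invented names and the package set is an example, not exhaustive):

```python
# Known memory-hungry packages (example set only).
LARGE_PACKAGES = {"torch", "tensorflow", "mxnet"}

def pick_queue(requirements_text):
    """Return 'build:large' if any known large package is listed, else 'build:default'."""
    for line in requirements_text.splitlines():
        # Drop inline comments, then strip extras/version specifiers
        # to recover the bare package name.
        name = line.split("#")[0].strip()
        if not name:
            continue
        for sep in ("[", "==", ">=", "<=", "~=", ">", "<", "!"):
            name = name.split(sep)[0]
        if name.strip().lower() in LARGE_PACKAGES:
            return "build:large"
    return "build:default"
```

A real implementation would need to handle other requirement formats too (setup.py, pyproject.toml, conda environment files), which is part of why this approach is fragile.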
@humitos It still has this problem. See https://readthedocs.org/projects/tianshou/builds/10931997/
I added torch to my setup.py and it fails to download the package in some cases. I'll try your "./stubs" method later.
Oh, after re-adding the torch package to docs/requirements.txt, the build seems to be much faster.
@Trinkle23897 happy that it worked. However, the linked build failed with a timeout, not OOM (which is what this issue is about).
Keep me posted if you still have this or any other issue building with these packages.