Sphinx: Add exponential backoff to linkcheck

Created on 2 Aug 2019 · 5 Comments · Source: sphinx-doc/sphinx

Is your feature request related to a problem? Please describe.
We link to GitHub PRs/issues for each entry in our changelog, which is included in our documentation. That means we fire off a hundred or so checks to various pages on GitHub. Whenever GitHub isn't running smoothly, we end up with timeouts, which slow down our development until GitHub is reliable again.

Describe the solution you'd like
Allow the linkcheck builder to retry requests a few times (maybe 3, or a configurable number) with exponential backoff. https://www.peterbe.com/plog/best-practice-with-retries-with-requests gives a quick and easy implementation of how to do this with requests.get, which is what linkcheck currently uses to fetch pages.
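For illustration, the approach from the linked post can be sketched with requests plus urllib3's Retry; the function name and parameter values here are illustrative, not Sphinx defaults:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def requests_retry_session(retries=3, backoff_factor=0.5,
                           status_forcelist=(429, 500, 502, 503, 504),
                           session=None):
    """Return a Session whose requests are retried with exponential backoff."""
    session = session or requests.Session()
    retry = Retry(total=retries, connect=retries, read=retries,
                  backoff_factor=backoff_factor,
                  status_forcelist=status_forcelist)
    adapter = HTTPAdapter(max_retries=retry)
    # Mount the retrying adapter for both schemes
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


# Usage: a drop-in replacement for the bare requests.get linkcheck uses today
# response = requests_retry_session().get("https://github.com/sphinx-doc/sphinx")
```

With `backoff_factor=0.5`, urllib3 sleeps 0.5 s, 1 s, 2 s, … between attempts, so transient GitHub hiccups get absorbed instead of surfacing as timeouts.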

Describe alternatives you've considered
Another thing that might help would be to collect all the links to check in one step, then order them so that requests to the same domain are spread out rather than issued back-to-back. I.e., you would make one request to each unique domain before making a second request to any domain you have already hit.
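A minimal sketch of that interleaving idea, using only the stdlib (`interleave_by_domain` is an invented name, not an existing Sphinx helper):

```python
from collections import defaultdict, deque
from urllib.parse import urlsplit


def interleave_by_domain(links):
    """Reorder links round-robin by domain, so consecutive requests
    hit different hosts wherever possible."""
    by_domain = defaultdict(deque)
    for link in links:
        by_domain[urlsplit(link).netloc].append(link)
    queues = list(by_domain.values())
    ordered = []
    while queues:
        remaining = []
        for q in queues:
            ordered.append(q.popleft())  # one link per domain per round
            if q:
                remaining.append(q)
        queues = remaining
    return ordered
```

Given two links on `a.example` and one on `b.example`, the two `a.example` requests end up separated by the `b.example` one.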

enhancement extensions


All 5 comments

Reasonable. Could you make a PR please?

Hi. I'm trying to solve this.

The current linkcheck behavior is to start worker threads, then put every link it encounters into a queue.Queue. The threads consume that queue, providing parallelization.

I built a solution based on a priority queue, detailed below, but I'm not satisfied with it and would like to widen the scope to use asyncio. That has ramifications for the project, so I'd like to discuss it before getting too deep into the implementation.

PriorityQueue

Replace the work Queue with a queue.PriorityQueue. The priority is the next timestamp when the link should be checked.

The priority is time.time() for new links.
When a server replies with a 429, the thread receiving that response computes the next priority:

  1. if max_retries for that link has been reached, bail out. max_retries is user-controlled, per domain, and can be distinct from linkcheck_retries.
  2. compute the new priority as follows:

    • Retry-After is an int: priority = time.time() + retry_after

    • Retry-After is an HTTP date: priority = http_date_to_unix_timestamp(retry_after)

    • Retry-After absent or invalid: priority = time.time() + exponential_backoff_starting_at_60

  3. queue the link back into the PriorityQueue with that priority

Each worker thread pulls from the queue. If the priority is in the future, the thread requeues the item with the same priority and goes to sleep.
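The priority computation above could look roughly like this; `next_check_time` and the 60-second starting delay are illustrative names/values, not settled API:

```python
import time
from email.utils import parsedate_to_datetime

DEFAULT_DELAY = 60.0  # starting point for exponential backoff (assumption)


def next_check_time(retry_after, attempt, now=None):
    """Compute the priority (next-check timestamp) after a 429 response.

    retry_after: raw Retry-After header value (str) or None.
    attempt: how many times this link has already been retried.
    """
    now = time.time() if now is None else now
    if retry_after is not None:
        try:
            # Retry-After given as a delay in seconds
            return now + int(retry_after)
        except ValueError:
            try:
                # Retry-After given as an HTTP date
                return parsedate_to_datetime(retry_after).timestamp()
            except (TypeError, ValueError):
                pass  # invalid header value: fall through to backoff
    # Header absent or invalid: exponential backoff starting at 60 s
    return now + DEFAULT_DELAY * 2 ** attempt
```

A `(priority, url)` tuple with this timestamp as the first element sorts correctly in a `queue.PriorityQueue`.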

Issues

  • _(existing)_ The number of threads limits concurrency. With a reasonable number of threads (e.g. 3), all threads may be waiting for a response and the work queue does not get consumed. Asynchronously checking links would increase concurrency by keeping the CPU busy.
  • _(existing)_ Use of multiple threads requires thread-safe operations and data structures, which adds complexity. For example, the results writer functions keep opening and closing the result files because each thread needs to write to them.
  • _(new)_ With the need to sleep to honor rate-limits, threads exhibit two sub-optimal behaviors:

    1. wake when there is nothing to do, only to go back to sleep
    2. be sleeping when a new link is queued up

    The added scheduling to handle 429 is a poor substitute for what an event loop has built-in. For example, loop.call_at() would be very convenient.
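For illustration, loop.call_at() schedules a callback at an absolute time on the event loop's clock, which maps directly onto "check this link again at timestamp T":

```python
import asyncio


async def schedule_recheck(url, delay):
    """Arrange for url to be rechecked after delay seconds (sketch)."""
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    # call_at takes an absolute time on the loop's monotonic clock;
    # a real checker would schedule a coroutine here instead of set_result
    loop.call_at(loop.time() + delay, done.set_result, url)
    return await done


result = asyncio.run(schedule_recheck("https://example.com/", 0.01))
```

No hand-rolled sleep/requeue dance: the loop wakes exactly when the recheck is due.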

Suggested changes

  • Replace requests with aiohttp.

    1. Use a wrapper that makes async code synchronous for sync use cases (get event loop, queue the request, run event loop until complete).

    2. Adapt existing code that expects a requests.Response to use an aiohttp.ClientResponse. Both look pretty similar.

    3. Make a compatibility wrapper for arguments where a requests object was expected but aiohttp expects a different input. Consider REQUESTS_CA_BUNDLE, tls_cacerts, and auth_info for the linkcheck_auth setting.

  • Deprecate and eventually drop linkcheck_workers setting.
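The sync wrapper from point 1 could be very small. In this sketch, `fetch_status` is a stub standing in for a real aiohttp request, so the example stays dependency-free; `asyncio.run` is the modern equivalent of "get event loop, run until complete":

```python
import asyncio


async def fetch_status(url):
    # Stand-in for an aiohttp.ClientSession request (assumption);
    # the real coroutine would await session.get(url) and return its status.
    await asyncio.sleep(0)
    return 200


def check_url_sync(url):
    """Synchronous facade over the async checker for sync call sites."""
    return asyncio.run(fetch_status(url))


status = check_url_sync("https://example.com/")
```

Sync callers keep a plain blocking function signature while the implementation underneath goes async.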

Next steps

  1. Replace requests with aiohttp.
  2. Use asynchronous concurrency for linkcheck:

    1. Run an event loop in a separate thread.

    2. The builder queues coroutines that check each link onto that event loop with asyncio.run_coroutine_threadsafe().

  3. Solve this issue by teaching the check coroutine to sleep when it receives a 429.
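Steps 2.1–2.2 might look like the following: a background thread runs the event loop, and the builder submits check coroutines from the main thread (`check_link` here is a stub, not the real checker):

```python
import asyncio
import threading

# 2.1: run an event loop in a separate thread
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def check_link(url):
    # A real checker would issue an HTTP request and, on a 429,
    # await asyncio.sleep(...) before retrying (step 3).
    await asyncio.sleep(0.01)
    return url, "working"


# 2.2: queue coroutines onto the loop from the builder thread
futures = [asyncio.run_coroutine_threadsafe(check_link(u), loop)
           for u in ("https://example.com/a", "https://example.com/b")]
results = [f.result(timeout=5) for f in futures]
loop.call_soon_threadsafe(loop.stop)
```

run_coroutine_threadsafe returns a concurrent.futures.Future, so the synchronous builder can block on .result() exactly as it blocks on worker threads today.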

If there's interest in that plan, I'm happy to break the next steps into separate issues and tackle them.

Possible extension

To squeeze out even more performance for linkcheck, threads could be introduced again: a ThreadPoolExecutor could execute the coroutines. That introduces the complexity of sharing data across threads and is left for future work.
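One way to sketch that extension is to have each pool thread run its own event loop for the coroutines it is handed (`check_link` is again a stub):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def check_link(url):
    await asyncio.sleep(0.01)  # stand-in for an async HTTP request
    return url


def check_in_thread(url):
    # Each worker thread spins up its own event loop for one coroutine,
    # sidestepping cross-thread loop sharing.
    return asyncio.run(check_link(url))


urls = ["https://example.com/page%d" % i for i in range(4)]
with ThreadPoolExecutor(max_workers=2) as pool:
    checked = list(pool.map(check_in_thread, urls))
```

One loop per thread avoids most data-sharing pitfalls, though any state shared between checks would still need thread-safe handling, as noted above.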


My main concern is who maintains it. I'm not good with asyncio and aiohttp, so it would be nice if you became a maintainer of the new linkcheck builder. What do you think?

Note: we also have to make sure the new one works fine on Windows.

Thanks for the quick feedback. I don't mind maintaining linkcheck and helping out with aiohttp. I use linkcheck in factory_boy and think it's a very useful extension generally.

My day job is as a web developer (mostly Python and Django). I've been doing that for about 7 years, so I'm pretty familiar with the Web and Python.

I'm new to async, but eager to work with it. This change is a good opportunity to grow more familiar with async, and fixing the (hopefully few) issues arising from this change will be a great learning experience.

If my inexperience with async beyond a couple of personal testing projects is a big concern, I'm okay sticking with the multi-threaded solution, the priority queue, and requests. It just doesn't seem like the best way to solve the problem, and async offers exciting possibilities.

Sounds good :-) Let's move to the new architecture!
