Consider something like this:
@api.get('/handler')
async def handler():
    ...
    # Slow async function
    await my_async_function()
    ...
    # Slow running sync function
    sync_function()
As written above, it'll work, but the sync function will block the async event loop. Is there any way to avoid this? If the handler is sync instead, then there's no async loop I can use to run my_async_function. Is it possible to get the underlying event loop so I can run my own async function within a synchronous handler? I found a related question; is this my best bet, or should I just stick with the above?
https://github.com/tiangolo/fastapi/issues/825
Hi @Toad2186. One possible solution is to run the slow function in its own thread (https://docs.python.org/3/library/asyncio-eventloop.html#executing-code-in-thread-or-process-pools):
import asyncio
from functools import partial

async def run_in_thread(sync_function, *args, **kwargs):
    # Get the loop that's running the handler.
    loop = asyncio.get_running_loop()
    # run_in_executor expects a callable with no arguments.
    # We can use `partial` (or a `lambda`) for that.
    sync_function_noargs = partial(sync_function, *args, **kwargs)
    return await loop.run_in_executor(None, sync_function_noargs)
@api.get('/handler')
async def handler():
    ...
    # Slow async function
    await my_async_function()
    ...
    # Slow running sync function is now a coroutine
    await run_in_thread(sync_function)
What's nice about run_in_executor is that it will return the result of the synchronous function, or raise the exception raised by the function. You can do something like:
import time

def slow():
    time.sleep(10)
    raise ValueError

async def main():
    try:
        await run_in_thread(slow)
    except ValueError:
        pass
It also forwards arguments seamlessly, but care must be taken with synchronous functions that are not thread-safe.
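For instance, here's a minimal sketch of forwarding arguments through run_in_thread (the multiply function and its values are just placeholders):

def multiply(a, b, *, factor=1):
    return a * b * factor

async def main():
    # Positional and keyword arguments are forwarded to the sync function
    # via the partial created inside run_in_thread.
    result = await run_in_thread(multiply, 2, 3, factor=10)
    assert result == 60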
@mlaradji
Hi, thanks for your reply.
I looked at run_in_executor. As you call it currently, wouldn't it still run in the async thread? Should I create a thread pool and pass it in as the first argument, so that long-running sync functions block threads in the pool instead of blocking the async thread?
Hi @Toad2186. That's a good question. If I'm not mistaken, this is what happens in run_in_executor:
No need to create a new pool; run_in_executor will take care of that if it isn't passed an executor. As the docs say, if None is passed as the executor, the default executor pool is used, and it's created if it doesn't already exist.
I would suggest testing it out to see if it works as intended. Spawning new threads may also add a bit of overhead.
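If you do want explicit control over the pool, a rough sketch (the pool size is arbitrary here) would be to pass your own executor instead of None:

import asyncio
from concurrent.futures import ThreadPoolExecutor
from functools import partial

# A dedicated pool for slow sync work; max_workers=4 is just an example.
executor = ThreadPoolExecutor(max_workers=4)

async def run_in_custom_pool(sync_function, *args, **kwargs):
    loop = asyncio.get_running_loop()
    # Pass the pool explicitly instead of None.
    return await loop.run_in_executor(executor, partial(sync_function, *args, **kwargs))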
As written above, it'll work, but the sync function will block the async event loop. Is there any way to avoid this?
From the async def you can run the blocking sync code in an executor, like @mlaradji explained, or you can use the run_in_threadpool method from Starlette, which is essentially a wrapper around it. I think it also preserves the context on top of running your sync block (don't quote me on that, others may give a better explanation), but it's pretty much the same.
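For reference, a minimal sketch of that (assuming the same sync_function as in the question):

from starlette.concurrency import run_in_threadpool

@api.get('/handler')
async def handler():
    return await run_in_threadpool(sync_function)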
If the handler is sync instead, then there's no async loop I can use to run my_async_function.
If your handler is a sync one, you're still running in the event loop provided by uvicorn.
Is it possible to get the underlying event loop so I can run my own async function within a synchronous handler? I found a related question; is this my best bet, or should I just stick with the above?
Yes, you can, but it requires configuring the webserver manually to get the loop; you can find an example of that in https://github.com/tiangolo/fastapi/issues/825.
I'd stick with the above.
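If you did want to go that route, here is a rough sketch (not taken from the linked issue, just an illustration using my_async_function from the question): capture the loop at startup and submit coroutines to it from the sync handler with asyncio.run_coroutine_threadsafe:

import asyncio

loop = None

@api.on_event('startup')
async def capture_loop():
    # Remember the loop that the server runs the app on.
    global loop
    loop = asyncio.get_running_loop()

@api.get('/sync-handler')
def sync_handler():
    # A sync handler runs in a worker thread, so it's safe to block here
    # while the coroutine runs on the captured event loop.
    future = asyncio.run_coroutine_threadsafe(my_async_function(), loop)
    return future.result()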
For a very good overview of sync in sync, sync in async, async in sync, and async in async, this post is excellent.
There's a utility function for that, run_in_threadpool():
from fastapi.concurrency import run_in_threadpool

@api.get('/handler')
async def handler():
    ...
    # Slow async function
    await my_async_function()
    ...
    # Slow running sync function
    await run_in_threadpool(sync_function)
Sadly it's still not documented, but you can use it already.
Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues.
@tiangolo Can we control the size of the thread pool?
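One possibility (just a sketch; this controls the default executor used by run_in_executor(None, ...), and whether run_in_threadpool shares that same pool depends on the Starlette version) is to install your own default executor at startup:

import asyncio
from concurrent.futures import ThreadPoolExecutor

@api.on_event('startup')
async def set_default_executor():
    loop = asyncio.get_running_loop()
    # All run_in_executor(None, ...) calls will now use this pool;
    # max_workers=8 is just an example.
    loop.set_default_executor(ThreadPoolExecutor(max_workers=8))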