If we don't pass a loop to the run method, it creates a new loop and sets it as the event loop for asyncio.
```python
loop = loop or async_loop.new_event_loop()
asyncio.set_event_loop(loop)
```
But if any other library (for example motor) runs code involving the event loop before we call the run method, it will likely get the default event loop from asyncio before the code above executes. That library would then be running on one event loop while the server runs on another.
The obvious solution is to create a uvloop ourselves and pass it to the run method, or to defer any event-loop-related code until the server starts (it may be awaited from other async functions inside views). We could also choose to use the get_event_loop method and, if uvloop is available but get_event_loop is not returning a uvloop loop, display a warning.
Or we can simply add a detailed note somewhere as part of the documentation.
My 2 cents on this: starting with Python 3.6 and 3.5.3, it will be recommended to never pass the loop explicitly. asyncio.get_event_loop was fixed to always return you the correct event loop when called from a coroutine.
Instead of new_event_loop, calling get_event_loop should also fix this issue. But users might forget to use uvloop, so if uvloop is not being used, Sanic could print a warning or a message to let the user know that uvloop is available.
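To illustrate the fix mentioned above, here is a minimal sketch (assuming Python 3.7+ for `asyncio.run` and `asyncio.get_running_loop`) showing that `get_event_loop()`, when called from inside a coroutine, returns the loop that is actually running it:

```python
import asyncio

async def same_loop():
    # Since Python 3.5.3 / 3.6, get_event_loop() called from inside a
    # coroutine returns the loop that is actually running it.
    return asyncio.get_event_loop() is asyncio.get_running_loop()

result = asyncio.run(same_loop())  # True
```

This is why, inside a Sanic view, `asyncio.get_event_loop()` is safe regardless of how the server created its loop.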
@masnun How should we go about knowing if the user isn't using the uvloop.EventLoopPolicy with their asyncio loop?
Hi, we should be able to use this function - https://docs.python.org/3/library/asyncio-eventloops.html#asyncio.get_event_loop_policy to do so?
I didn't try it out, a quick google got me there.
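Following that pointer, a rough sketch of the check (untested against Sanic itself; `using_uvloop` is an illustrative helper name, and the assumption is that uvloop exposes `EventLoopPolicy`, which it does):

```python
import asyncio

try:
    import uvloop  # optional dependency
except ImportError:
    uvloop = None


def using_uvloop():
    """Return True if the active asyncio policy is uvloop's."""
    policy = asyncio.get_event_loop_policy()
    return uvloop is not None and isinstance(policy, uvloop.EventLoopPolicy)


if uvloop is not None and not using_uvloop():
    # This is roughly where Sanic could emit its warning.
    print("uvloop is installed but not in use; consider "
          "asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())")
```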
Taken care of in sanic 0.2.0 I think. Closing for now!
Don't think it's fixed, but ok.
I'll reopen then @masnun
Thanks. I will try to come up with some ideas or perhaps a PR. Let's see.
I'm having some problems/confusion around mixing sanic loops and workers when using asyncio or aiohttp as well.
Passing in both an explicit loop (asyncio / uvloop) and specifying multiple workers, e.g. `app.run(host="0.0.0.0", port=8000, loop=loop, workers=4)`, gives these errors:
```
2017-01-21 16:30:58,476: INFO: Goin' Fast @ http://0.0.0.0:8000
2017-01-21 16:30:58,476: INFO: Spinning up 4 workers...
Assertion failed: (unsigned) fd < loop->nwatchers (src/unix/linux-core.c: uv__io_poll: 336)
Assertion failed: (unsigned) fd < loop->nwatchers (src/unix/linux-core.c: uv__io_poll: 336)
Assertion failed: (unsigned) fd < loop->nwatchers (src/unix/linux-core.c: uv__io_poll: 336)
```
However, omitting the `loop=loop` now gives these errors:
```
  File "/usr/lib/python3.5/asyncio/subprocess.py", line 212, in create_subprocess_exec
    stderr=stderr, **kwds)
  File "uvloop/loop.pyx", line 2176, in __subprocess_run (uvloop/loop.c:39551)
  File "uvloop/loop.pyx", line 2173, in uvloop.loop.Loop.__subprocess_run (uvloop/loop.c:39469)
  File "uvloop/future.pyx", line 241, in __await__ (uvloop/loop.c:110786)
RuntimeError: Task <Task pending coro=<Sanic.handle_request() running at /usr/lib/python3.5/site-packages/sanic/sanic.py:228>> got Future <Future pending> attached to a different loop
```
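For context, that `RuntimeError` can be reproduced in isolation whenever a Future created on one loop is awaited by a task running on another; a minimal sketch, independent of Sanic, uvloop, or subprocesses:

```python
import asyncio

async def waiter(fut):
    return await fut

loop_a = asyncio.new_event_loop()
loop_b = asyncio.new_event_loop()

fut = loop_a.create_future()  # this Future is bound to loop_a

error = None
try:
    # The task runs on loop_b but awaits a Future attached to loop_a.
    loop_b.run_until_complete(waiter(fut))
except RuntimeError as exc:
    error = str(exc)

loop_a.close()
loop_b.close()
# error mentions the Future being "attached to a different loop"
```

This is exactly the situation that arises when a library binds its futures to one loop while the server drives requests on another.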
I'm attempting to create a simple video thumbnail service using ffmpeg:
https://github.com/ubergarm/pythumbio/blob/aiohttp/server.py
Both of these issues can be easily reproduced using the existing `aiohttp_example.py` and either omitting the `loop=loop` or adding `workers=4` etc.
Anyway, it's exciting to see Python beginning to catch up a little in the concurrent services world!
Thanks,
-John
@ubergarm that example was incorrect, I'm updating it. Below is the updated version (which works). With multiple workers (at least) it doesn't seem that passing the event loop is the correct way.
```python
from sanic import Sanic
from sanic.response import json
import asyncio
import aiohttp

app = Sanic(__name__)


async def fetch(session, url):
    """
    Use session object to perform 'get' request on url
    """
    async with session.get(url) as response:
        return await response.json()


@app.route("/")
async def test(request):
    """
    Download and serve example JSON
    """
    url = "https://api.github.com/repos/channelcat/sanic"
    async with aiohttp.ClientSession(
            loop=asyncio.get_event_loop()) as session:
        response = await fetch(session, url)
        return json(response)

app.run(host="0.0.0.0", port=8000, workers=2)
```
Note that we no longer pass the loop to run, and we don't create a new event loop; instead we use asyncio.get_event_loop() to use the current event loop directly in test.
@masnun based on the above comment by @1st1, I think the problem here is actually twofold:
1) our examples are showing the wrong way of doing it
2) it probably doesn't even make sense to allow passing the loop, at least in the multiple workers implementation. That is, we should remove that as an argument to serve_multiple.
> instead we use asyncio.get_event_loop() to use the current event loop directly in test.
This seems to be the correct approach.
Thanks for clearing this up, things are moving fast!
I still wanted a way to limit the maximum concurrency. The way I came up with to get the sanic app's loop out to a global semaphore object was to use the before_start method, which seems to pass the sanic app itself and the loop once for each worker.
Is there a cleaner or preferred way to do something like this yet?
```python
from sanic import Sanic
from sanic.response import json
import asyncio
import aiohttp

app = Sanic(__name__)

sem = None


def init(sanic, loop):
    global sem
    CONCURRENCY_PER_WORKER = 4
    sem = asyncio.Semaphore(CONCURRENCY_PER_WORKER, loop=loop)


async def bounded_fetch(session, url):
    """
    Use session object to perform 'get' request on url
    """
    async with sem, session.get(url) as response:
        return await response.json()


@app.route("/")
async def test(request):
    """
    Download and serve example JSON
    """
    url = "https://api.github.com/repos/channelcat/sanic"
    async with aiohttp.ClientSession(
            loop=asyncio.get_event_loop()) as session:
        response = await bounded_fetch(session, url)
        return json(response)

app.run(host="0.0.0.0", port=8000, workers=2, before_start=init)
```
The before_start solution is smart. Maybe we can promote that in the examples/docs as a way of passing the loop to external libraries which need the event loop.
> instead we use asyncio.get_event_loop() to use the current event loop
If the code is executed before you call the run method on Sanic, it will get a different loop.
One solution is to use before_start like mentioned above to initialize other libraries before the app actually starts up.
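A stripped-down sketch of that pattern, with no Sanic or aiohttp involved (`init`, `state`, and `handler` are illustrative names; the `(app, loop)` signature mirrors before_start, and on modern Python the Semaphore no longer takes a `loop=` argument, binding lazily instead):

```python
import asyncio

state = {}


def init(app, loop):
    # Stand-in for a before_start hook: it runs once per worker with
    # that worker's loop, so loop-bound resources are created here
    # rather than at import time.
    state["sem"] = asyncio.Semaphore(4)


async def handler():
    # Request handlers then use the shared, correctly-bound object.
    async with state["sem"]:
        return "ok"


loop = asyncio.new_event_loop()
init(None, loop)  # the server would invoke the hook for us
result = loop.run_until_complete(handler())
loop.close()
```

The point is simply that nothing loop-bound is created before the server has decided which loop each worker will run.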
I'm not sure it is even necessary to call asyncio.get_event_loop() when using aiohttp's ClientSession. Looks like this happens for you:
http://aiohttp.readthedocs.io/en/stable/client_reference.html#aiohttp.ClientSession (read about the loop param a few lines down)
So you can go from this:
```python
async with aiohttp.ClientSession(loop=asyncio.get_event_loop()) as session:
```
to just this:
```python
async with aiohttp.ClientSession() as session:
```
and it should be the same I think.
I think this should be closed per https://github.com/channelcat/sanic/pull/348
It's nice that you want Sanic to be fast, and it is fastest with uvloop, but this should be resolved by documentation, not magic.
@JendaPlhak which is why we recommend that users (if they need to share the event loop that gets created by sanic) use the before_start or after_start server events to share the loop amongst their other libraries.
Sorry if that wasn't clear through the documentation. I agree that we shouldn't be setting the global loop like that as well.
Ok, and are you going to change that? (Since usage of mentioned arguments is fairly inconvenient)
Sharing a global loop that's instantiated before a Sanic application is run is, in my opinion, an anti-pattern. Going to close this again; will reopen if necessary.
@1st1 sorry, late to this party. But can you clarify what you mean by:
> My 2 cents on this: starting with Python 3.6 and 3.5.3, it will be recommended to never pass the loop explicitly. asyncio.get_event_loop was fixed to always return you the correct event loop when called from a coroutine.
Do you mean passing the loop specifically in the context of coroutines? Or do you mean to workers, etc? How else are we to add tasks to a loop from an executor?
> Do you mean passing the loop specifically in the context of coroutines? Or do you mean to workers, etc? How else are we to add tasks to a loop from an executor?
Whenever you're in async/await context, i.e. in the body of a coroutine, it's safe to call asyncio.get_event_loop() to get the loop that is running the coroutine. So there is no need to pass the loop around in your async/await code.
If, however, you're writing some callback based code (read Protocols) then you'll still need to pass the loop reference.
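A minimal sketch of what "passing the loop reference" looks like in callback-based protocol code (`Echo` is an illustrative name, not anything from Sanic):

```python
import asyncio


class Echo(asyncio.Protocol):
    # Callback-based code has no running-coroutine context from which
    # the loop could be inferred, so it is captured explicitly here.
    def __init__(self, loop):
        self.loop = loop
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)
        # The explicit loop lets callbacks schedule coroutines, e.g.:
        # self.loop.create_task(some_coroutine(data))


loop = asyncio.new_event_loop()
proto = Echo(loop)  # the loop is handed in at construction time
loop.close()
```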
Hey, I want to use before_start and after_start but they are no longer among the arguments of the run function.
What is the best way to do this? Thank you.
```python
def run(self, host=None, port=None, debug=False, ssl=None,
        sock=None, workers=1, protocol=None,
        backlog=100, stop_event=None, register_sys_signals=True,
        access_log=True):
```