Describe the bug
When using a blueprint with multiple workers on Windows, Sanic fails on startup due to a failure to pickle the route:
```
Exception has occurred: _pickle.PicklingError
Can't pickle: attribute lookup Route on sanic.blueprints failed
```
Code snippet

```python
from sanic import Sanic
from sanic import Blueprint
from sanic.response import json

blueprint = Blueprint("API_blueprint")

@blueprint.route("/")
async def test(request):
    return json({"hello": "world"})

app = Sanic()
app.blueprint(blueprint)

def main():
    app.run(workers=2)

if __name__ == "__main__":
    main()
```
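Background on why this bites Windows specifically: multiprocessing there can only spawn workers (there is no fork), so the app object handed to each worker process, routes included, must survive a pickle round-trip. A quick, Sanic-free illustration of the "attribute lookup ... failed" failure mode (the `handler` name below is just for the demo):

```python
import pickle

def handler(request=None):
    """A module-level function pickles fine: by module + qualified name."""
    return "ok"

# Round-trips, because a worker process can re-import the module and
# look the function up under its recorded name:
assert pickle.loads(pickle.dumps(handler))() == "ok"

# Anything that can't be found under its recorded name fails the same
# way the Route object did ("attribute lookup ... failed"):
try:
    pickle.dumps(lambda request: None)
except (pickle.PicklingError, AttributeError) as exc:
    print(type(exc).__name__)
```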
Expected behavior
It should be possible to run with multiple workers using blueprints, in the same way as when using the Sanic app object directly.
Additional context
A similar issue with pickling was recently fixed in sanic-cors. No idea if the resolution to that one will be helpful
I can look into this.
PR created: https://github.com/huge-success/sanic/pull/1393
The blueprint pickling problem is fixed, though I don't have a Windows machine to verify that it fixes the multiprocessing issue on Windows.
@vltr didn't you say you had some credits in Azure?
The multi-worker mode is not very reliable currently; it requires more robust process management on top of it, and it's not recommended for production environments.
@yunstanford
it's not suggested to run in production environment
That's a bit worrying for me as we were planning on going live with something soon. We've been developing with 1 worker and should have tried more earlier. Are there particular issues highlighting the problems with the current version?
It doesn't have robust worker process management and can't manage workers automatically very well. For example, it doesn't restart a worker if one dies.
For deployment in a production environment, I'd prefer Nginx + supervisord or Nginx + gunicorn.
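For the gunicorn route (on Linux; gunicorn does not run natively on Windows), Sanic of this era ships a worker class for it, so the invocation would look roughly like the following sketch, assuming the app object lives in a module named `myapp.py` (that name is just for illustration):

```shell
# Spawn 4 worker processes behind gunicorn; gunicorn restarts any
# worker that dies, which plain `app.run(workers=N)` does not do.
gunicorn myapp:app \
  --bind 127.0.0.1:8000 \
  --worker-class sanic.worker.GunicornWorker \
  --workers 4
```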
Unfortunately, for now we are on Windows, so I can't use either of those. Maybe I can use waitress with it, though, which does support Windows. We are only serving from localhost (for now), so I think I can live without nginx or similar.
I would suggest docker and kubernetes as the easiest way to spin up multiple processes in production.
@ahopkins I do not have an Azure account (unfortunately), just a Microsoft Technet Insider subscription :wink:
Anyway, I think that subscription is enough so I'm already building a Windows Server VM for Sanic testing and development - since I'll have to work on it also for the auto reloader.
@mungojam does Windows support multiple processes using a single port now? :no_mouth: Sorry for the lame question, it's been years since I last used Windows for anything other than gaming - something I have almost no time for as well :sweat_smile:
@vltr
@mungojam does Windows support multiple processes using a single port now? :no_mouth: Sorry for the lame question, it's been years since I last used Windows for anything other than gaming - something I have almost no time for as well :sweat_smile:
Afraid I have no idea on this one. I haven't done it before, and I think I've been spoilt by frameworks and languages that handled all that for me and did multithreading without a GIL to worry about.
@mungojam
I haven't done it before and I think I've been spoilt by frameworks and languages that handled all that for me and did multithreading without a GIL to worry about
No problem; in fact this has nothing to do with the GIL or even Python. I delivered solutions for the Windows platform for quite some time, and from what I can remember (at least up through Windows Server 2008), one process (no matter which technology it was written in) simply could not share the same socket with another process (something that can easily be achieved on Linux and, I believe, most BSD-based systems, including macOS if I'm not mistaken).
This option (to share a socket connection) can be seen here in Sanic, in the serve_multiple function (which is called when workers is greater than 1 in app.run).
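As a sketch of what that socket sharing involves (this is not Sanic's actual code, and serve_multiple does considerably more; the function and variable names below are made up for the demo): the parent binds one listening socket and hands it to each worker process, so all workers can accept connections on the same port.

```python
import multiprocessing
import os
import socket

def worker(sock, results):
    # Each worker receives the very same listening socket the parent
    # bound; a real server would call sock.accept() in a loop here.
    results.put((os.getpid(), sock.getsockname()))

def serve_shared(workers=2):
    # Parent binds the socket once, then hands it to each worker
    # process, mirroring the idea behind Sanic's serve_multiple.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    sock.listen()
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(sock, results))
             for _ in range(workers)]
    for p in procs:
        p.start()
    collected = [results.get() for _ in range(workers)]
    for p in procs:
        p.join()
    sock.close()
    return collected

if __name__ == "__main__":
    for pid, addr in serve_shared():
        print(pid, addr)
```

Each (pid, address) pair should show a different pid but the identical bound address, since every worker holds the same underlying socket.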
I might be - and probably am - very outdated regarding this feature; my only comfort is that the Windows kernel (NT) has been the same for the past 20 years. That's what I want to check out :wink:
@vltr
in fact this has not to do with GIL or even Python
Sure, my rather off-topic reference was to the fact that you don't need multiple processes with other frameworks like .NET, because they support using multiple cores/processors within the same process just by multi-threading. I'm a bit out of my depth on how that differs from forking, which I believe Windows doesn't support.
that is called if worker is greater than 1 in app.run
Does that mean that if I had multiple workers working the other day, then Windows must support it?
Does that mean that if I had multiple workers working the other day, then Windows must support it?
Perhaps. Try this simple application (based on yours at the start of this issue) and see if it returns different pid numbers (and whether this is the correct way to get the pid under Windows as well):
```python
import os

from sanic import Sanic
from sanic.response import text

app = Sanic()

@app.route("/")
async def test(request):
    return text(str(os.getpid()))

def main():
    app.run(workers=4)

if __name__ == "__main__":
    main()
```
If it does return different pids, then I guess it supports this feature now :smiley:
If it does return different pids, then I guess it supports this feature now
Just tried it on my Windows 10 machine and it works fine. Different pids
@mungojam great news! It's one more thing to not worry about (on Windows) ...
Actually, it doesn't work with blueprint groups:
```python
from sanic import Sanic
from sanic import Blueprint
from sanic.response import text

bp1 = Blueprint('bp1', url_prefix='/bp1')
bp2 = Blueprint('bp2', url_prefix='/bp2')

@bp1.middleware('request')
async def bp1_only_middleware(request):
    print('applied on Blueprint : bp1 Only')

@bp1.route('/')
async def bp1_route(request):
    return text('bp1')

@bp2.route('/<param>')
async def bp2_route(request, param):
    return text(param)

group = Blueprint.group(bp1, bp2)

@group.middleware('request')
async def group_middleware(request):
    print('common middleware applied for both bp1 and bp2')

app = Sanic()
app.blueprint(bp1)

if __name__ == '__main__':
    app.run(workers=2)
```
Error:

```
[2019-05-17 10:20:26 +0800] [21508] [ERROR] Experienced exception while trying to serve
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\site-packages\sanic\app.py", line 1098, in run
    serve_multiple(server_settings, workers)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\site-packages\sanic\server.py", line 847, in serve_multiple
    process.start()
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle: it's not the same object as __main__.group_middleware
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.4\helpers\pydev\pydevd.py", line 1741, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.4\helpers\pydev\pydevd.py", line 1735, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.4\helpers\pydev\pydevd.py", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Segantii/heimdall/app6.py", line 31, in <module>
    app.run(workers=2)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\site-packages\sanic\app.py", line 1098, in run
    serve_multiple(server_settings, workers)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\site-packages\sanic\server.py", line 847, in serve_multiple
    process.start()
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\ProgramData\Anaconda3\envs\heimdall\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle: it's not the same object as __main__.group_middleware
```
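For context on the "it's not the same object" message: pickle serializes a function by recording its module and qualified name, then checks that looking that name up yields the very object being pickled. If the name has since been rebound (as can happen when a handler gets wrapped or re-registered during blueprint group setup), the check fails. A minimal, Sanic-free illustration; `group_mw` and `saved` are made-up names, and this is one common cause of the error, not necessarily the exact mechanism in Sanic:

```python
import pickle

def group_mw(request):
    """First definition; imagine a framework keeping a reference to it."""

saved = group_mw  # a reference to the original function object

def group_mw(request):
    """The module-level name is later rebound to a new object."""

# pickle records "module.qualname"; the lookup now finds the second
# group_mw, which is not the saved object, so it fails with the same
# "it's not the same object" error shown in the traceback above.
try:
    pickle.dumps(saved)
except pickle.PicklingError as exc:
    print(exc)
```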