Sanic: Workers don't stop on SIGINT

Created on 26 Mar 2017 · 8 comments · Source: sanic-org/sanic

To reproduce this, use:

worker_test.py

from sanic import Sanic
from sanic.response import json

app = Sanic(__name__)


@app.route("/")
async def test(request):
    return json({"test": True})


if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8000, workers=4)

Then run kill -INT <pid of server>.

Finally, you can see that not all of the workers have been killed:

ps aux | grep worker_test.py
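
For convenience, the whole reproduction can be scripted. A rough sketch, assuming the worker_test.py above is in the current directory:

python worker_test.py &            # main process plus the 4 workers
SERVER_PID=$!
sleep 2                            # give the workers time to spawn
kill -INT "$SERVER_PID"
sleep 1
ps aux | grep '[w]orker_test'      # bracket trick excludes grep itself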


All 8 comments

I do not recommend integrating multiprocessing within Sanic. I know some people want it, but I think it is a really bad idea. It is much safer, and potentially more performant, to use a process supervisor such as Supervisord to run multiple Sanic workers.

Supervisord example: start four processes on ports 3000, 3001, 3002, and 3003, and load-balance between them.

Sanic:

import sys
from sanic import Sanic

app = Sanic(__name__)

if __name__ == '__main__':
    if __debug__:
        app.run(host='0.0.0.0', port=8080, debug=True)
    else:
        app.run(host='0.0.0.0', port=int(sys.argv[1]))  # Supervisord passes the port
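
To make the two branches concrete: under a plain python invocation __debug__ is True, while -O or -OO set it to False, which is why the Supervisord command below runs with -OO. A sketch of the two launch modes (python3.6 and my_server.py taken from the config below):

# debug run: __debug__ is True, so the debug branch executes
python3.6 my_server.py

# production run (what Supervisord does below): -OO sets __debug__ to False,
# and the port arrives as sys.argv[1]
python3.6 -OO my_server.py 3000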

Supervisor configuration file:

[program:my_server]
process_name=my_server_%(process_num)s
command=/home/ubuntu/.pyenv/versions/3.6.0/bin/python3.6 -OO /home/ubuntu/my_server.py %(process_num)s
user=ubuntu
directory=/home/ubuntu
numprocs=4
numprocs_start=3000
stdout_logfile=/home/ubuntu/my_server.stdout.log
stderr_logfile=/home/ubuntu/my_server.stderr.log
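
Assuming the section is dropped into Supervisord's config directory, something like the following (a sketch, not from the original comment) reloads the config and checks the four processes:

sudo supervisorctl reread                 # pick up the new [program] section
sudo supervisorctl update                 # start the my_server group
sudo supervisorctl status 'my_server:*'   # one entry per process_num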

Nginx configuration:

upstream my_server {
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;
  server 127.0.0.1:3003;
}

server {

  # fill me in

  location / {
    proxy_pass                  http://my_server;
    proxy_set_header            Host $host;
  }

}
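
A quick smoke test of the load-balanced endpoint (hypothetical; assumes nginx is listening on port 80):

for i in 1 2 3 4; do
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
done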

@nszceta I don't know that I would go that far; I'm running it with multiple workers on a project and it hasn't failed or anything. Since you say it's potentially more performant, are you willing to provide a benchmark comparison? wrk -t 10 -c 100 -d 30 http://localhost:8000 would work for that.

Anyway, if we have an option to run multiple workers (which some people are using), it makes sense to have it respond to SIGINT properly. This isn't a statement about Sanic's multiple workers being a better method for production than Supervisord.
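
For reference, the behaviour being asked for is roughly the following pattern: the parent traps SIGINT (the kill -INT case from the top of this issue) and forwards a shutdown to each child, so no workers are orphaned. This is only a minimal multiprocessing sketch, not Sanic's actual implementation:

import signal
import time
from multiprocessing import Process

def worker():
    # placeholder workload; a real worker would run the server loop here
    while True:
        time.sleep(1)

def main():
    procs = [Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()

    def shutdown(signum, frame):
        # forward the shutdown to every child instead of dying alone
        for p in procs:
            p.terminate()

    signal.signal(signal.SIGINT, shutdown)
    for p in procs:
        p.join()

if __name__ == "__main__":
    main()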

What happens when one of your processes hangs for whatever reason? Does your app keep working normally?

I haven't experienced that yet. Yes, it's just kept working so far... (crosses fingers ;) )

FYI, I don't believe workers are working properly in Sanic (even on the master branch). If you run the following code...

import logging
import os
import random

from sanic import Sanic
from sanic.response import json

log = logging.getLogger("sanic")

app = Sanic()


def bubblesort(A):
    # deliberately slow, CPU-bound work to keep a worker busy per request
    for i in range(len(A)):
        for k in range(len(A) - 1, i, -1):
            if A[k] < A[k - 1]:
                A[k], A[k - 1] = A[k - 1], A[k]


@app.route("/")
async def test(request):
    log.info(os.getpid())
    tosort = random.sample(range(1, 100000), 10000)
    bubblesort(tosort)
    return json({"sorted": tosort})


if __name__ == "__main__":
    app.run(workers=4)

Then hit http://localhost:8000/ from two or three browser tabs; you'll notice that the responses for the tabs are queued one after another and always return the same PID. Tested on Linux kernel 4.9.20-12.lts (SolusOS distro).

@atungw the problem with your test is that most browsers keep connections alive by default, so when you ping the server again from the same browser it is most likely reusing the same connection, which maps back to the same worker process.
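
One way to sidestep connection reuse in such a test (a sketch; curl opens a fresh TCP connection per invocation by default):

# fresh connection each time, so requests can land on different workers;
# compare the PIDs logged by the server
for i in $(seq 1 8); do
  curl -s -o /dev/null http://localhost:8000/
done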

Totally understand this is merged/closed/done, but I wanted to leave this here for anyone who stumbles upon this issue while researching combinations of Sanic and Supervisord. Apologies if this comes across as spam for anyone.

If you run Sanic under Supervisord in debug mode, you'll want to add stopasgroup=true to your Supervisord config; otherwise you wind up with orphaned child processes on restarts. Throwing this config in does wonders.
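
For concreteness, a sketch of where the option goes in a [program] section like the one earlier in this thread (killasgroup is an optional companion I'm adding here, not something mentioned above):

[program:my_server]
; ...settings from the earlier example...
; propagate the stop signal to the whole process group:
stopasgroup=true
; optional: SIGKILL the group if a graceful stop times out
killasgroup=true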
