FastAPI: FastAPI websocket can't handle a large incoming stream of data?

Created on 26 Sep 2020 · 14 comments · Source: tiangolo/fastapi

Hi there. I am using a FastAPI websocket on Docker on my Ubuntu server. I have come very far in finishing it, and am currently in the test phase before everything is "completed".

While running different tests, I experienced a strange problem. From my client side, I am simulating an IoT device sending realtime data including "position", meaning that as long as it is moving, it sends a JSON object including its position. I'm extremely confused at this point. Is it my code? Is it my Dockerfile? Is the Linux server the problem? Therefore I feel the need to share all my files.

Before sharing my websocket code, here is the simple client code which I use to send data to my websocket server:

from websocket import create_connection
import json
import time

ws = create_connection("ws://139.59.210.113:5080/ws/testchannel")

time.sleep(1)


def send_json_all_the_time(position):
    generate_json = {"machineID": "001", "RepSensor": position}
    send_json = json.dumps(generate_json)
    print("JSON SENT FROM IoT SENSOR: {}".format(send_json))
    time.sleep(0.3)
    # Send the already-serialized string once; wrapping it in a second
    # json.dumps would deliver a quoted string literal instead of an object.
    ws.send(send_json)
    time.sleep(0.3)


while True:
    for x in range(100):
        send_json_all_the_time(x)

    for x in range(100, -1, -1):
        ws.send("pause a little bit, starting again soon!")
        send_json_all_the_time(x)

The code above is simply run locally from my computer, and sends a lot of realtime data to the websocket server.
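A subtle pitfall with this client pattern: calling `json.dumps` on a string that is already serialized JSON produces a re-quoted string literal, which the server then decodes as a plain string rather than an object. A quick illustration:

```python
import json

payload = {"machineID": "001", "RepSensor": 42}

once = json.dumps(payload)   # '{"machineID": "001", "RepSensor": 42}'
twice = json.dumps(once)     # the whole string re-quoted and escaped

# Decoding the correctly-encoded message yields the original dict...
assert json.loads(once) == payload
# ...but decoding the double-encoded message yields a plain string,
# so the receiver cannot index into it like an object.
assert json.loads(twice) == once
assert isinstance(json.loads(twice), str)
```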

The code snippet below is taken from my websocket script that runs on the Ubuntu Linux server:

@app.websocket("/ws/testchannel")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            data = await websocket.receive_text()
            print("Received data: {} ".format(data))
            await websocket.send_text(f"you sent message: {data}")
            # PROBLEM: if I use the next line,
            # " await connection_manager.send_message_to_absolutely_everybody(data) ",
            # then I get delay on the server. I think this can be solved by
            # optimizing the code. You can test by commenting out line 408 and
            # you will see the difference.
            await connection_manager.send_message_to_absolutely_everybody(data)

    except WebSocketDisconnect:
        print("client left chat.")
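The `connection_manager` used above is not shown in the snippet. A minimal sketch of what it might look like, modeled on the broadcast manager from the FastAPI docs (only the `send_message_to_absolutely_everybody` name comes from the snippet; the rest is an assumption). Fanning the sends out concurrently with `asyncio.gather`, instead of awaiting each client in turn, is one way to reduce the per-message delay the comment describes:

```python
import asyncio


class ConnectionManager:
    """Hypothetical manager; only the broadcast method name comes from the snippet."""

    def __init__(self):
        self.active_connections = []  # objects exposing an async send_text()

    def connect(self, websocket):
        # accept() is already done in the endpoint, so just track the socket.
        self.active_connections.append(websocket)

    def disconnect(self, websocket):
        self.active_connections.remove(websocket)

    async def send_message_to_absolutely_everybody(self, message):
        # Send to all clients concurrently rather than one await per client,
        # so one slow consumer does not stall the whole broadcast.
        await asyncio.gather(
            *(ws.send_text(message) for ws in self.active_connections),
            return_exceptions=True,  # a dead socket should not break the others
        )
```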

"""
The second problem: when an IoT device sends data to the server in real time and at high speed, it gets disconnected randomly!
"""

So while I am watching the terminal, the data comes in... but then "client left chat." occurs randomly!

So my question is: is the problem in the script? The server? The Dockerfile? The Docker commands?

For reference, I share my Dockerfile as well. And if someone is interested, I could invite them to my private GitHub repo.

# Note: with two FROM lines in a single-stage build, only the last one takes
# effect, so the ubuntu:latest base is discarded.
FROM python:3

LABEL maintainer="raxor2k xxx.com"

RUN apt-get update -y

RUN apt-get install -y python3-pip build-essential python3-dev

COPY . /app
WORKDIR /app

RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
RUN pip3 install fastapi uvicorn  # can this one be removed?

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--reload"]

Any answers or help would be appreciated!

question

All 14 comments

Hi,

  • "--reload" in the CMD of the Dockerfile causes high memory and CPU usage. I suggest removing the "--reload" arg.
  • In production it is better to use Gunicorn with Uvicorn workers: gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
    check this out
    Hope these changes help you!


Hi there! I found out the issue, and it was not a problem with the server, but on the client side. I missed a "ping pong" synchronization in my client, which caused the server to "kick the client", but now this is solved :)
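For anyone hitting the same random disconnects: newer Uvicorn releases also expose the WebSocket ping interval and timeout as command-line options, which controls the server side of that keepalive handshake. (This is an assumption to verify against your Uvicorn version; these flags are not in the 0.11.x series pinned in the requirements below, and the values are examples, not prescriptions.)

```shell
uvicorn main:app --host 0.0.0.0 --port 80 \
    --ws-ping-interval 20 --ws-ping-timeout 20
```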


Great!
Still, what I suggested can make your server more stable in production.

So according to you, how should I change this line?:

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]

Is this all I need to change, or something else here as well?:

docker run -d -e PYTHONUNBUFFERED=1 --name run-fastapi-websocket -p 5080:80 fastapiserver

CMD gunicorn main:app  -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:80

Check this out for more detail about Gunicorn's configs.
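If you prefer a config file over command-line flags, Gunicorn also accepts a Python config module via `-c` (the file name below is an example):

```shell
gunicorn main:app -k uvicorn.workers.UvicornWorker -c gunicorn_conf.py
```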

Cool! I will definitely test this out!

I suppose I need to add gunicorn to my requirements.txt file? This is how my file currently looks:

Click==7.0
fastapi==0.54.1
h11==0.8.1
httptools==0.1.1
pydantic==1.5.1
python-engineio==3.12.1
python-socketio==4.5.1
six==1.11.0
starlette==0.13.2
uvicorn==0.11.5
uvloop==0.14.0
websockets==8.1
hbmqtt==0.9.6


add this:

gunicorn==20.0.4


I also tested running the server as you suggested. To be fair, I cannot see any differences. But I will write your suggestion down in the meantime; maybe I will need this later :) Thank you anyway!

Do what needs to be done ✅

gunicorn -k uvicorn.workers.UvicornWorker -w 13

Not able to handle more than 13 concurrent requests.

How can I solve this if I want to handle 500 concurrent requests without increasing the workers?
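Worth noting: one async worker is not limited to one request at a time. With the Uvicorn worker class, each process runs an asyncio event loop and can multiplex many open connections, so the worker count is not the concurrency ceiling the way it is for Gunicorn's sync workers. A small sketch of that multiplexing (pure asyncio, no Gunicorn involved; the 0.2 s sleep stands in for I/O-bound request handling):

```python
import asyncio
import time


async def handle_client(client_id):
    # Stand-in for an I/O-bound request handler (e.g. awaiting a socket).
    await asyncio.sleep(0.2)
    return client_id


async def main():
    start = time.perf_counter()
    # 500 "concurrent requests" in a single process / single event loop.
    results = await asyncio.gather(*(handle_client(i) for i in range(500)))
    elapsed = time.perf_counter() - start
    return results, elapsed


results, elapsed = asyncio.run(main())
# The sleeps overlap, so this takes roughly 0.2 s rather than 500 * 0.2 s.
print(f"handled {len(results)} clients in {elapsed:.2f}s")
```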


Hi,
check this out: https://github.com/benoitc/gunicorn/blob/master/examples/example_config.py

#
# Worker processes
#
#   workers - The number of worker processes that this server
#       should keep alive for handling requests.
#
#       A positive integer generally in the 2-4 x $(NUM_CORES)
#       range. You'll want to vary this a bit to find the best
#       for your particular application's work load.
#
#   worker_class - The type of workers to use. The default
#       sync class should handle most 'normal' types of work
#       loads. You'll want to read
#       http://docs.gunicorn.org/en/latest/design.html#choosing-a-worker-type
#       for information on when you might want to choose one
#       of the other worker classes.
#
#       A string referring to a Python path to a subclass of
#       gunicorn.workers.base.Worker. The default provided values
#       can be seen at
#       http://docs.gunicorn.org/en/latest/settings.html#worker-class
#
#   worker_connections - For the eventlet and gevent worker classes
#       this limits the maximum number of simultaneous clients that
#       a single process can handle.
#
#       A positive integer generally set to around 1000.
#
#   timeout - If a worker does not notify the master process in this
#       number of seconds it is killed and a new worker is spawned
#       to replace it.
#
#       Generally set to thirty seconds. Only set this noticeably
#       higher if you're sure of the repercussions for sync workers.
#       For the non sync workers it just means that the worker
#       process is still communicating and is not tied to the length
#       of time required to handle a single request.
#
#   keepalive - The number of seconds to wait for the next request
#       on a Keep-Alive HTTP connection.
#
#       A positive integer. Generally set in the 1-5 seconds range.
#

workers = 1
worker_class = 'sync'
worker_connections = 1000
timeout = 30
keepalive = 2
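The usual starting point from the Gunicorn docs for the worker count is `(2 x num_cores) + 1`. As a small helper (the function name is mine, not Gunicorn's):

```python
import multiprocessing


def recommended_workers(cores=None):
    """Gunicorn docs' rule of thumb: (2 x cores) + 1 workers."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1


# e.g. a 4-core box:
print(recommended_workers(4))  # 9
```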

Can I use sync as the worker class with FastAPI??
I think the Uvicorn worker class is mandatory??


@PriyatamNayak

No, that was just a sample configuration from the docs. I meant that you have to check this sample to configure your Gunicorn.

Other apps are working fine; I am aware of this.
I am particularly interested in how to use it with FastAPI;
for that I created this issue on the FastAPI GitHub.
The FastAPI documentation is not clear.

