FastAPI: Serving ML models with multiple workers linearly increases RAM usage.

Created on 26 Nov 2020 · 11 comments · Source: tiangolo/fastapi

Recently, we deployed a ML model with FastAPI, and encountered an issue.

The code looks like this.

import json

from fastapi import FastAPI, File, UploadFile
from ocr_pipeline.model.ocr_wrapper import OcrWrapper

ocr_wrapper = OcrWrapper(**config.model_load_params)  # loads a 1.5 GB PyTorch model

...

@api.post('/')
async def predict(file: UploadFile = File(...)):
    preds = ocr_wrapper.predict(file.file, **config.model_predict_params)
    return json.dumps({"data": preds})

Serving the above with the command below consumes at least 3 GB of RAM:

gunicorn --workers 2 --worker-class=uvicorn.workers.UvicornWorker app.main:api

Is there any way to scale the number of workers without consuming too much RAM?

ENVIRONMENT:
Ubuntu 18.04
Python 3.6.9

fastapi==0.61.2
uvicorn==0.12.2
gunicorn==20.0.4
uvloop==0.14.0

@tiangolo

question

All 11 comments

I believe this issue is a duplicate of #596. Have you tried the workarounds over there, like trying with Python 3.8?

I believe this issue is a duplicate of #596. Have you tried the workarounds over there, like trying with Python 3.8?

There is no problem of RAM consumption growing forever: once it reaches (num_workers) * (model_size), it stops there.

Well, I think it's pretty normal since you are loading 1.5 GB directly into memory in two separate worker processes.

Approximately how long does it take to answer one request?

Well, I think it's pretty normal since you are loading 1.5 GB directly into memory in two separate worker processes.

Approximately how long does it take to answer one request?

Yeah, I know it's normal behavior for the model to be loaded in 2 separate worker processes. The question is: is there any way to make sure that all workers use the same model instance instead of copying it? It's even more important when inference is done on a GPU.

Not exactly sure if this is about using shared memory among different workers. We recently faced a similar issue while running Celery workers. The fix is discussed in the following Stack Overflow post about using shared memory. Not sure whether this technique will help in your scenario or not:

https://stackoverflow.com/questions/9565542/share-memory-areas-between-celery-workers-on-one-machine
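
For PyTorch weights specifically, a related trick (a rough sketch, not something tested in this thread) is to move the parameters into shared memory in the parent process before the workers are forked, e.g. with Module.share_memory(); the loading function and the "model.pt" path below are placeholders:

import torch

# Hypothetical sketch: back a PyTorch model's parameters and buffers with
# shared memory so that forked workers map the same pages instead of copying them.
def load_shared_model(path: str = "model.pt") -> torch.nn.Module:
    model = torch.load(path, map_location="cpu")  # load once, in the parent process
    model.eval()
    model.share_memory()  # moves parameter/buffer storage into shared memory
    return model

This only helps if it happens before the fork, which is what the --preload suggestion below is about.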

This is not a FastAPI-specific question (it's more of a Gunicorn one); it's about sharing memory between processes.

The solution would be to load the model into RAM before Gunicorn forks the workers.

So you need to use --preload:

gunicorn --workers 2 --preload --worker-class=uvicorn.workers.UvicornWorker app.main:api

Your main.py file inside the app folder:

from fastapi import FastAPI

# MY_MODEL and my_router are placeholders for your own model object and router.
def create_app():
    MY_MODEL.load("model_path")  # loaded at import time, i.e. before the fork
    app = FastAPI()
    app.include_router(my_router)
    return app

api = create_app()

If you have more questions about gunicorn _or_ Python _or_ fork _or_ copy-on-write _or_ Python reference counting _or_ memory leaks -> Stack Overflow.

You can very probably close this issue, thank you :)
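
Applied to the snippet from the question, that would look roughly like this (a sketch only; OcrWrapper and config are the names from the original post, and the config import is an assumption about where that object lives). With --preload, this module is imported once in the Gunicorn master, so the 1.5 GB model is loaded a single time and the forked workers inherit it:

import json

from fastapi import FastAPI, File, UploadFile
from ocr_pipeline.model.ocr_wrapper import OcrWrapper
from ocr_pipeline import config  # assumption: config lives in the same package

api = FastAPI()
ocr_wrapper = OcrWrapper(**config.model_load_params)  # loaded before the fork

@api.post('/')
async def predict(file: UploadFile = File(...)):
    preds = ocr_wrapper.predict(file.file, **config.model_predict_params)
    return json.dumps({"data": preds})

Whether the workers actually keep sharing those pages then depends on copy-on-write behavior, as noted above.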

This is not a FastAPI-specific question (it's more of a Gunicorn one); it's about sharing memory between processes.

That's right.

The solution would be to load the model into RAM before Gunicorn forks the workers.
So you need to use --preload:

gunicorn --workers 2 --preload --worker-class=uvicorn.workers.UvicornWorker app.main:api

Your main.py file inside the app folder:

def create_app():
    MY_MODEL.load("model_path")
    app = FastAPI()
    app.include_router(my_router)
    return app
api = create_app()

Since I have the same problem, I'm going to try this. I suspect this will not be a viable solution.

Trying to share PyTorch models that way causes them to stop working. Whenever the model is used for inference (not always, but almost always), the worker hangs, resulting in a timeout and a new worker being spawned.

Here's a thread discussing the same issue: https://github.com/benoitc/gunicorn/issues/2157
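
One workaround that comes up for that kind of hang (an assumption on my side, not something confirmed in this thread) is that the parent's OpenMP/intra-op thread pool does not survive the fork, so capping PyTorch's threads in each worker right after it is forked sometimes avoids the deadlock. A sketch as a Gunicorn config file:

# gunicorn.conf.py -- hypothetical sketch, not a confirmed fix from this thread
workers = 2
preload_app = True
worker_class = "uvicorn.workers.UvicornWorker"

def post_fork(server, worker):
    # runs in every worker right after the fork; limit PyTorch's intra-op
    # threads so the worker does not depend on the parent's thread pool
    import torch
    torch.set_num_threads(1)

and then run it with gunicorn -c gunicorn.conf.py app.main:api.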

If you have more questions about gunicorn _or_ Python _or_ fork _or_ copy-on-write _or_ Python reference counting _or_ memory leaks -> Stack Overflow.

I don't see how this is useful.

Since I have the same problem, I'm going to try this. I suspect this will not be a viable solution.

The --preload option works and does what it is supposed to do; if it does not work in your case, that's a problem with your library or your code.

Since I have the same problem, I'm going to try this. I suspect this will not be a viable solution.

This is working in production with a model taking more than 40 GB of RAM, shared by 8 workers.

I don't see how this is useful.

That spells out the fact that --preload is not magic and will not always work easily, depending on the memory being shared, as in your PyTorch problem.

This is working in production with a model taking more than 40 GB of RAM, shared by 8 workers.

Great to hear! Then please share as much detail as you can about that, because evidently it's not working for everyone, despite --preload working correctly.

Is it a PyTorch model? Is it a pipeline? In my case, I use a SentenceTransformer model and only use it to get embeddings (model.encode()), not to do full inference. Having more details about this could help both me and the OP find a solution.

That might be more useful in the thread I mentioned (or elsewhere) rather than here, since it's not a FastAPI problem.

Just found out that if I change my app methods from:

@app.post("/clusters", response_model=ClusteringResponse)
async def cluster(request: ClusteringRequest, model=Depends(get_model)):
    """Cluster a list of text sentences"""
    ...

to:

@app.post("/clusters", response_model=ClusteringResponse)
def cluster(request: ClusteringRequest, model=Depends(get_model)):
    """Cluster a list of text sentences"""
    ...

removing the async qualifier, the model does indeed work as expected.

@sevakharutyunyan are you able to verify if this works for you?
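
For completeness, an alternative to dropping async entirely (a sketch only; run_clustering is a hypothetical sync helper, the other names come from the snippet above) is to keep the async signature and push the blocking call into the threadpool explicitly with run_in_threadpool:

from fastapi.concurrency import run_in_threadpool

@app.post("/clusters", response_model=ClusteringResponse)
async def cluster(request: ClusteringRequest, model=Depends(get_model)):
    """Cluster a list of text sentences without blocking the event loop."""
    # run_clustering stands in for whatever the blocking model call is
    return await run_in_threadpool(run_clustering, model, request)

Plain def handlers get this threadpool behavior automatically, which is why removing async helps here.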

Just found out that if I change my app methods from:

@app.post("/clusters", response_model=ClusteringResponse)
async def cluster(request: ClusteringRequest, model=Depends(get_model)):
    """Cluster a list of text sentences"""
    ...

to:

@app.post("/clusters", response_model=ClusteringResponse)
def cluster(request: ClusteringRequest, model=Depends(get_model)):
    """Cluster a list of text sentences"""
    ...

removing the async qualifier, the model does indeed work as expected.

@sevakharutyunyan are you able to verify if this works for you?

Removing async doesn't help. The --preload option for gunicorn does indeed work for a small network, but not in every case.
