Rasa: Can't start rasa server with gunicorn

Created on 28 Jun 2017  ·  12 comments  ·  Source: RasaHQ/rasa

rasa NLU version : 0.9.0a3

Used backend / pipeline : spacy_sklearn

Operating system : unix (docker)

Issue:
If I try to start my rasa server from the command line with the following command:
gunicorn -w 4 --threads 12 -k gevent -b 127.0.0.1:5000 rasa_nlu.wsgi

I get a worker timeout, and gunicorn keeps trying to start the app on a new pid. It seems like a loop.

So I tried:
gunicorn -w 1 --threads 1 -k gevent -b 127.0.0.1:5000 --log-level=DEBUG --timeout 120 rasa_nlu.wsgi

The command line printed that the server finished setting up the application.
But if I now open:
http://127.0.0.1:5000/config

I get "Could not get any response".
Is it my fault, or why isn't it possible to make this work with gunicorn?

My config file looks like:
{
  "language": "de",
  "path": "./models",
  "data": "./data/default.json",
  "pipeline": [
    "nlp_spacy",
    "ner_crf",
    "ner_synonyms",
    "intent_featurizer_spacy",
    "intent_classifier_sklearn"
  ],
  "server_model_dirs": "./model_Default"
}
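To narrow down whether the problem is the bind address or the app itself, it can help to query the endpoint from inside the container before going through the published port (a sketch; the container name here is a hypothetical placeholder, and the port mapping is assumed to be -p 5000:5000):

```shell
# From inside the running container (container name is hypothetical):
docker exec -it my_rasa_container curl -i http://127.0.0.1:5000/config

# From the host, through the published port:
curl -i http://127.0.0.1:5000/config
```

If the first works but the second doesn't, the server is up but not reachable through Docker's port mapping.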


Most helpful comment

I think I found a fix for this behavior.
It seems the 'de' model takes too long to load, so I added the preload flag:
"--preload
Load application code before the worker processes are forked. [False]"

And now everything works for me.

All 12 comments

@PHLF I've heard you talk about using gunicorn before, have you run Rasa with it? I unfortunately haven't and will be of little help on this one.

Let's see if I can be of any help 😄
@HansKannsNLP Did you try running RASA:

  • without gunicorn inside docker?
  • with gunicorn outside docker?

Let's try it :)

Without gunicorn inside docker? Yes, and it works fine.

With gunicorn outside docker? No.

Hi, I ran into a similar issue when deploying rasa with gunicorn.
Have you tried with gunicorn inside docker? If that works, it may mean it's something about your setup.
My particular issue was that it couldn't load the model located in a cloud bucket, but the logs were not clear about that. Do you have logs for all of these events?

INFO:root:Logging requests to '/app/logs/rasa_nlu_log-20170616-221901-35.log'.
INFO:root:Loading model 'current_en'...
INFO:root:Trying to load spacy model with name 'en'
INFO:root:Added 'nlp_spacy' to component cache. Key 'nlp_spacy-en'.
INFO:root:Finished setting up application

I got this log:

INFO:rasa_nlu.data_router:Logging requests to '/app/rasa_nlu_test/logs/rasa_nlu_log-20170630-053016-9.log'.
INFO:rasa_nlu.data_router:Loading model './model_Default'...
INFO:rasa_nlu.utils.spacy_utils:Trying to load spacy model with name 'de'
INFO:rasa_nlu.components:Added 'nlp_spacy' to component cache. Key 'nlp_spacy-de'.
INFO:rasa_nlu.wsgi:Finished setting up application

when I start rasa with gunicorn inside docker with just one worker and one thread.
As I have already mentioned, it only starts if I set both to one, and even then I can't connect to it.

@HansKannsNLP I don't see INFO:root:Loading model...

Hmm, sorry, maybe you just skipped over it @znat

INFO:rasa_nlu.data_router:Logging requests to '/app/rasa_nlu_test/logs/rasa_nlu_log-20170630-053016-9.log'.
INFO:rasa_nlu.data_router:Loading model './model_Default'...
INFO:rasa_nlu.utils.spacy_utils:Trying to load spacy model with name 'de'
INFO:rasa_nlu.components:Added 'nlp_spacy' to component cache. Key 'nlp_spacy-de'.
INFO:rasa_nlu.wsgi:Finished setting up application

Right @HansKannsNLP , indeed :)

Let me share how I made it work; maybe you can start from there.
Here are my Dockerfile and entrypoint.sh (note that I'm still using 0.9.0a1):

Dockerfile

FROM python:2.7-slim

ENV RASA_NLU_DOCKER="YES" \
    RASA_NLU_HOME=/app \
    RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python2.7/dist-packages

VOLUME ["${RASA_NLU_HOME}", "${RASA_NLU_PYTHON_PACKAGES}"]

# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
  build-essential \
  git-core && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR ${RASA_NLU_HOME}

COPY ./requirements.txt requirements.txt

# Split into pre-requirements, so as to allow for Docker build caching
RUN pip install $(tail -n +2 requirements.txt)

COPY . ${RASA_NLU_HOME}

RUN python setup.py install

RUN pip install spacy==1.8.2 gunicorn==19.7.1 sklearn-crfsuite==0.3.5 scikit-learn==0.18.1 scipy==0.19.0

RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-1.2.0/en_core_web_sm-1.2.0.tar.gz --no-cache-dir > /dev/null \
    && python -m spacy link en_core_web_sm en

RUN ls /app

EXPOSE 5000

ENTRYPOINT ["./entrypoint.sh"]
CMD ["help"]

Entrypoint.sh

#!/bin/bash

set -e
function print_help {
    echo "Available options:"
    echo " start commands (rasa cmd line arguments)  - Start RasaNLU server"
    echo " wsgi -w [workers] -t [threads]            - Start RasaNLU with multithreaded gunicorn server"
    echo " download {mitie, spacy en, spacy de}      - Download packages for mitie or spacy (english or german)"
    echo " start -h                                  - Print RasaNLU help"
    echo " help                                      - Print this help"
    echo " run                                       - Run an arbitrary command inside the container"
}

function download_package {
    case $1 in
        mitie)
            echo "Downloading mitie model..."
            python -m rasa_nlu.download -p mitie
            ;;
        spacy)
            case $2 in 
                en|de)
                    echo "Downloading spacy.$2 model..."
                    python -m spacy."$2".download all
                    echo "Done."
                    ;;
                *) 
                    echo "Error. Rasa_nlu supports only english and german models for the time being"
                    print_help
                    exit 1
                    ;;
            esac
            ;;
        *) 
            echo "Error: invalid package specified."
            echo 
            print_help
            ;;
    esac
}

case ${1} in
    start)
        exec python -m rasa_nlu.server "${@:2}" 
        ;;

    wsgi)
        exec gunicorn "${@:2}" -k gevent -b :5000 rasa_nlu.wsgi --timeout 120
        ;;
    run)
        exec "${@:2}"
        ;;
    download)
        download_package "${@:2}"
        ;;
    *)
        print_help
        ;;
esac

Then you should be able to run:

docker run -p 5000:5000 mrbotai/rasa_nlu wsgi -w 4 -t 6 (4 workers, 6 threads)

Now, to load my specific model, I use another image built on top of this one: that avoids having to rebuild everything every time I deploy.

FROM mrbotai/rasa_nlu

ENV RASA_PIPELINE="nlp_spacy,ner_crf,ner_synonyms" \
    RASA_SPACY_MODEL_NAME=en \
    RASA_EMULATE=parse \
    RASA_STORAGE=gcs \
    RASA_BUCKET_NAME="my_bucket_name" \
    RASA_SERVER_MODEL_DIRS="current_en" \
    GOOGLE_APPLICATION_CREDENTIALS=/app/my_cgs_creds.json

COPY my_cgs_creds.json /app/

RUN pip install google-cloud-storage==1.1.1
CMD ["wsgi","-w", "2","-t","6"]

which runs with docker run -p 5000:5000 rasa_nlu_yp

If this doesn't work, you can try using my public image: mrbotai/rasa_nlu. If it still doesn't work, the problem probably lies somewhere else in your setup. Let me know how it goes.

@znat thanks for the help! It will take a while until I can try it; I will post an update once I have.

OK, I have some more information. I compared your versions and commands with mine.
There are some differences, but for the moment I am not sure that's where the problem is.

The big news is that I can now launch gunicorn with 1 worker and 1 thread and reach it.
The following command worked:

gunicorn -w 1 --threads 1 -k gevent -b :5000 --log-level=DEBUG --timeout 120 rasa_nlu.wsgi

instead of:

gunicorn -w 1 --threads 1 -k gevent -b 127.0.0.1:5000 --log-level=DEBUG --timeout 120 rasa_nlu.wsgi

I only left the IP blank....
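This matches how Docker networking works: inside a container, 127.0.0.1 is the container's own loopback interface, which a port published with -p 5000:5000 never reaches. Leaving the host part of the bind address blank makes gunicorn listen on all interfaces, which is equivalent to binding to 0.0.0.0 explicitly (a sketch based on the commands above):

```shell
# ":5000" and "0.0.0.0:5000" are equivalent gunicorn bind addresses;
# both listen on all interfaces, so Docker's port mapping can reach them:
gunicorn -w 1 --threads 1 -k gevent -b 0.0.0.0:5000 --log-level=DEBUG --timeout 120 rasa_nlu.wsgi
```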

It also worked with more threads, but when I try to set up more workers I get a ton of these logs:

INFO:rasa_nlu.data_router:Logging requests to '/app/rasa_nlu_test/logs/rasa
INFO:rasa_nlu.data_router:Loading model './model_Default'...
INFO:rasa_nlu.utils.spacy_utils:Trying to load spacy model with name 'de'
[2017-07-07 05:52:38 +0000] [XXX] [INFO] Booting worker with pid: XXX

Again it seems that the application never finishes starting.
Maybe I'll have some time today to try your docker image @znat.

Greetings ;)

OK, I have some news.

  • If I use the image from @znat, it works (with the models and config from the image)
  • If I use the Dockerfile and try Python 2.7 or 3.6 with my MODELS and CONFIG, it does not work
  • If I use my own image and switch to the spacy model 'en', it starts all 4 workers with 6 threads

The command I ran was:
gunicorn -w 4 --threads 6 -k gevent -b :5000 --log-level=DEBUG --timeout 120 rasa_nlu.wsgi

So ... what do I need to do to run with 'de'? Any ideas?

I think I found a fix for this behavior.
It seems the 'de' model takes too long to load, so I added the preload flag:
"--preload
Load application code before the worker processes are forked. [False]"

And now everything works for me.
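For reference, the earlier failing command with --preload added would look like this (a sketch assembled from the commands in this thread, not verified against this exact setup):

```shell
# --preload loads the app (including the slow 'de' spacy model) once in
# the gunicorn master before forking, so each worker doesn't repeat the
# load and trip the worker timeout:
gunicorn -w 4 --threads 6 -k gevent -b :5000 --timeout 120 --preload rasa_nlu.wsgi
```

Note that with --preload, application code changes require a full restart rather than a worker reload.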
