Rasa: RuntimeError: Unable to initialize persistor

Created on 19 Jun 2018 · 12 comments · Source: RasaHQ/rasa

Rasa NLU version: master branch

Operating system (windows, osx, ...): Ubuntu 18.04 (local), Docker image

Content of model configuration file:

language: "fa"
pipeline:
  - name: "tokenizer_whitespace"
  - name: "ner_crf"
    features: [["low", "title"], ["bias", "suffix3"], ["upper"]]
  - name: "intent_featurizer_count_vectors"
  - name: "intent_classifier_tensorflow_embedding"
    intent_tokenization_flag: true
    intent_split_symbol: "_"

Issue: When specifying a project name to load with the --pre_load option, it throws the error below:

RuntimeError: Unable to initialize persistor

I put a print statement in the _read_model_metadata method of the project module (https://github.com/RasaHQ/rasa_nlu/blob/master/rasa_nlu/project.py#L200):

    def _read_model_metadata(self, model_name):
        print(model_name) # prints fallback
        if model_name is None:
            data = Project._default_model_metadata()
            return Metadata(data, model_name)
        else:
            if not os.path.isabs(model_name) and self._path:
                path = os.path.join(self._path, model_name)
            else:
                path = model_name

            # download model from cloud storage if needed and possible
            if not os.path.isdir(path):
                self._load_model_from_cloud(model_name, path)

            return Metadata.load(path)

It prints fallback, which is not the project name I passed to the --pre_load option:

vagrant@vagrant:~$ python -m rasa_nlu.server --path projects/ --pre_load bot    
fallback
2018-06-19 08:58:00 WARNING  rasa_nlu.project  - Using default interpreter, couldn't fetch model: Unable to initialize persistor
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/server.py", line 389, in <module>
    router._pre_load(pre_load)
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/data_router.py", line 176, in _pre_load
    self.project_store[project].load_model()
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/project.py", line 140, in load_model
    interpreter = self._interpreter_for_model(model_name)
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/project.py", line 197, in _interpreter_for_model
    metadata = self._read_model_metadata(model_name)
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/project.py", line 213, in _read_model_metadata
    self._load_model_from_cloud(model_name, path)
  File "/home/vagrant/.local/lib/python2.7/site-packages/rasa_nlu/project.py", line 251, in _load_model_from_cloud
    raise RuntimeError("Unable to initialize persistor")
RuntimeError: Unable to initialize persistor
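The error message is somewhat misleading: the real trigger appears to be that the resolved local model path does not exist, which sends _read_model_metadata into the cloud-storage branch, and with no remote storage backend configured there is no persistor to initialize. A minimal sketch of that failure chain (a hypothetical helper illustrating the traceback above, not Rasa's actual code):

```python
import os

def read_model_metadata_sketch(project_path, model_name, persistor=None):
    """When the resolved model directory is missing locally, the code
    tries to fetch it from remote storage; without a configured
    persistor that attempt fails with the error seen above."""
    path = os.path.join(project_path, model_name)
    if not os.path.isdir(path):
        if persistor is None:  # no remote storage backend configured
            raise RuntimeError("Unable to initialize persistor")
        persistor.retrieve(model_name, path)  # hypothetical fetch call
    return path
```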

All 12 comments

You also have to specify the model name.

I've clarified the problem source:

The --pre_load option behaviour:

  • if you specify a project name, it loads all models of that project
  • if you specify a model name, it accepts it but doesn't load any model; you have to load it programmatically or with an HTTP request:
    e.g. curl -XPOST localhost:5000/parse -d '{"q":"hello there"}'
    loads all models under the default project
    e.g. curl -XPOST localhost:5000/parse -d '{"q":"hello there", "project":"hotels"}'
    loads all models under the hotels project
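The curl calls above can also be issued programmatically; a minimal stdlib sketch (host and endpoint taken from the examples above):

```python
import json
from urllib import request

def build_parse_payload(text, project=None):
    """Build the JSON body used by the curl examples above."""
    payload = {"q": text}
    if project is not None:
        payload["project"] = project
    return payload

def parse(text, project=None, host="http://localhost:5000"):
    """POST to /parse, equivalent to the curl commands above."""
    body = json.dumps(build_parse_payload(text, project)).encode("utf-8")
    req = request.Request(host + "/parse", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```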

My command to train a model named botmodel under project bot is:

python -m rasa_nlu.train \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --data /chatbot/interpreter/data/fa.training-data.json \
         --path /chatbot/nlu \
         --project bot \
         --fixed_model_name botmodel \
         --debug \
         --verbose

In this case the trained model is saved under the directory /chatbot/nlu/bot/botmodel, and when I run the server with the command below:

python -m rasa_nlu.server \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --path /chatbot/nlu \
         --pre_load bot \
         --debug \
         --verbose

it throws the error this issue is about:

RuntimeError: Unable to initialize persistor

Or, specifying botmodel as the model name:

python -m rasa_nlu.server \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --path /chatbot/nlu \
         --pre_load botmodel \
         --debug \
         --verbose

the rasa_nlu HTTP server launches, but an HTTP request like the one below returns an error:

$  curl -XPOST localhost:5000/parse -d '{"q":"hello there","project":"bot"}'
{
  "error": "Unable to initialize persistor"
}

When I train without specifying a model or project name:

python -m rasa_nlu.train \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --data /chatbot/interpreter/data/fa.training-data.json \
         --path /chatbot/nlu \
         --debug \
         --verbose

It saves the model under the default project, with an auto-generated model name like model_20180619-102632.
When running the server with or without specifying a model or project name, it works fine!

python -m rasa_nlu.server \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --path /chatbot/nlu \
         --debug \
         --verbose
python -m rasa_nlu.server \
         --config /chatbot/interpreter/config/fa.config.yaml \
         --path /chatbot/nlu \
         --pre_load default \
         --debug \
         --verbose
# or: --pre_load model_20180619-102632

I also tested renaming the default project directory to default1, and that works fine; but renaming the model directory to anything else raises the error!

By the way, the hard-coded prefix that generates model names like model_20180619-102632 is the source of the problem: I renamed the botmodel directory (the failing case above) to model_20180619-102632 (a name generated by another training run, which works without error), and then it works fine!
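If the diagnosis above is right, the rule can be captured as a quick check: a fixed model name is only discovered when it carries the model prefix (the exact prefix is my assumption from the experiments above), so a workaround would be to pass a --fixed_model_name that starts with it, e.g. model_bot (a hypothetical name, not from this thread):

```python
def will_be_discovered(model_dir_name):
    """Suspected discovery rule from the experiments above: only
    directory names starting with 'model' are picked up (assumed prefix)."""
    return model_dir_name.startswith("model")
```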

On a similar note, I would like to load all projects, without having to specify the project to be loaded. Is that possible currently? Simply passing all to pre_load doesn't seem to work.

@parthsharma1996 I think not; you can load all models under a project, but you cannot load all projects. If you don't specify a project name, it loads the default project; otherwise you have to specify a project name to load.

So if you have a customized project name, specify the model name too; that worked on my side.

Sounds like the issue has been solved! 😄 Can we close this, or do you need more help?

@akelad We always need more help on rasa_core and rasa_nlu.

@akelad This bug still exists; it gives me the RuntimeError: Unable to initialize persistor error when setting project_name and fixed_model_name with Python-based or command-line training.

OK, given that our SDK branch will be merged soon and the code in the server script will be rewritten, I'd say we close this issue for now. If the issue still persists once that is merged and the new Core version is released, please create a new issue.

I had the same problem:

➜ rasa curl -XPOST localhost:5000/parse -d '{"q":"hola queria saber sobre un plan", "model": "default"}'
{
"error": "Unable to initialize persistor"
}

>>> rasa_nlu.__version__
'0.14.0a1' 

The same error with model current:

➜  rasa curl -XPOST localhost:5000/parse -d '{"q":"hola queria saber sobre un plan", "model": "current"}'
{
  "error": "Unable to initialize persistor"
}

When I passed both the project and the model name, it worked:

➜  rasa curl -XPOST localhost:5000/parse -d '{"q":"hola queria saber sobre un plan", "project": "default", "model": "renobot"}'
{
  "intent": {
    "name": "plan_ahorro",
    "confidence": 0.47679711036612765
  },
  "entities": [
    {
      "start": 27,
      "end": 31,
      "value": "plan",
      "entity": "plan_ahorro",
      "confidence": 0.8629572222041435,
      "extractor": "ner_crf"
    }
  ],
  "intent_ranking": [
    {
      "name": "plan_ahorro",
      "confidence": 0.47679711036612765
    },
    {
      "name": "compra_auto:",
      "confidence": 0.37989383247607295
    },
    {
      "name": "saludo",
      "confidence": 0.14330905715779946
    }
  ],
  "text": "hola queria comprar un plan",
  "project": "default",
  "model": "renobot"
}

How I generated my models (note the --fixed_model_name parameter):

python -m rasa_nlu.train -c config.yml --data nlu.md --path projects --fixed_model_name renobot 

Still got the problem now :(
When I try to use the --fixed_model_name param, I always get this error.

Same here
