Rasa: Exception during finetune request

Created on 11 Sep 2018 · 16 comments · Source: RasaHQ/rasa

Rasa Core version: 0.11.3
Python version: 3.6
Operating system (windows, osx, ...): Ubuntu 18.04
Issue:
When sending events for model finetuning, I get this error:

2018-09-11 08:54:19 ERROR    rasa_core.server  - Caught an exception during prediction.
Traceback (most recent call last):
  File "/home/tymoteusz/bot/venv/lib/python3.6/site-packages/rasa_core/server.py", line 387, in continue_training
    batch_size=batch_size)
  File "/home/tymoteusz/bot/venv/lib/python3.6/site-packages/rasa_core/agent.py", line 425, in continue_training
    **kwargs)
  File "/home/tymoteusz/bot/venv/lib/python3.6/site-packages/rasa_core/policies/ensemble.py", line 192, in continue_training
    self.training_trackers.extend(trackers)
AttributeError: 'NoneType' object has no attribute 'extend'

It seems like this variable is set here: https://github.com/RasaHQ/rasa_core/blob/master/rasa_core/policies/ensemble.py#L64, but that line is never executed in my setup (running the server with rasa_core.run).

Content of domain file (if used & relevant):



All 16 comments

Thanks for raising this issue, @Ghostvv will get back to you about it soon.

Could you please give the exact command that you use, so I can reproduce the error?

@Ghostvv, sure:

python -m rasa_core.run --enable_api -d models/default/dialogue -p 3010 --endpoints endpoints.yml --cors '*' -c 'rest'

but this command doesn't call continue_training

Yes, this command starts a server, and I'm making a POST request to the /finetune endpoint with an array of events. I'm building a web interface on top of the interactive learning backend, which is why I have to use this endpoint.
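
For reference, here is a minimal sketch of the kind of request I'm sending, assuming the server started with the command above (port 3010); the event payload is only illustrative, the real events come from the web interface:

import requests

# Illustrative events only; the real list is captured by the web interface.
events = [
    {"event": "user", "text": "hello"},
    {"event": "action", "name": "utter_greet"},
]

# POST the events to the /finetune endpoint of the running rasa_core server
response = requests.post("http://localhost:3010/finetune", json=events)
print(response.status_code, response.text)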

I see, so the problem is that in this case continue_training is called without train, so the ensemble doesn't have the original training trackers. @tmbo could you please look into it?

We do need the trackers though, right? Otherwise we can't sample for the finetune, @Ghostvv?

In that case I am not sure how we get around this. We somehow need to get the training data again from somewhere.

@tmbo I'd say even more so. I'm not sure whether it is also true for keras_policy, but for embedding_policy the policy has to be ready for continued training, which means it has to be pretrained; we simply do not persist enough information for this. In other words, continue_training has to be called on the same instance on which train was called beforehand.

In general, this kind of "fine-tuning" is not really a good strategy and it is better to fully retrain the model. It was introduced for interactive learning.

Possible solutions would be:

  1. persist all necessary info
  2. block continue_training if it was not trained on the same instance (see the sketch below)
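
A minimal sketch of what option 2 could look like, using the attribute and method names from the traceback above; the surrounding PolicyEnsemble class is heavily simplified and the exact signature is an assumption:

class PolicyEnsemble(object):
    def __init__(self, policies, training_trackers=None):
        self.policies = policies
        # None when the ensemble was loaded from disk instead of trained
        self.training_trackers = training_trackers

    def continue_training(self, trackers, domain, **kwargs):
        # Reject the finetune request with a clear error instead of the
        # AttributeError raised by self.training_trackers.extend(trackers)
        if self.training_trackers is None:
            raise ValueError(
                "Cannot continue training: this ensemble was loaded from "
                "disk and has no training trackers. Call train(...) on the "
                "same instance first, or fully retrain the model.")
        self.training_trackers.extend(trackers)
        for policy in self.policies:
            policy.continue_training(self.training_trackers, domain, **kwargs)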

Same issue when calling /finetune with an events array, so what is the solution?

Before calling finetune, the policy class has to be trained with its train(...) method, not loaded from disk with load(...).

Same issue when calling /finetune with an events array, so what is the solution?

Don't call /finetune. :) That is what I did; it is not essential for making interactive learning useful. In the next Rasa release there will be an option to disable finetune during interactive learning.

Hi @Ghostvv, I checked the source code, but how do I trigger the policy's train(...) method before calling /finetune? There is no other endpoint that can reach the train(...) method, so should we regard this as a bug that the Rasa team needs to fix?

As said, there is no clear path forward yet on how to fix this. As @Ghostvv said, the model needs some information from training that is not persisted, so it cannot be finetuned without that information.

One next step, though, should be to fix the exception and reject the finetuning request if the model is not ready for finetuning.

The only way to get around this exception right now is to start the server with a model that has just been trained, as is done by the online learning CLI.
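
For anyone hitting this, a rough sketch of that workaround: train the agent in the same process so the ensemble keeps its training trackers, rather than loading a persisted model. The calls follow the rasa_core 0.11-era training flow; treat the exact signatures and file names as assumptions:

from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy

# Train in this process so continue_training can reuse the training trackers
agent = Agent("domain.yml", policies=[MemoizationPolicy(), KerasPolicy()])
training_data = agent.load_data("data/stories.md")
agent.train(training_data)
agent.persist("models/default/dialogue")

# Serve this freshly trained agent instead of loading the persisted model with
# rasa_core.run; that is what the online learning CLI does.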

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed due to inactivity. Please create a new issue if you need more help.

