Description of Problem:
Having to retrain the Core model every time the templates change is unnecessary and can be very expensive for a large project with many stories.
Overview of the Solution:
Create a separate, more granular fingerprint for the templates section of the domain. If only that fingerprint has changed, simply replace the templates section in the new model instead of retraining Core.
Examples (if relevant):
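A minimal sketch of the idea (the function names and the use of the `templates` domain key are my assumptions, not the actual Rasa implementation): hash the templates section separately from the rest of the domain, so that a change touching only templates does not alter the Core-relevant fingerprint.

```python
import hashlib
import json

def templates_fingerprint(domain: dict) -> str:
    """Hypothetical helper: hash only the templates section of the domain."""
    templates = domain.get("templates", {})
    raw = json.dumps(templates, sort_keys=True).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

def core_fingerprint(domain: dict) -> str:
    """Hypothetical helper: hash everything except templates,
    since template text does not affect Core training."""
    rest = {k: v for k, v in domain.items() if k != "templates"}
    raw = json.dumps(rest, sort_keys=True).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

def needs_core_retraining(old_domain: dict, new_domain: dict) -> bool:
    """Core only needs retraining if something other than templates changed."""
    return core_fingerprint(old_domain) != core_fingerprint(new_domain)
```

If `needs_core_retraining` returns `False` but the templates fingerprint differs, the cached Core model could be reused with just its templates section swapped out.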
Blockers (if relevant):
Definition of Done:
This is the default behavior; the way to avoid retraining with template changes is to use an External CMS.
If there are no changes to the domain or the training data Rasa already prevents redundant training.
I don't think the choice of whether or not to use an external CMS should penalize you with unnecessary training time. A change to a template's text or triggers does not change the outcome of the Core model training, so retraining is unnecessary, don't you think?
@msamogh this should be in progress, shouldn't it?
Yes. Thanks for noticing!
@btotharye in addition to what the issue description says, someone mentioned recently that NLU sometimes retrains when parts of the domain are changed as well. So we should look into what's going on there.
So @btotharye is gonna finish up https://github.com/RasaHQ/rasa/pull/4251 ?
Hrm, didn't know that was already out there. Should I start with that PR code?
Why isn't @msamogh finishing this up and you @btotharye tackle the NLU part from @akelad 's comment?
I'm good with that if everyone agrees
haha my bad - i assumed this wasn't being tackled since @msamogh had unassigned himself
@btotharye have you looked into the other part yet?
I'm closing this issue now since the NLU part is covered in #4691.