I think the Rasa intent classification component should output the name(s) of the intent(s) it warns about for lacking enough training data. This would make it easy to identify the problem intent(s).
Hi there.
I'm interested in working on this issue. Could anyone describe it in more detail? Should I add a new endpoint that lists all intents that lack training examples?
Hello, I'd like to work on this as well-- has the situation changed at all since the issue was posted?
@madimov I haven't seen any work on this, so I'd be glad if you give it a try :+1:
@tmbo Great, I'm on it-- brainstorming an approach / solution and will comment back here soon
@tmbo One approach I can see so far is to:
- Catch the UndefinedMetricWarning (mentioned in issue #288) issued by sklearn/metrics/classification.py when the F-score is ill-defined due to insufficient training examples.
- Use the precision_recall_fscore_support function for it to provide such information (i.e. which intents are affected).
- In a few discussions about catching out-of-scope intents (e.g. issue #387), confidence thresholds have been suggested. At first glance, this seemed an (incomplete) solution to this issue as well, because in most cases of low confidence, more training examples are likely to help. That is a larger issue, however, and not foolproof.
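As a rough sketch of the first two points (the function name and the min_examples threshold below are my own for illustration, not existing Rasa or sklearn settings):

```python
# Illustrative sketch only: capture the sklearn warning and use the per-label
# support counts to name the intents that are short on evaluation examples.
import warnings

from sklearn.exceptions import UndefinedMetricWarning
from sklearn.metrics import precision_recall_fscore_support


def report_underrepresented_intents(y_true, y_pred, intent_names, min_examples=5):
    """Return (intent, example_count) pairs that look short on data.

    min_examples is an arbitrary illustrative threshold, not a Rasa setting.
    """
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", UndefinedMetricWarning)
        # support = number of true occurrences of each label, in intent_names order
        _, _, _, support = precision_recall_fscore_support(
            y_true, y_pred, labels=intent_names
        )

    warned = any(issubclass(w.category, UndefinedMetricWarning) for w in caught)
    flagged = [(name, int(n)) for name, n in zip(intent_names, support) if n < min_examples]

    if warned:
        for name, n in flagged:
            print(f"Intent '{name}' has only {n} example(s); "
                  "its F-score may be ill-defined - consider adding training data.")
    return flagged
```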
Do you have any thoughts on a preferred approach?
@tmbo @madimov Is this issue solved yet?
@nahidalam Not yet-- I didn't get feedback on my suggestions, but feel free to take on the task!
Unable to add more intents in rasa_nlu, even when using the rasa-nlu trainer. I need help.
I would really like to see this feature, as I currently get a series of 12 "UndefinedMetricWarning: F-score is ill-defined" warnings that I don't know how to respond to.
@nahidalam or anyone else who wants to take this one (sorry, I wasn't aware you were waiting for a reply here): I guess catching the warning and trying to figure out which intent caused it is a good approach :+1:
This may be of some use if anyone wants to implement this: https://stackoverflow.com/questions/43162506/undefinedmetricwarning-f-score-is-ill-defined-and-being-set-to-0-0-in-labels-wi
This question was also asked on SO; here's my answer: https://stackoverflow.com/a/51738593/1408476. It may help anyone who decides to tackle this.
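For anyone picking this up, here is a minimal sketch of the point those answers make (plain Python, not part of Rasa; the helper name and example labels are mine): the warning fires for labels that appear on one side (true or predicted) but not the other, so a simple set difference identifies which intents triggered it.

```python
# Hedged illustration: list the labels that can make the F-score ill-defined.
def labels_behind_fscore_warning(y_true, y_pred):
    true_labels = set(y_true)
    predicted_labels = set(y_pred)
    # Precision (and hence F-score) is ill-defined for labels never predicted;
    # recall (and hence F-score) is ill-defined for labels with no true samples.
    never_predicted = sorted(true_labels - predicted_labels)
    never_true = sorted(predicted_labels - true_labels)
    return never_predicted, never_true


# Example usage with made-up intent labels:
missing_pred, missing_true = labels_behind_fscore_warning(
    ["greet", "goodbye", "book_flight"],
    ["greet", "greet", "greet"],
)
print(missing_pred)  # ['book_flight', 'goodbye'] - candidates for more training data
```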
This functionality exists now.