I'm running the /data/example/rasa example as described in the tutorial.
I'm trying to understand why I don't get 100% confidence when I send the same text that I used during training.
curl -XPOST localhost:5000/parse -d '{"q":"I am looking for mexican indian fusion"}' | python -mjson.tool
{
    "entities": [
        {
            "end": 38,
            "entity": "cuisine",
            "extractor": "ner_mitie",
            "start": 17,
            "value": "mexican indian fusion"
        }
    ],
    "intent": {
        "confidence": 0.9046218833485684,
        "name": "restaurant_search"
    },
    "text": "I am looking for mexican indian fusion"
}
The confidence should be 1, right? Why is it 0.9?
I trained it with the exact same words...
When using Wit or API, I get 1 in those scenarios.
What am I doing wrong?
Thanks
To quote @wrathagom from Gitter:
It's not just a straight lookup, so no the confidence shouldn't be 1.
Don't compare Rasa too closely with other services from a technical point of view: even though they follow similar core ideas, they aren't backed by the same technologies, so you should expect them to perform differently on the same dataset.
I really advise you to read @amn41's blog post: it will give you better insight into how everything works.
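To illustrate the "not a straight lookup" point: the intent classifier outputs a probability distribution over all intents, and unless one class's raw score is infinitely larger than the rest, no intent can receive exactly 1.0. A minimal sketch (the softmax, intent names, and score values below are purely illustrative, not Rasa's actual model):

```python
import math

def softmax(scores):
    """Turn raw classifier scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical raw scores for three intents (made-up values)
scores = {"restaurant_search": 4.2, "greet": 0.3, "goodbye": -1.1}
probs = dict(zip(scores, softmax(list(scores.values()))))

confidence = max(probs.values())
# the winning intent gets most of the probability mass,
# but the other intents always keep a nonzero share
```

Even a training example scored very highly will come out as something like 0.9-something, because the remaining intents still absorb a sliver of probability.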
I have a similar question about this. Is it possible to have balanced confidence? I mean, if an entity appears in two different intents, will the confidence be 50% for each intent? In my training data each intent has the same number of examples containing that entity, but the model always gives a high confidence to one intent.
The intent classification doesn't rely on the entities present - they are two separate things.
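To sketch why a balanced entity doesn't yield a 50/50 split: intent confidence comes from the words of the whole sentence, not from an entity tally. The toy bag-of-words naive Bayes classifier below (an illustration only, not Rasa's pipeline; the intents and sentences are invented) gives the entity word "mexican" equal counts in both intents, yet the remaining words of the query still pull the confidence strongly toward one intent:

```python
import math
from collections import Counter

# toy training data: "mexican" appears exactly twice in each intent
train = {
    "restaurant_search": ["find me a mexican restaurant",
                          "looking for mexican food"],
    "book_table": ["book a table at the mexican place",
                   "reserve mexican tonight"],
}

def fit(data):
    """Collect per-intent word counts for multinomial naive Bayes."""
    vocab = {w for texts in data.values() for t in texts for w in t.split()}
    counts = {intent: Counter(w for t in texts for w in t.split())
              for intent, texts in data.items()}
    totals = {intent: sum(c.values()) for intent, c in counts.items()}
    return vocab, counts, totals

def classify(text, vocab, counts, totals):
    """Return a probability per intent, with Laplace smoothing."""
    logp = {}
    for intent in counts:
        lp = 0.0
        for w in text.split():
            lp += math.log((counts[intent][w] + 1)
                           / (totals[intent] + len(vocab)))
        logp[intent] = lp
    # normalize the log-probabilities into a distribution
    m = max(logp.values())
    exps = {i: math.exp(l - m) for i, l in logp.items()}
    z = sum(exps.values())
    return {i: e / z for i, e in exps.items()}

vocab, counts, totals = fit(train)
# "mexican" is balanced across intents, but "looking" and "for"
# only occur in restaurant_search examples, so that intent wins
probs = classify("looking for mexican", vocab, counts, totals)
```

The non-entity words decide the outcome here, which is why equal entity counts per intent don't translate into equal confidence.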
@oximer I'm closing this because I think your question has been answered; if not, let us know.