I have implemented the onDefault action to handle all the utterances that are not matched elsewhere.
I have added a grammar/spell checker so that I can fix user typos: for example, quute by adele is not understood by LUIS, but a simple correction pass can turn it into the right form, quote by adele.
At this point I suggest the correction to the user by prompting:
var msg = "You typed " + result.text + "\nDid you mean \""+result.correction+"\"?"
BotBuilder.Prompts.text(session, msg );
As soon as the user types yes I would like to re-enter the dialog with this query, but I do not see a way to do this:
function (session, results) {
    var response = results.response;
    if (response == 'yes') {
        //@TODO redo query
        session.beginDialog('/');
    } else {
        session.send(self.getLocalizedString('RESPONSE_DISCARD', 'RESPONSE_DISCARD'));
    }
}
From my understanding, I can define redirects like
.on('DeleteTask', '/deleteTask')
Using the Session userData object I can add parameters like
session.userData.correct = {
text : result.text,
correct : result.correction
};
in a session, and that's OK, but how do I also properly forward the user utterance in order to force the right redirect, i.e. force the LUIS dialog to the right route programmatically? Or, to put it differently:
Given an utterance, is there a way in the Bot Framework API to programmatically get the matching Intent objects back from the LUIS dialog model?
Here are the callbacks in the onDefault handler:
self.dialog.onDefault([
    function (session, args, next) {
        // Check query grammar
        var message = session.message.text;
        console.log("Checking grammar on %s", message);
        self.languageBot.spellChecker(message, function (result, error) {
            if (error) { // grammar error
                self.logger.error("Failed to check grammar for %s", message);
                session.send(self._options.locale['RESPONSE_NOTUNDERSTOOD']);
            } else { // grammar passed
                if (result.text != result.correction) { // misspelling
                    var msg = "You typed " + result.text + "\nDid you mean \"" + result.correction + "\"?";
                    session.userData.correct = {
                        text: result.text,
                        correct: result.correction
                    };
                    BotBuilder.Prompts.text(session, msg);
                } else {
                    session.send(self._options.locale['RESPONSE_NOTUNDERSTOOD']);
                }
            }
        });
    },
    function (session, results) {
        var response = results.response;
        if (response == 'yes') {
            //@TODO redo query
            session.beginDialog('/');
        } else {
            session.send(self.getLocalizedString('RESPONSE_DISCARD', 'RESPONSE_DISCARD'));
        }
    }
]);
Interesting... First let me answer your core question and then I have one other suggestion...
To answer your question, you'd want to use session.replaceDialog() instead of session.beginDialog(); that will unload and reload the current dialog. To apply the correction, just assign it to session.message.text before calling replaceDialog(). That should do what you're trying to do.
My other suggestion is to use Prompts.confirm() instead of Prompts.text(). This prompt already knows to look for several variants of yes & no, and the returned results.response will be a simple Boolean, so it's way easier to check.
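A minimal sketch combining both suggestions, reusing the session.userData.correct shape from the snippets above (the step function names here are hypothetical, not part of the SDK):

```javascript
// Step 1: store the correction and ask for confirmation.
// (BotBuilder is the botbuilder module already imported in this thread.)
function askCorrectionStep(session, result) {
    session.userData.correct = { text: result.text, correct: result.correction };
    var msg = 'You typed "' + result.text + '"\nDid you mean "' + result.correction + '"?';
    BotBuilder.Prompts.confirm(session, msg); // results.response will be a Boolean
}

// Step 2: if confirmed, overwrite the incoming message text and
// unload/reload the dialog so LUIS recognition runs again.
function applyCorrectionStep(session, results) {
    if (results.response) {
        session.message.text = session.userData.correct.correct;
        session.replaceDialog('/');
    } else {
        session.send('OK, keeping your original text.');
    }
}
```

These two functions would go into the onDefault waterfall in place of the Prompts.text() call and the second callback shown earlier.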
So I guess I have one more suggestion. You can teach LUIS to recognize these misspellings. When you first train your LUIS model it's going to pretty much suck. The idea is you have to regularly go back and update your model by labeling all of the utterances it's been seeing, including the ones with misspellings, and then retrain and deploy a new model. If you do this it will quickly get better, and eventually it'll handle pretty much any misspelling the user throws at it. Honestly you shouldn't need to add your own spell-checking logic as a way to deal with unrecognized intents. Just be religious about updating your model every day the first few weeks/months your bot is deployed.
It's also useful to just let a few people kick the tires by throwing your bot some messages you didn't think of. It's going to suck at first but it will quickly get better.
@Stevenic thanks, I'm going to change the current implementation to use replaceDialog, overwriting session.message.text first.
Yes, absolutely, that was my second option: retrain on misspellings. This leads me to a new question. When deployed it could be hard to keep it completely "supervised", I mean labeling misspellings to the right intent on the LUIS dashboard and doing this manually.
So, let's suppose I want to keep the grammar check and let users crowd-label those misspellings, like quote by neil yoong or what song says snait peter won't call by name, etc.
At that point my problem turns into: can I programmatically submit to LUIS the association of an utterance to a certain intent? If so, I could do automatic labeling of errors corrected by an automatic tool and reviewed by real users, so I would be sure to have more than acceptable accuracy, and get something like saint peter won't call my name fixed automatically once LUIS has been retrained on it.
Of course this implies something like an auto-publishing feature as well...
Thank you.
So I believe LUIS has an API for publishing label data. Unfortunately they don't have a direct way of letting you add people to a project so they can help retrain the model. What you can do is export your model, which people can then add their labeled utterances into. But they'll need to either hand-label them or import them into LUIS themselves, label them, and then export them back out. All very nasty-sounding tasks...
To be honest I don't think the retraining tasks will be that bad. LUIS does a good job of recommending the utterances it thinks it had issues with. To start, just check that list every day and things should stabilize pretty quickly. Once they do, check it once a week for a month or two; after that you'll probably have pretty good coverage and shouldn't really have to think about it anymore.
@Stevenic thanks for your help. Your answer makes sense.
I have also posted the question to the LUIS forum: https://social.msdn.microsoft.com/Forums/azure/en-US/99ed8e2d-2b08-472f-a098-bae5ad1d5f16/how-to-submit-new-labels-prain-and-publish-the-model-programmatically?forum=LUIS (funny thing: a typo in the title, prain instead of train)
Regarding replaceDialog, it works perfectly: I was able to overwrite the value of message.text in the session and then restart the dialog based on it (very useful!). I have also replaced Prompts.text() with Prompts.confirm(), in order to evaluate results.response in the waterfall callbacks as a Boolean.
About this, I have seen that the docs do not specify which kinds of utterances it supports. Since I have built an Amazon Echo skill, I wonder if there is something similar to the built-in (pre-built) YesIntent that is trained on well-known confirmation utterances, and if there is a way to extend it with custom ones.
The same would go for a NoIntent, etc.
Thank you.
No... We don't have any sort of pre-built intents yet. I completely agree they would be useful, but there's an issue for us: where does the LUIS model run? LUIS is a subscription-based service, so it potentially costs money to run models. The Prompts class lets you replace the recognizer used, and my thought was to potentially provide a LuisPromptRecognizer class which would use a LUIS model to drive the built-in prompts. You would need to deploy a provided model to LUIS, though, and run it from your subscription, so there would be a workflow needed to set it up.
Got it, thanks! So you are referring to the Prompts options here.
This would be a good opportunity to have built-in intents for this new LUIS "prompt" model. Given that, using compareConfidence would make the dialog.onDefault handler easier to manage.
My aim, in this "natural language understanding bot" scenario, is not to leave the end user without any assistance, and to give LUIS at least a minimal understanding of wrong utterances, which is exactly what the Prompt docs suggest, i.e. handling things like "quit" or "what can I say?".
Yes... We're planning to add built-in ways of providing answers for "what can I say?" and so forth, but all good feedback.
Closing this for now.
@Stevenic ok, thank you.
For the record, some cases of typos in a text bot:
You typed qquote by queen
Did you mean "Quote by queen"?
yes
{ firstRun: true,
corrections: { text: 'qquote by queen', result: 'Quote by queen' } }
I have found this quote in "Voodoo" by Queen
"And it's all absurd to me"
and
get my favourites
You typed get my favourites
Did you mean "Get my favorites"?
yes
{ firstRun: true,
corrections: { text: 'get my favourites', result: 'Get my favorites' } }
I have found 367 favorites. Your last favorite songs is "È tutto un attimo" by Anna Oxa. Do you want get the lyrics?
Typos are more frequent than we might think if your user interface is a keyboard rather than a mouse, so I strongly suggest a simple API to send new tuples (badUtterance, intentName) to LUIS, like:
("get my favourites", "UserFavoritesIntent"), ("qquote by queen", "SnippetIntent"), etc.
This is basically a manually labeled system assisted by an automatic syntax-correction system.
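To sketch what such an API call could look like from Node: the LUIS authoring service does expose endpoints for adding labeled examples, but the path and payload shape below are assumptions from memory, not verified against the current docs. The helper only builds the request object; actually sending it, retraining, and publishing would be separate steps.

```javascript
// Hypothetical request builder for submitting a (badUtterance, intentName)
// tuple as a labeled example. Host, path, and body shape are assumptions;
// check the LUIS programmatic (authoring) API reference before using.
function buildLabelRequest(appId, versionId, badUtterance, intentName) {
    return {
        method: 'POST',
        host: 'westus.api.cognitive.microsoft.com', // assumed authoring region
        path: '/luis/api/v2.0/apps/' + encodeURIComponent(appId) +
              '/versions/' + encodeURIComponent(versionId) + '/example',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: badUtterance, intentName: intentName })
    };
}

// e.g. buildLabelRequest(appId, '0.1', 'get my favourites', 'UserFavoritesIntent')
```

The returned object could be passed to Node's https.request() with the subscription key added as a header.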
Hi @Stevenic. session.replaceDialog does not seem to trigger the intent match again.
Mine is a similar case, but instead of spell check, I am trying the following sequence:
On a local prompt question, instead of answering the prompt, the user chooses one of the global utterances (on which LUIS is trained).
In the prompt results we notice that the user has not chosen the standard answer but given a general utterance.
I did a session.replaceDialog('/');
It is supposed to pass the current message to LUIS and invoke the rest of the intent-matching waterfall sequences.
But the waterfall sequence is not being invoked.
These sequences are invoked if the user directly provides the same utterances outside the prompt loop.
var c = builder.EntityRecognizer.findBestMatch(globals.commodities, input, 0.2);
if (!c) {
    session.send(prompts.notPicked);
    session.replaceDialog('/');
    // now the dialog intent matching / default waterfall sequence should be invoked
}
The call to the LUIS API is not getting invoked.
How do I invoke the call to the LUIS API again?
Sorry about that... This was a behavior change to fix a bug that was tripping everyone up. When you create your IntentDialog, make sure you set recognizeMode to RecognizeMode.onBegin, so:
var dialog = new builder.IntentDialog({ recognizeMode: builder.RecognizeMode.onBegin, recognizers: [] });
Thanks Steve, this fixes the problem.
BTW, would you know how long the session object stays in memory? How can we clean up the memory for chats that have gone idle?
@Stevenic hi Steve, I am implementing a food bot using LUIS. I want to take input from the user and give an answer; if the user enters incomplete info, I want to reply with another dialog prompt, which I can see in the LUIS JSON. But how can I get those response dialogs in my Node.js code? I cannot get them in botbuilder's intent-handler response. My sample JSON is here:
{
"query": "get 5 with jumbo size ",
"topScoringIntent": {
"intent": "getOrder",
"score": 0.9999304,
"actions": [
{
"triggered": false,
"name": "getOrder",
"parameters": [
{
"name": "Food Name",
"type": "foodName",
"required": true,
"value": null
},
{
"name": "Size",
"type": "size",
"required": true,
"value": null
},
{
"name": "quantity",
"type": "quantity",
"required": true,
"value": [
{
"entity": "5",
"type": "quantity",
"resolution": {}
}
]
}
]
}
]
},
"intents": [
{
"intent": "getOrder",
"score": 0.9999304,
"actions": [
{
"triggered": false,
"name": "getOrder",
"parameters": [
{
"name": "Food Name",
"type": "foodName",
"required": true,
"value": null
},
{
"name": "Size",
"type": "size",
"required": true,
"value": null
},
{
"name": "quantity",
"type": "quantity",
"required": true,
"value": [
{
"entity": "5",
"type": "quantity",
"resolution": {}
}
]
}
]
}
]
},
{
"intent": "None",
"score": 0.07642237
},
{
"intent": "Help",
"score": 9.138411E-07
}
],
"entities": [
{
"entity": "5",
"type": "quantity",
"startIndex": 4,
"endIndex": 4,
"score": 0.7467653,
"resolution": {}
}
],
"dialog": {
"prompt": "which food do you want?",
"parameterName": "Food Name",
"parameterType": "foodName",
"contextId": "ae5de259-6a9b-476c-bbb8-1be7fceba761",
"status": "Question"
}
}
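One hedged workaround for the question above, assuming the botbuilder intent handler doesn't surface the action dialog: call the LUIS endpoint yourself (or keep the raw response) and read its `dialog` field, whose shape is taken from the JSON pasted above. The helper below is a sketch, not an official API:

```javascript
// Extract the follow-up prompt from a raw LUIS response when an action
// still has unfilled required parameters (dialog.status === "Question").
// Field names mirror the sample JSON above.
function nextPrompt(luisResponse) {
    var d = luisResponse.dialog;
    if (d && d.status === 'Question') {
        return { prompt: d.prompt, parameter: d.parameterName, contextId: d.contextId };
    }
    return null; // all required parameters are filled; proceed with the order
}
```

With the sample response above this would yield the "which food do you want?" prompt for the missing Food Name parameter; presumably the contextId would then be sent back with the user's next query to continue the action dialog, though that part of the flow is an assumption worth verifying against the LUIS action-binding docs.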