I have several LUIS instances that use composite entities. Occasionally, LUIS will not return any composite entities, but it does return their children. Oddly enough, this works against the trained model, but even after publishing that same trained model, the published endpoint still doesn't return the composite entity itself.
_None_; this only involves interfacing with LUIS.
Entities should be tagged appropriately, as they are in training.
The composite entities are not tagged.
Note that this sometimes works and sometimes doesn't. For instance, right now it works on the published version but not on the trained version.
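To illustrate the difference, here is a rough sketch (as Python dicts) of the two response shapes I'm describing. This assumes the v2 prediction response format; the entity names are from my app, but the utterance values are only placeholders.

```python
# Rough illustration only: field names assume the LUIS v2 prediction response,
# and the values ("bond") are placeholders rather than real data from my app.

# What I expect (and what the trained model shows): the composite plus its child.
expected = {
    "entities": [
        {"entity": "bond", "type": "Argument"},
    ],
    "compositeEntities": [
        {
            "parentType": "FilterItem",
            "value": "bond",
            "children": [{"type": "Argument", "value": "bond"}],
        }
    ],
}

# What the published endpoint sometimes returns: the child only, no composite.
observed = {
    "entities": [
        {"entity": "bond", "type": "Argument"},
    ],
    "compositeEntities": [],
}
```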
Can you provide some screen captures of the behavior that you're seeing? Additionally, can you provide the JSON model for your LUIS app?
I can't add the JSON for the LUIS model, but here is a screenshot: http://i.imgur.com/DB8nCDL.png
Clearly, the trained and published versions are using the same model (see the timestamps posted).
@MathBunny Is your composite entity supposed to encompass your Security and FixedIncomeSecurityType entities? Just to check via a different route, have you tried sending your utterances directly to your LUIS endpoints?
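For example, something along these lines (just a sketch; the region, app ID, subscription key, and utterance are placeholders, and this assumes the v2 endpoint format):

```python
import requests

# Placeholders: substitute your own region, app ID, subscription key, and utterance.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
PARAMS = {
    "subscription-key": "<subscription-key>",
    "verbose": "true",
    "q": "<one of the utterances showing the discrepancy>",
}

response = requests.get(ENDPOINT, params=PARAMS).json()

# Compare what the endpoint returns with what the training screen shows:
# the composite should appear under "compositeEntities" along with its children.
print(response.get("entities"))
print(response.get("compositeEntities"))
```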
No, the Filter Item (composite) is supposed to encompass the Argument. However, the published model just labels the Argument and does not show that it's part of the composite. The trained model does this correctly.
The curly braces around [$Argument] mean that the Argument entity is wrapped inside a composite entity. If you press Ctrl+E, you can switch between the label views. The default view shows the entities except for composite entities; the first Ctrl+E shows only the composite entities, and the next Ctrl+E shows the tokens (the actual words).
If you want to select the view manually, you can use the dropdown menu on the right side of that training window (above the scores). Below is a screencap of what the menu looks like:
The view doesn't matter. As I showed in the screenshot, one shows the entity prediction as both Argument and Filter Item, while the other shows just Argument. The published model doesn't show me the composite type.
This isn't a matter of the view, I get that; but if you look at the tooltip, it clearly outlines the difference.
My problem is that while the trained model and the published model are exactly the same, they're returning different results.
My mistake, @MathBunny. @DeniseMak, do you have any idea why this might be happening?
@MathBunny if you train your model with more utterances, the red underline (and tooltip) should eventually go away as the two models (your trained and published models) become more aligned.
As to _why_ the models aren't providing the same predictions when you train and publish the app within the span of a few seconds, I can't say. I'll send an inquiry to the LUIS team and see if they've received any previous reports about this issue.
Closing this issue; I'll update this thread when I get more info. If you've trained and retrained the models but are still running into this problem, feel free to reopen this issue.
@MathBunny can you type into the labeling component (the area where you enter new utterances inside an intent) one of the queries that shows a difference in prediction in the training screen, and provide a screen capture of the results?
It works fine in the training screen; it only fails in the testing screen (and when requests are made to it).
Before you used the testing tool, was this utterance already labeled and part of your model?
Yes
More info on this: it happens even if the utterance isn't labeled. Sometimes I have the exact phrase as an utterance in the intent, sometimes not; it doesn't change anything. :/
:/ I'll pass this on to the LUIS dev team and see if we have any answers or need additional information. Sorry about the hassle.
Edit: Are you able to share your App ID in case the team would like to test your app? If you're able to but don't wish to share it in this public forum, I can contact you via email.
Sorry, I don't have access to the project anymore (it was an internship project). I'll try my best to answer questions though. Thanks!
@MathBunny do you by chance know whether your former advisors/bosses might be interested in receiving additional assistance? We could add someone from the LUIS team as a collaborator on the LUIS app and provide them with the utterances that show the discrepancies. That would allow us to start investigating the app.