Botframework-sdk: [LUIS] Composite entities are not returned by published model, but children are?

Created on 11 Aug 2017  ·  19 Comments  ·  Source: microsoft/botframework-sdk

System Information

  • SDK Language: Node.js
  • SDK Version: 3
  • Development Environment: localhost

Issue Description

I have several LUIS instances that use composite entities. Occasionally, LUIS will not return any composite entities but it returns their children. Oddly enough, it works while trained, but even if you publish the trained model, it still doesn't return the composite entity itself.

Code Example

_None_, this is only interfacing LUIS.

Steps to Reproduce the Issue

  1. Build a LUIS instance that uses composite entities.
  2. Train LUIS, such that composite entities work on the web debugger.
  3. Publish LUIS, try the composite entity detection.

Expected Behavior

Entities should be tagged appropriately, like in training.

Actual Results

The composite entities are not tagged.

All 19 comments

Note that this sometimes works, and sometimes doesn't. For instance, now it works on the published version but not on the trained version.

Can you provide some screen captures of the behavior that you're seeing? Additionally, can you provide the JSON model for your LUIS app?

I can't add the JSON for the Luis model, but here is a screenshot: http://i.imgur.com/DB8nCDL.png

Clearly, it uses the same model for the trained one and the published one (see the times posted).

@MathBunny Your composite entity is supposed to encompass your Security and FixedIncomeSecurityType entities? Just to check via a different route, have you tried plugging in your utterances directly against your LUIS endpoints?

No, the Filter Item (composite) is supposed to encompass the Argument. However, the published model labels only the Argument and does not show that it's part of the composite. In the trained model it is labelled correctly.

The curly braces around [$Argument ] mean that the Argument entity is wrapped inside a composite entity. If you press Ctrl+E, you can cycle between the label views. The default view shows all entities except composite entities; the first Ctrl+E shows only the composite entities, and the next Ctrl+E shows the tokens (the actual words).

If you want to select the view manually, there is a dropdown menu on the right side of the training window (above the scores). Below is a screencap of what the menu looks like:

[Screenshot: labels view dropdown]

The view doesn't matter. As the screenshot shows, one model's prediction is both Argument and Filter, while the other's is just Argument. The published model doesn't show me the composite type.

I get that this isn't a matter of view, but if you look at the tooltip it clearly shows the difference.

My problem is that while the trained model and the published model are exactly the same, they return different results.

My mistake, @MathBunny. @DeniseMak, do you have any idea why this might be happening?

@MathBunny if you train your model with more utterances eventually the red underline (and tooltip) should go away as the two models (your trained and published model) become more aligned.

As to _why_ the models aren't providing the same predictions when you train and publish the app within the span of a few seconds I can't say. I'll send an inquiry to the LUIS team and see if they've received any previous reports about this issue.

Closing this issue; I'll update this thread when I get more info. If you've trained and retrained the models but are still running into this problem, feel free to reopen this issue.

@MathBunny can you type into the labeling component (the area where you're inside an intent and entering new utterances) one of the queries that shows a difference in prediction on the training screen, and provide a screen capture of the results?

It works fine in the training screen, it only fails in the testing screen (and when requests are made to it).

Before you used the testing tool, was this utterance already labeled and part of your model?

Yes

More info on this: this happens even if the utterance isn't labelled. Sometimes I have the exact phrase as an utterance in the intent, sometimes not; it doesn't change anything. :/

:/ I'll pass this onto the LUIS dev team and see if we have any answers or need additional information. Sorry about the hassle.

Edit: Are you able to share your App ID in case the team would like to test your app? If you're able to but don't wish to share it in this public forum, I can contact you via email.

Sorry, I don't have access to the project anymore (it was an internship project). I'll try my best to answer questions though. Thanks!

@MathBunny do you by chance know if your former advisors/bosses might be interested in receiving additional assistance? What could be done is to add someone from the LUIS team as a collaborator to the LUIS app and provide them with the utterances showing discrepancies. That would allow us to start investigating their app.
