MixedRealityToolkit-Unity: Mark KeywordManager as Obsolete (duplicate functionality)

Created on 21 Feb 2017 · 10 Comments · Source: microsoft/MixedRealityToolkit-Unity

The new SpeechInputSource should be the de facto keyword handler.
It's better integrated into the InputManager, and facilitates all the same functionality.
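To make the comparison concrete, here's a minimal sketch of how keyword handling looks under the InputManager pattern. The interface and member names (ISpeechHandler, SpeechEventData, RecognizedText) follow the HoloToolkit InputModule of this era and may differ in your toolkit version; treat this as illustrative, not canonical.

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ColorChanger : MonoBehaviour, ISpeechHandler
{
    // Invoked by the InputManager when a SpeechInputSource recognizes a
    // keyword while this object has input focus.
    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        if (eventData.RecognizedText == "change color")
        {
            GetComponent<Renderer>().material.color = Color.red;
        }
    }
}
```

The keywords themselves are still registered on the SpeechInputSource component in the Inspector; the handler above only reacts to the event routed through the InputManager.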

All 10 comments

Hello Hodgson, I appreciate your team's continuous efforts in improving this toolkit. I wish there were a way to maintain backward compatibility with this toolkit... As I write more and more code based on one version, I find it very difficult to keep up with all the changes.

For example, I heard from a customer today that the removal of GestureManager caused a simple project to stop functioning... On the other hand, we just finished building an extensive project using KeywordManager, and now I see this request to mark it obsolete....

I also agree with maintaining backward compatibility in the toolkit. Is there a reason why the original components can't live side by side?

In addition, I disagree that KeywordManager is obsolete. It allows methods to be assigned to keywords in the Inspector, which I think is invaluable for prototyping and ease of use (and removing it would be a loss of functionality). This was the reason KeywordManager was left in, and this capability hasn't been added to SpeechInputSource yet.

I think backwards compatibility got thrown out the window with the major input changes we did last fall, and I never suggested outright removal (I would have attached the _breaking change_ label to this issue), just adding the obsolete attribute. I think this was one of the classes that wasn't completely converted over to the new input pattern until a little after.

@keveleigh, the assignment of methods to keywords is supported by SpeechInputSource.
They essentially do the same things, but KeywordManager does not register itself with the InputManager (which is the main reason I'd like to mark it as obsolete).

I'm suggesting this because I'd like to make some additions to the speech and dictation capabilities, but we need a way to switch between them. We can't use both at the same time.

If you want to use both phrase recognition and dictation in your app, you'll need to fully shut one down before you can start the other.

In order to do that, we need to know if there are any instances of either recognizer in the scene, which the new input system helps achieve; otherwise we'd have to search the whole scene with GameObject.FindObjectsOfType<KeywordManager>, which is expensive.
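For context, Unity's own speech APIs make the required shutdown explicit. Here's a minimal sketch of switching from keyword recognition to dictation and back, using the UnityEngine.Windows.Speech types (this is plain Unity API, independent of either toolkit component):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DictationSwitcher : MonoBehaviour
{
    private DictationRecognizer dictation;

    public void StartDictation()
    {
        // Phrase recognition and dictation can't run at the same time:
        // shut down the shared phrase recognition system first, which
        // stops every KeywordRecognizer/GrammarRecognizer at once.
        PhraseRecognitionSystem.Shutdown();

        dictation = new DictationRecognizer();
        dictation.DictationResult += (text, confidence) => Debug.Log(text);
        dictation.Start();
    }

    public void StopDictation()
    {
        dictation.Stop();
        dictation.Dispose();

        // Bring the keyword recognizers back up.
        PhraseRecognitionSystem.Restart();
    }
}
```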

While I also agree that backwards compatibility is good, we're still in a young development phase for the HoloToolkit. People are still coming up with better, cleverer ways of doing things that benefit the toolkit overall. Sometimes those are breaking changes. I think in the past we haven't handled that properly, but hopefully going forward we'll do better. Sometimes backwards compatibility can hamstring innovation and improvements that benefit everyone. If a project requires a certain version, then leave it there; otherwise, put in the work to make your product better. I've had to face that particular dilemma many times in the past just with the Unity engine itself! (Although now they've got a nice little feature that updates most of the stuff for you.)

@HodgsonSDAS Maybe I missed that, but in the most recent version, I could only register keywords and keycodes in the Inspector. Any method calling and keyword handling had to happen in a speech input responder, as opposed to dragging a GameObject into the Inspector and selecting a method on it with the KeywordManager. This also has the unfortunate side effect of needing to keep track of your keywords in two places (the Inspector and the script that actually handles the keyword).

On your second point, what about querying PhraseRecognitionSystem, like in the section you linked? It controls all the KeywordRecognizers and GrammarRecognizers, so you don't have to worry about searching the entire scene for them. (We added it specifically to prevent the scenario you described above 😄)
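For reference, that query is a one-liner against Unity's global speech state, with no scene search involved (again, plain Unity API, shown here as a sketch):

```csharp
using UnityEngine.Windows.Speech;

// PhraseRecognitionSystem tracks all KeywordRecognizers and
// GrammarRecognizers globally, so checking for a running recognizer
// doesn't require searching the scene.
if (PhraseRecognitionSystem.Status == SpeechSystemStatus.Running)
{
    PhraseRecognitionSystem.Shutdown();
}
```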

It just seems like the input manager is a lot of extra "stuff" if all I want to do is quickly add some keywords to my scene. I don't think we should force people into using the input manager in order to use the HoloToolkit. Something in the README like:

Getting started with Voice: If you're already using the HoloToolkit's InputManager, great! Simply use SpeechInputSource. Otherwise, feel free to take a look at KeywordManager.

Yeah, @aalmada added that over Christmas and new years. https://github.com/Microsoft/HoloToolkit-Unity/pull/415

It just seems like the input manager is a lot of extra "stuff" if all I want to do is quickly add some keywords to my scene.

What extra stuff? It handles all the inputs.
Well, do you want to do it quickly or correctly?
I understand the tradeoff between fast prototyping to show a proof of concept, but when it comes to shipping a production level product, it needs to be done correctly.

On your second point, what about querying PhraseRecognitionSystem, like in the section you linked? It controls all the KeywordRecognizers and GrammarRecognizers, so you don't have to worry about searching the entire scene for them.

Thanks for the info, I'll have to go back and read through that a bit more. That information does change the way I need to approach the dictation capabilities I was prototyping. (It makes it much simpler!)

Hey Stephen, could you please point me to the documentation for the change in input pattern? Hopefully it shows how I can update all my code to the new input pattern without having to start from scratch.

@markingram7 Unfortunately, I think we still need to write a proper document explaining all the required changes, which was brought up in issue https://github.com/Microsoft/HoloToolkit-Unity/issues/467.

Here's a blog post someone put together that addresses some of the changes.

@keveleigh In the case where the active set of words depends on the focus object, you'll find that the event manager version is much simpler. Please check the SpeechInputSource.unity test scene. It may be confusing to developers but I believe both solutions are valid. I tried to explain that in the documentation.

I agree with @HodgsonSDAS that InputManager is the way to go. The legacy code is a bunch of singletons that, in Update(), query data from other singletons that may or may not have been updated already. The InputManager allows data to be handled just after it's generated or changed. I would have liked to see it implemented using Reactive Extensions (UniRx), but I understand the reasons not to.

I changed jobs recently and I don't work with the HoloLens anymore. Unfortunately, it will be much harder to contribute code.

@HodgsonSDAS Thank you for the blog post! Looking forward to better documentation.
