MixedRealityToolkit-Unity: TextToSpeechManager.cs, why MonoBehaviour instead of Singleton?

Created on 14 Jul 2016 · 4 Comments · Source: microsoft/MixedRealityToolkit-Unity

I didn't want to have to drop a TextToSpeechManager script throughout my project, so I converted it to a Singleton instead of a MonoBehaviour. This seems far more intuitive to me: being able to bring up an instance of TextToSpeechManager from any script and just call SpeakText(). What was the reasoning behind MonoBehaviour instead?

Question

All 4 comments

When @jbienz brought over the TextToSpeechManager (TTS), it was originally built as a Singleton. The design issue was that the TTS encapsulates the audio source, but there are compelling situations where you want to drive many independent audio sources, each with its own TTS instance, and a Singleton by design conflicts with that architecture. Encapsulating the audio source also simplifies the interface and keeps responsibility for configuring AudioSource objects in the asset configuration data of the Unity scene rather than in the script.
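For example, a per-object setup that this design enables might look like the sketch below. The TextToSpeechManager component and its SpeakText() method come from the toolkit, as mentioned in the question; the SpatialVoice wrapper, its member names, and the HoloToolkit.Unity namespace are assumptions for illustration.

```csharp
using UnityEngine;
using HoloToolkit.Unity; // namespace assumed; match your toolkit version

// Hypothetical wrapper: each GameObject carries its own TextToSpeechManager,
// which encapsulates the AudioSource on that same object, so speech plays
// spatialized from that object rather than through one global voice.
[RequireComponent(typeof(TextToSpeechManager))]
public class SpatialVoice : MonoBehaviour
{
    private TextToSpeechManager tts;

    private void Awake()
    {
        tts = GetComponent<TextToSpeechManager>();
    }

    public void Say(string phrase)
    {
        tts.SpeakText(phrase);
    }
}
```

Two robots, each with this component and its own AudioSource, would then speak from their own positions in the scene, which a single global Singleton voice could not do.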

On the plus side for your situation, our implementation of Singleton caches the first instance of <T> it finds in the scene, so you only have to drop an instance in the editor once. You can then reference the Singleton in any script and use it; the binding happens automatically on first access.
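A minimal sketch of that caching behavior, assuming a FindObjectOfType-based binding (this is illustrative, not the toolkit's exact source):

```csharp
using UnityEngine;

// Caching Singleton base class: the first instance found in the scene is
// cached on first access, so dropping one instance in the editor is enough.
public class Singleton<T> : MonoBehaviour where T : Singleton<T>
{
    private static T instance;

    public static T Instance
    {
        get
        {
            if (instance == null)
            {
                // Binds lazily to the instance placed in the editor.
                instance = FindObjectOfType<T>();
            }
            return instance;
        }
    }
}
```

With a manager class deriving from Singleton<T> in this style (how the toolkit wires this up may differ), any script can call something like TextToSpeechManager.Instance.SpeakText(...) without holding an explicit reference.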

Let me know if any of that is unclear!

I'm starting to think maybe we should drop the Manager from this type name.

Yeah, I agree with your explanation. Thank you!

Ok to close this issue?
