The service locator system groups everything affected by input into the same service.
This results in one profile dictating the focus handler, cursor, pointer, and input sources.
There is no separation of Action (raw input) and Reaction (cursor movement, focus handling, pointer casting).
A better organization would be to have an Input profile (Action), separated from an interaction profile (Reaction).
This would allow someone to customize how the input is processed, without needing to modify all the waterfall consequences of UX interactions.
As it stands, changing how input is processed requires modifying how the Cursor, Pointer, and Focus are handled.
Try to modify how gestures or buttons are handled before they are passed on to the UX interaction systems: the root input profile carries too much responsibility.
Unity 2018.3.8f1
Beta2 - MRTK development branch - commit f664a417a11756c3d545f7a6c9b31f1608e56a1e
@provencher do you feel that this is something that you would like to implement or is this a proposal that you would like to see picked up by another community member?
Given that I'm still familiarizing myself with these systems, I don't think I'm the best person to design this refactor, let alone implement it in a way that works well for all supported platforms without breaking support for existing users.
I'd very much prefer to see this done by someone more experienced working with this project.
I'm happy to discuss proposals on how best to go about this however.
The way I went about this in a similar system was to create scriptable objects for all the different inputs.
On one side, the platforms can trigger events in these scriptable objects.
On the other side, the individual buttons or interaction receivers can have a serialized field with the inputs they'll bind to.
This grants the flexibility of easily repurposing which inputs trigger which actions, on a granular, per-object level.
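Roughly, the pattern looked like this (a minimal Unity sketch; InputEventChannel and InteractionReceiver are illustrative names from my own system, not MRTK types):

```csharp
using System;
using UnityEngine;

// Event-channel asset: platform-specific code raises it, receivers subscribe to it.
[CreateAssetMenu(menuName = "Input/Input Event Channel")]
public class InputEventChannel : ScriptableObject
{
    public event Action Raised;

    // Called by the platform layer when the corresponding raw input fires.
    public void Raise() => Raised?.Invoke();
}

// A receiver binds to whichever channel asset is assigned in the inspector,
// so which input triggers which reaction can be swapped per object.
public class InteractionReceiver : MonoBehaviour
{
    [SerializeField] private InputEventChannel boundInput;

    private void OnEnable()  { if (boundInput != null) boundInput.Raised += OnInputRaised; }
    private void OnDisable() { if (boundInput != null) boundInput.Raised -= OnInputRaised; }

    private void OnInputRaised() => Debug.Log($"{name} reacting to {boundInput.name}");
}
```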
The issue with this system, as far as I can tell from reading through the MRTK codebase, is that the events are passed down from the focus handling system, at least in the cases I read through. This has the benefit of only triggering input events for objects that are in focus, but it makes it difficult to program a catch-all event for a non-focused entity in the scene. It also makes it difficult to repurpose input reactions without fundamentally changing a behavior script.
Splitting up this behavior is something I'm not sure how best to approach in a non-destructive way.
The service locator system groups everything affected by input into the same service.
That's not entirely true. While the input system does handle most of the routing of events to different places in the application, there are separate data models or providers that actually bubble up the raw data from the SDK/API.
@SimonDarksideJ any other comments here?
btw love the idea about the action/reaction profiles. I'd love to hear more.
If it's felt that the Gaze / pointer and Cursor implementations are too tightly interwoven into the input system, then there is certainly a call to add further abstraction.
I also like the idea of being able to manage the routing of input/output with the InputSystem to allow any provider to hook into any specific part of the pipeline. That enables more advanced scenarios, like @provencher suggests, making it easy to hook any other system into the pipeline.
I'm not too sure I understand correctly, but I see three different concerns expressed by @provencher:
1. I agree it is cumbersome and I'd like to see it changed. This itself does not imply any sort of functional dependency between the systems each profile configures, though.
2. Did you try registering as a global listener in the input system? Would you like to be able to do it in a different way? How?
3. Pointers help you abstract away from the specific inputs. If pointers don't work for you, you can still build your reactions based on input actions. Did you try either? How do you see yourself consuming inputs instead?
You can find more details about input events, input actions and pointers in the work-in-progress documentation here: https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/Overview.html. Feedback is welcome!
@luis-valverde-ms
The biggest issue I'm facing right now is number 2.
I've looked at the global listener, simply extending the HandleEvent system, but things are too generic to easily extend at that level.
Instead, I implemented a profile for global input event processing. It is null by default, and allows users of the MRTK to implement a child of the MixedRealityGlobalInputEventSystemProfile and override the event responses that suit them, in order to make use of that functionality without passing through the focus system.
This is described in PR https://github.com/Microsoft/MixedRealityToolkit-Unity/pull/3887
This allows the inputs to be used without relying on the focus interaction system of the MRTK. As it stands, that PR resolves the bulk of the concerns I raised in this issue.
I'd like to understand better your use case and the difficulties you're facing using global listeners. More information on the specific problem you're trying to solve would help. I've got a few questions based on what you've told us so far:
without passing through the focus system
Currently global events are raised before focus handling (see MixedRealityInputSystem.HandleEvent), what did you mean there?
things are too generic to easily extend
There are handler interfaces for listening to each event type (IMixedRealityInputHandler, IMixedRealityFocusHandler, ...). Are they too broad for your use case? Do you need finer granularity?
@luis-valverde-ms It's quite possible that I'm misunderstanding how to use that system properly.
Part of the issue I have with global listeners, as I understand them, is that they are fed (BaseInputEventData baseInputEventData, ExecuteEvents.EventFunction<T> eventHandler), which requires casting to determine the correct types for the event data and event handler on the receiving end of the handler. I can't seem to find examples of classes making use of this, as everything is so indirect via generic interfaces.
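For example, on the receiving side you end up with something like this just to get at the concrete payload (a hypothetical fragment, purely to illustrate the down-casting):

```csharp
// Hypothetical handler body: the generic event data has to be down-cast
// before the concrete payload can be used.
private void HandleGenericEvent(BaseInputEventData baseInputEventData)
{
    if (baseInputEventData is InputEventData<float> pressedEvent)
    {
        Debug.Log($"Pressed amount: {pressedEvent.InputData}");
    }
}
```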
Do you have a link to an example hooking into the click action generated from all platforms (air tap, button press, mouse click), for example?
The ideal goal is to be able to easily identify an input as it comes in from the source and have its resulting action identified: (air tap | hand gesture tap | controller press) -> (Hold / click gesture).
It also seems as though the input system implements actions at a platform level, and forwards them down through the input system, without offering much possibility of listening for actions rather than input event sources.
Fundamentally, an application only cares about event actions, and listening for those to fire makes the most sense in a world where input events can be repurposed at the end user level.
Have a look at ControllerPoseSynchronizer::OnInputChanged(InputEventData
Alright this seems to do what I'm after at first glance. I'm gonna iterate on my system and try and work with it.
I suppose the correct way to go then is to listen to events using the global listener, then check in the callback if the passed in action maps to the one I'm after from the Standard Action mapping profile.
From there I can do what I want. Correct?
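Something like this is what I'm picturing (a rough sketch; the registration call, the exact IMixedRealityInputHandler members, and the namespaces may differ on this branch):

```csharp
using UnityEngine;
// MRTK namespaces omitted here (Core.Interfaces.InputSystem.Handlers,
// Core.EventDatum.Input, Core.Definitions.InputSystem, ...); adjust to your branch.

// Global listener that reacts to one configured action, regardless of which
// controller, hand, or platform raised it.
public class GlobalSelectListener : MonoBehaviour, IMixedRealityInputHandler
{
    [SerializeField]
    private MixedRealityInputAction targetAction = MixedRealityInputAction.None; // e.g. assign "Select" from the actions profile

    private void OnEnable()
    {
        // Assumption: global listeners are still registered this way on this branch.
        MixedRealityToolkit.InputSystem?.Register(gameObject);
    }

    private void OnDisable()
    {
        MixedRealityToolkit.InputSystem?.Unregister(gameObject);
    }

    public void OnInputDown(InputEventData eventData)
    {
        // Filter on the action, not on the input source that raised it.
        if (eventData.MixedRealityInputAction.Equals(targetAction))
        {
            Debug.Log("Target action fired globally");
        }
    }

    public void OnInputUp(InputEventData eventData) { }

    // Stub out any other members IMixedRealityInputHandler requires on this branch as no-ops.
}
```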
I'm still working on this system, though my current problem is having a common definition for input actions.
It seems that all the input action bindings are done with strings that are hard-coded in the editor.
This is something I'm not a fan of, as it is very error prone and there is no way to avoid comparing strings in the editor.
One solution I propose here is to define an action and its axis constraint as a scriptable object. Then, when you want to work with the Select action, for instance, it is a single scriptable object instance that can be passed in a serialized field. This makes it easier to avoid defining actions in multiple places, and makes it simpler to seek out a particular one without retaining a hardcoded string.
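To sketch what I mean (illustrative only; these are not existing MRTK types):

```csharp
using UnityEngine;

// Placeholder axis-constraint enum; MRTK already has a similar concept.
public enum ActionAxisConstraint { None, Digital, SingleAxis, DualAxis, SixDof }

// One asset per input action, so "Select" is a single instance referenced
// everywhere instead of a string compared in multiple places.
[CreateAssetMenu(menuName = "Input/Input Action Definition")]
public class InputActionDefinition : ScriptableObject
{
    public string Description;               // e.g. "Select"
    public ActionAxisConstraint Constraint;  // the kind of data the action carries
}

// Consumers reference the asset directly; identity comparison replaces string matching.
public class SelectResponder : MonoBehaviour
{
    [SerializeField] private InputActionDefinition selectAction;

    public bool Matches(InputActionDefinition raisedAction) => raisedAction == selectAction;
}
```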
I think you're on point there: input actions would work better as assets (scriptable objects). The current solution is a bit fragile. Will open an issue for that.
Fantastic. I'm happy to review whatever solution is implemented when the time comes
Closing this issue and continuing discussions in #3887
@SimonDarksideJ and I did not set them up as strings. The strings are just descriptors 😄
Although I do like the idea of having them as scriptable objects in and of themselves. Great idea @luis-valverde-ms
Thanks @StephenHodgson . Credit to @provencher as well. Can I ask you guys to have a look at #4083 ? That's probably the issue we'll use to move forward the actions as scriptable objects changes.