MixedRealityToolkit-Unity: vNext: Defining MRTK customer personas

Created on 2 Nov 2018  ·  7 comments  ·  Source: microsoft/MixedRealityToolkit-Unity

Having a small (3-5) set of personas that represent typical developer customers can be a helpful way to describe the audience for an issue and/or feature.

Sometimes it is fun to assign names (ex: Hawking) to them; other times a 1-3 word description (ex: hackathon team member) works better.

I will seed the conversation with descriptions of the ones that I have in mind:

  • Hackathon Team Member - Not necessarily a developer. Interested in seeing results quickly. Uses the Toolkit components as-is.
  • Experience Creator - Fully customized presentation layer. Uses the default system implementations.
  • Advanced / Enterprise Developer - Wishes to customize and control every aspect of their project, builds custom system implementations.

Please add comments with your suggestions / proposals and we will ratify the set in a future shiproom meeting.

Documentation

All 7 comments

I would add a beginner 'Creator' persona: someone who has just started exploring mixed reality with Unity. Building blocks for common interaction patterns and UX controls can accelerate their exploration and learning. Along with the Mixed Reality Academy, beginners will look for the common interactions and concepts.

Some of the basic elements from my own experience as a beginner:

Setting up

  • How do I set up a new scene/camera/environment for HoloLens and immersive headsets?
  • How can I place an object in the scene? (proper size, y position, distance... see the sketch after this list)
  • How can I build and run in the Unity editor?
  • How can I build, deploy, and run on the device?
  • How can I preview on the device (Holographic Remoting)?
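
As a rough illustration of the object-placement item above, here is a sketch in plain Unity C# (no MRTK types); the 2 m distance and 0.25 m scale are illustrative values, not MRTK defaults.

```csharp
using UnityEngine;

// Places this GameObject at a comfortable viewing distance in front of
// the camera when the scene starts. Attach to any hologram.
public class PlaceInFrontOfCamera : MonoBehaviour
{
    [SerializeField] private float distance = 2.0f;   // roughly 1.25-5 m is comfortable on HoloLens
    [SerializeField] private float size = 0.25f;      // uniform scale in meters

    private void Start()
    {
        Transform cam = Camera.main.transform;

        // Position the object along the camera's forward ray.
        transform.position = cam.position + cam.forward * distance;

        // Orient it so its forward axis points away from the camera
        // (the usual billboard setup so UI/text reads correctly).
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
        transform.localScale = Vector3.one * size;
    }
}
```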

Input, basic event handling

  • How can I make an object respond to a gaze cursor? (see the sketch after this list)
  • How can I make an object respond to an air tap gesture?
  • How can I make an object respond to the motion controllers? (ray/buttons/thumbstick...)
  • How can I get default teleportation/locomotion?
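
For the gaze-cursor and air-tap items, a minimal sketch written against plain Unity APIs (Physics.Raycast plus UnityEngine.XR.WSA.Input.GestureRecognizer) rather than the toolkit's own focus/pointer handlers, which MRTK layers on top of this kind of plumbing; the class name is illustrative.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;   // GestureRecognizer / TappedEventArgs (Unity 2018.x, Windows MR)

// Tracks which object the user is gazing at and reacts to the air-tap gesture.
public class GazeAndTapExample : MonoBehaviour
{
    private GestureRecognizer recognizer;
    private GameObject focusedObject;

    private void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void Update()
    {
        // Gaze = a ray from the head/camera straight forward.
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        focusedObject = Physics.Raycast(cam.position, cam.forward, out hit)
            ? hit.collider.gameObject
            : null;
    }

    private void OnTapped(TappedEventArgs args)
    {
        if (focusedObject == null) { return; }

        // Respond to the air tap on the focused object, e.g. tint it.
        Renderer objectRenderer = focusedObject.GetComponent<Renderer>();
        if (objectRenderer != null) { objectRenderer.material.color = Color.cyan; }
    }

    private void OnDestroy()
    {
        recognizer.Tapped -= OnTapped;
        recognizer.Dispose();
    }
}
```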

Interactions

  • How can I grab/move/release an object? (DragAndDropHandler)
  • How can I resize an object? (Bounding Box)
  • How can I rotate an object? (Bounding Box)
  • How can I create a button? (Holographic Button)
  • How can I add new objects dynamically into the space? (instantiating an object on an input event; see the sketch after this list)
  • How can I show/hide objects dynamically?
  • How can I delete objects dynamically?
  • How can I add audio feedback to an object?
  • How can I play background audio?
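
A plain-Unity sketch covering the dynamic spawn/show/hide/delete and audio-feedback items (component and method names are illustrative, not MRTK APIs; the grab/resize/rotate items map to the DragAndDropHandler and Bounding Box components mentioned above).

```csharp
using UnityEngine;

// Illustrative helpers for spawning, showing/hiding, and deleting objects
// plus one-shot audio feedback. Wire these methods to an input event
// (air tap, button press, etc.).
public class ObjectSpawner : MonoBehaviour
{
    [SerializeField] private GameObject prefab;        // object to spawn
    [SerializeField] private AudioClip feedbackClip;   // short confirmation sound
    [SerializeField] private AudioSource audioSource;  // spatialized source on this object

    private GameObject lastSpawned;

    // Instantiate a new object 1.5 m in front of the camera.
    public void Spawn()
    {
        Transform cam = Camera.main.transform;
        lastSpawned = Instantiate(prefab, cam.position + cam.forward * 1.5f, Quaternion.identity);
        audioSource.PlayOneShot(feedbackClip);          // audio feedback for the action
    }

    // Show or hide the last spawned object.
    public void SetVisible(bool visible)
    {
        if (lastSpawned != null) { lastSpawned.SetActive(visible); }
    }

    // Remove the last spawned object from the scene.
    public void Delete()
    {
        if (lastSpawned != null) { Destroy(lastSpawned); }
    }
}
```

Background audio is just another AudioSource with Loop enabled and Spatial Blend set to 2D.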

Spatial Mapping

  • How can I start/stop spatial mapping?
  • How can I turn on/off visual meshes?
  • How can I place an object on the mesh?
  • How can I turn on physics? How can I make an object fall? (see the sketch after this list)
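
For the physics item, a short sketch assuming the spatial mapping mesh already has colliders (for example via the toolkit's spatial awareness components or Unity's SpatialMappingCollider); once colliders exist, making an object fall is standard Unity physics.

```csharp
using UnityEngine;

// Adds gravity to this object so it drops and rests on any collider,
// including a spatial mapping mesh that has colliders attached.
public class MakeObjectFall : MonoBehaviour
{
    public void Drop()
    {
        // Ensure the object has a Rigidbody, then let gravity act on it.
        Rigidbody body = GetComponent<Rigidbody>();
        if (body == null)
        {
            body = gameObject.AddComponent<Rigidbody>();
        }
        body.useGravity = true;
        body.isKinematic = false;
    }
}
```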

Scene management

  • How can I switch between scenes? (see the sketch after this list)
  • What is the recommended way to switch between scenes?
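
Scene switching itself is plain Unity; what the recommended MRTK pattern should be (for example, which toolkit objects persist across loads) is part of the open question above. A minimal sketch:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Loads another scene by name. An async load avoids a visible hitch,
// which matters more in a head-mounted display than on a monitor.
public class SceneSwitcher : MonoBehaviour
{
    public void Switch(string sceneName)
    {
        SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Single);
    }
}
```

LoadSceneMode.Additive is the alternative when shared content should stay loaded across the switch.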

Is this a task or just a general discussion?

Is this a task or just a general discussion?

It is a discussion. I believe it best to have it on GitHub for transparency and to cast a wider feedback net than just shiproom.

[Fixed spelling errors]

These are great as a base for topics going forward. The subjects above (Setting Up, Input and Event Handling, and Interactions) are the basis for any application; once we have documentation for those, we can build off of it. They are a great place to focus on first.

One persona distinction I’d make is between developers with enterprise dev experience and developers with game dev experience.

In @davidkline-ms's initial persona breakdown above, there is one bucket called “Advanced / Enterprise Developer”, which describes a broad set of developers who want to customize every last detail. However, my observation has been that actual LOB enterprise developers generally have traditional desktop/mobile dev experience, and as they dive into VR/AR app development, they find the typical idioms of game engine development an intimidating shift in mindset. Developers with years of game dev experience will definitely come in with their own preferred way to think about 3D input, focus and so on and will want to customize lots of details. In contrast, LOB devs brand new to 3D development will be focused on shipping their app quickly and at high quality, and are likely to want strong guidance, an opinionated control library, and to stay on the rails to align with how other experiences on the device work today.

An analogy is to the 2D apps and games we see on mobile today. Generally 2D apps will make direct use of controls and UI idioms from the native platform they're running on, both because this is often the path of least resistance for getting an app authored quickly and with corner cases handled, and because users benefit from an app whose UI patterns feel familiar to the other apps they use. In contrast, 2D games are often fully skinned, with custom buttons, lists and other UI elements all designed from scratch. Game developers expect to style and customize their UI as part of any given project to fit the overall feel of the experience, and so that styling and customization ends up leaning on skills developed with game dev experience.

To ensure that MRTK can enable the broadest set of devs to succeed in building VR/AR apps, we should be careful to balance the needs of those with years of 3D development experience with those who’ve only touched 2D UI toolkits throughout their career. As we all move MRTK forward, we should keep in mind that large pool of underserved LOB devs out there waiting for VR/AR development to become as accessible as desktop/mobile development.

That's a great distinction @thetuvix!

Here's a quick summary of the developer personas based on how they plan to use the MRTK:

https://github.com/Microsoft/MixedRealityToolkit-Unity/wiki/Overview
