I've been plugging away at my AR app for a while and was getting close to done, but now I'm worried about compatibility with all the "Mixed Reality" devices coming next month. I've been designing with the HoloLens in mind, where you see your room and real surfaces with content mapped on top rather than a virtual house, but the more I look into the Mixed Reality Portal, the more it seems like our Unity apps will run in the Cliff House environment rather than in what the camera sees in front of you. I don't get my glasses until the 17th, but I'd like to know how this gets handled and can't find the info anywhere.
Do we have to code something special to project the right and left cameras as the video background for the two eyes and align it to the spatially mapped surfaces in the real world, or is that automatic, to simulate how a HoloLens works? My app would not be the same in a pseudo-MR world without the augmented part of it.
I found that Vuforia supports camera projection to the left and right eye with some setup, but I'd rather not go that route if I can do it with the MixedRealityToolkit, or if there's something I'm missing. I prefer the HoloLens way of AR much better, but I know the majority of users will be on the cheaper consumer devices, so we've got to make it work. Thanks.
@Skquark Unfortunately there is confusion around the Mixed Reality terminology and how the immersive headsets compare to the HoloLens. These new $300 headsets are basically VR headsets. There is no passthrough capability. The cameras on the front are depth sensors like the HoloLens's (so you don't run into a wall), but you won't be able to see through them. Experiences on these devices are totally virtual. The Mixed Reality terminology refers to the "spectrum" of realities from physical to purely virtual and everything in between. Microsoft is set on using this term. While it is accurate, it creates the expectation that these cheaper headsets have functionality similar to the $3,000 HoloLens, and that just is not the case. They share the sensor technology, much like the Kinect, but the immersive headsets do not have AR capabilities.
Damn, that's what I was afraid of... So to make our HoloLens applications function on the Mixed Reality headsets, would we have to emulate a fake room like the editor's SpatialMapping component's Room Model, building a 3D set to match the room mesh surfaces? Or are we limited to placing our objects into Microsoft's Cliff House in the portal? Trying to figure out how to adapt to this in-between fake AR and still provide a functional experience. When I first saw the headsets I got excited thinking they would give a similar experience to the HoloLens by faking the real world with an aligned camera view where the transparency would be, and I hope we'll have a workaround in the near future.
So what parts of the Toolkit won't work with the consumer headsets? Can we still use SpatialProcessing to create surfaces from a scanned or fake mesh? Are the Anchor Manager locations based on physical points or virtual points? Are we forced to use the Boundaries to fence our workspace in VR? Will there be a way to port our MRTK development project to Tango or Apple's ARKit, since those are made for real-world augmentation with the camera? Sorry for all the questions, but I haven't found these dev issues covered clearly anywhere. Just when I thought I was in the home stretch for releasing my app, I have to worry about being compatible with every other device, since most people are not going to invest $3,000 vs. ~$300, and I'd hate to see all the hard work I've put in only work for the rich...
it seems like our Unity apps will run in the Cliff House environment rather than what the camera sees in front of you.
This is not true. The HoloLens workflow and experiences are not changing. We're just adding VR headset support to the toolkit. You shouldn't see much of a change in the look and feel of HoloLens apps (at least that's the goal).
That's reassuring to hear; glad the VR headset features will be integrated soon. What should I plan for to adapt my app (which relies on the generated surface planes) to work best on the Mixed Reality headsets? Once VR support is implemented, will we still scan the room and generate the world mesh, with access to the cameras so we can project them in the background view? Or do we skip the spatial scanning and load a decorated room or house environment with the Surface Planes set static?
Just trying to get a jump on the workflow, since millions of people are about to have a new toy in their hands in a few weeks, and we're all trying to give them something tangible to play with. Can't wait to see everyone else's projects here out in the wild when the time comes; a new paradigm is being born.
Didn't Microsoft just announce an in-library method for determining hardware capabilities, so developers can tell what type of platform they are running on? Immersive or see-through?
Unity did.
HolographicSettings.IsDisplayOpaque
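A minimal sketch of how that check might be used to branch the spatial mapping workflow discussed above. This assumes Unity 2017.2+ (where `HolographicSettings.IsDisplayOpaque` lives in `UnityEngine.XR.WSA`; on earlier versions the namespace differs) and the HoloToolkit's `SpatialMappingManager` singleton; `LoadStaticRoomModel` is a hypothetical helper, not a toolkit API:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;                  // HolographicSettings (Unity 2017.2+)
using HoloToolkit.Unity.SpatialMapping;    // SpatialMappingManager (namespace may vary by toolkit version)

public class PlatformCapabilityCheck : MonoBehaviour
{
    void Start()
    {
        // IsDisplayOpaque is true on the immersive (VR-style) headsets
        // and false on the see-through HoloLens display.
        if (HolographicSettings.IsDisplayOpaque)
        {
            // Immersive headset: no passthrough, so stop live surface
            // scanning and stand in a pre-built room mesh instead.
            SpatialMappingManager.Instance.StopObserver();
            LoadStaticRoomModel();
        }
        else
        {
            // HoloLens: scan the real room as usual.
            SpatialMappingManager.Instance.StartObserver();
        }
    }

    void LoadStaticRoomModel()
    {
        // Hypothetical: instantiate a decorated room prefab whose surfaces
        // substitute for the real-world spatial mesh.
    }
}
```

The point of branching at startup is that the rest of the app can keep targeting surface planes either way; only the source of the mesh changes.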