Three.js: ArrayCamera

Created on 3 Mar 2017 · 36 comments · Source: mrdoob/three.js

For WebVR we currently use VREffect for stereo rendering. A camera is passed as a parameter, gets duplicated, and the scene is then rendered twice, once per eye. There's redundant work that we could avoid. In the past we have superficially discussed the idea of introducing an ArrayCamera that receives a list of cameras so the render work can be organized more efficiently than we do now with VREffect. Before getting into the implementation weeds I would like to better understand the problem and see the solutions people have in mind. @mrdoob Can you quickly sketch out the API you had in mind and what optimizations we would do inside, to kick off the conversation?

Enhancement

All 36 comments

So the idea of ArrayCamera would basically be extending Camera and adding an array of cameras. StereoCamera could then extend ArrayCamera and populate it with the info WebVR gives us. We'll probably have to add something like viewport to PerspectiveCamera so we can control where each one renders (this may make renderer.viewport redundant). Let's keep in mind that there are more uses for this than just stereo; https://www.leia3d.com/ for instance renders a 4x4 grid of views per frame.

WebGLRenderer.render() will then check isArrayCamera. It will do all the projecting using the main camera and then render the scene using the cameras in the array. @toji suggested that the best way would be to alternate the viewport for each object being rendered (apparently changing the viewport is cheap).
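A minimal sketch of how that could fit together, assuming a per-camera viewport stored as a THREE.Vector4 and a stand-in drawObject for the renderer's internal draw routine (none of this is the final API):

// Sketch only: a Camera subclass carrying a list of sub-cameras,
// each expected to have its own viewport (x, y, width, height).
function ArrayCamera( cameras ) {

    THREE.PerspectiveCamera.call( this );

    this.isArrayCamera = true;
    this.cameras = cameras || [];

}

ArrayCamera.prototype = Object.create( THREE.PerspectiveCamera.prototype );
ArrayCamera.prototype.constructor = ArrayCamera;

// Inside the renderer, each visible object could then be drawn once per
// sub-camera, switching only the viewport between draws.
function renderWithArrayCamera( renderList, arrayCamera, drawObject, gl ) {

    for ( var i = 0; i < renderList.length; i ++ ) {

        for ( var j = 0; j < arrayCamera.cameras.length; j ++ ) {

            var subCamera = arrayCamera.cameras[ j ];
            var v = subCamera.viewport; // assumed Vector4: x, y, width (z), height (w)

            gl.viewport( v.x, v.y, v.z, v.w );
            drawObject( renderList[ i ], subCamera );

        }

    }

}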

Changing the viewport is frequently cheap, though it may not always be the cheapest option. That's an area that can be tweaked pretty easily though. ArrayCamera would also allow for some interesting instancing tricks in some cases, as well as enabling OVR_multiview-style extensions if/when WebGL gets them.

One thing I'd like to propose in conjunction with this is, instead of (or in addition to) a StereoCamera, a more explicit VRCamera that automatically sets the viewports and such from WebVR and explicitly includes head movement (so there's no need to attach it to VRControls). This is because the current split between VREffect and VRControls necessitates some ugly matrix math to try to negate the pose transform from the view matrices that WebVR provides. (VRControls is still useful, it just maybe shouldn't be the recommended way of handling camera motion.)

I agree with @toji, let's hope that WEBGL_multiview will be implemented soon, as it would improve the current rendering a lot.
With the current proposal we could speed up the current implementation, but we'll still have the same number of draw calls; we'll just save context/state changes, since we'll be rendering the same object twice into different viewports.
Still, it's nice as an MVP, as we could implement it directly in the current WebGL renderer without needing WebGL 2.0.
Once we get the WebGL 2 renderer we could go for a more advanced approach, like using instancing to drastically reduce the number of draw calls.

I'll open another issue with the proposals for a VR renderer with webgl2 to keep the discussion separated from here.

What @toji said 👍

Ok. For a project I'm working on this has become a bottleneck. Planning on giving it a go tomorrow ✌️

Ok. I've got something working. API is definitely not final.

There is currently a huge hack. WebVR gives us 2 projection matrices. How do we combine them so we can frustum cull the scene once?

Right now I'm just using the left eye projection matrix for the main camera. This results in objects disappearing in the right eye.

https://github.com/mrdoob/three.js/blob/1e85cff6c217e34ae47269cb5750d7e04386e5c9/examples/js/vr/WebVRCamera.js#L122-L125
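Conceptually, the workaround amounts to something like this (a paraphrase of the hack described above, not the linked code verbatim; frameData is a WebVR VRFrameData):

// Cull with the left eye's projection only; objects visible solely to the
// right eye can therefore be culled away, which is what causes the popping.
camera.projectionMatrix.fromArray( frameData.leftProjectionMatrix );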

Currently this is the only example that uses the new API: http://rawgit.com/mrdoob/three.js/dev/examples/webvr_daydream.html

Nice start! Sorry if I ask stupid questions; I'm trying to understand the design, not criticizing the approach. The current gain of the new design comes just from a single frustum culling pass vs one per camera in the VREffect approach, right?

We also reduce program state changes by half.
API-wise, I have also unified VRControls and VREffect into just WebVRCamera (likely to be renamed to VRCamera).

I need people to test it on Oculus and Vive though. I can only test on Daydream myself.

Firefox + Vive
[screenshot: 2017-03-10 23_26_40-apcent]

(if you need to debug further, it does the same with the WebVR extension)

@spite Fixed!

@mrdoob nice start! :) I can't wait to arrive home to take a look at the code and try it. In the meantime did you take a look at the Cass Everitt diagram for common frustum culling for stereo cameras? https://www.facebook.com/photo.php?fbid=10154006919426632&set=a.46932936631.70217.703211631&type=1&theater

Very nice!

I know the positioning of the cameras here is based on what was being done in VREffect but I think you have an opportunity to handle it much more nicely now. You should be able to use the frameData.leftViewMatrix and frameData.rightViewMatrix in a very similar fashion to the projection matrices to set the camera position/orientation/etc. (They're already inverted, so you might need to re-invert them. I forget if three does that internally or not.)
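A rough sketch of what that could look like, assuming cameraL/cameraR are the per-eye cameras and frameData is a VRFrameData already populated via vrDisplay.getFrameData():

// The left/right view matrices are world-to-camera (i.e. already inverted),
// so they map directly onto matrixWorldInverse; matrixWorld is the re-inverted copy.
cameraL.matrixWorldInverse.fromArray( frameData.leftViewMatrix );
cameraL.matrixWorld.getInverse( cameraL.matrixWorldInverse );

cameraR.matrixWorldInverse.fromArray( frameData.rightViewMatrix );
cameraR.matrixWorld.getInverse( cameraR.matrixWorldInverse );

// The projection matrices can be copied over the same way.
cameraL.projectionMatrix.fromArray( frameData.leftProjectionMatrix );
cameraR.projectionMatrix.fromArray( frameData.rightProjectionMatrix );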

Oh! Wasn't aware. I'll implement it tonight 👌

With the latest changes we lost the ability to scale the IPD as introduced in https://github.com/mrdoob/three.js/pull/10667
It would be great if we could have an IPD scale factor available in the WebVR API itself, so we don't need to decompose the matrices, do the work, and build new ones again. I've opened an issue to discuss it: https://github.com/w3c/webvr/issues/204

Btw, I've also opened another issue: as we're moving to use just the proj/view matrices exposed by the API, it makes sense for it to also provide the combined frustum, since that's a feature most engines will require: https://github.com/w3c/webvr/issues/203

@toji Done!

Hey guys, this is wonderful. I've been looking forward to this for a while and am very happy to see it coming together, so thanks.

I should point out one case that you might want to consider, which is drawing to the screen. It would be great to maintain the performance benefits of this approach to drawing the two eyes and the screen all in one shot. I realize this is not possible right now, since drawing to the screen can't happen until after you've submitted the two eyes to the VRDisplay and then cleared the canvas. But @toji and I once speculated about the possibility of rendering the stereo view to a buffer (with WebGL2 so you can keep anti-aliasing) and submitting from there. So it's worth keeping in mind for flexibility for the future.

Also, I can imagine this could be helpful for a picture-in-picture preview on the screen, which I do all the time.

Been thinking about this a lot lately, especially on the WebVR spec side. Turns out that generalized frusta unions aren't trivial. Go figure! Anyway, we'll keep looking at it from that end, but I wanted to propose a slightly-less-hacky approach to culling as an interim step.

The basic idea is to have the camera supply the renderer with a generic "culling volume" rather than the renderer computing the frustum from the camera each frame; intersection tests would be done against it. For normal cameras it would return a Frustum object, so that case should have no performance difference. For an ArrayCamera it could return an object that tests against a Frustum for each camera in the array and returns true if the object is in any of them.

There's some (very much untested) code here for reference: https://github.com/toji/three.js/commit/a8042b518197b25b7c3ee66c105cbbdb4bbe14f6
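For illustration, a minimal sketch of the shape of that idea (not @toji's actual code; the class name and how it gets wired into the renderer are assumptions):

// A culling volume for an ArrayCamera: an object counts as visible if it
// intersects the frustum of any sub-camera.
function ArrayCameraCullingVolume( cameras ) {

    this.cameras = cameras;
    this.frustums = cameras.map( function () { return new THREE.Frustum(); } );

}

ArrayCameraCullingVolume.prototype.update = function () {

    var matrix = new THREE.Matrix4();

    for ( var i = 0; i < this.cameras.length; i ++ ) {

        var camera = this.cameras[ i ];
        matrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
        this.frustums[ i ].setFromMatrix( matrix );

    }

};

ArrayCameraCullingVolume.prototype.intersectsObject = function ( object ) {

    for ( var i = 0; i < this.frustums.length; i ++ ) {

        if ( this.frustums[ i ].intersectsObject( object ) ) return true;

    }

    return false;

};

A plain camera would keep returning a single Frustum, so the non-VR path shouldn't pay anything for the extra indirection.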

Obviously this isn't the ideal method for WebVR camera, but it would be more correct in the meantime and could be swapped out for a better method once we work out the math. Also it may actually be the "right" answer for a more generic array camera that's not always guaranteed to have largely similar frusta. (Like if you had two cameras facing 180deg apart.)

Thoughts? I can clean up my code a bit more and submit a PR if you'd like.

@toji I like the idea; as you said, it's not optimal for WebVR, but eventually it would help in other, more generic use cases where things aren't as "easy" (hehe) as a VR camera.
It could be used as the default fallback in case the specific camera doesn't implement an optimised frustum, and we could at least have something working until we get a correct implementation of the combined frustum in three.js or in the API itself.

@toji I like this idea as well. Definitely better to get something imperfect going than to leave this issue to rot.

Just for clarity, you say there's a culling volume object, but it sounds to me like there should be a culling callback function so you could test against arbitrary shapes. Maybe supply a default convenience function for testing a single frustum.

Isn't it a bit easier to do a frusta union if the two are offset along the X axis the way a pair of eyes would be?

Now where does this leave us with depth sorting? Do we need a custom callback function for that too?

Just for clarity, you say there's a culling volume object, but it sounds to me like there should be a culling callback function so you could test against arbitrary shapes. Maybe supply a default convenience function for testing a single frustum.

That's kind of what this does? It can't be a single callback because the renderer at least tests sprites and objects. Also, the basic camera object just returns a normal Frustum if you don't override it.

Isn't it a bit easier to do a frusta union if the two are offset along the X axis the way a pair of eyes would be?

Yes, but that's not something that WebVR can guarantee. (You'd think it would be, but it's not!)

Now where does this leave us with depth sorting? Do we need a custom callback function for that too?

Ah, didn't consider that. Yes, we'd probably need to allow depth sorting to be overridden as well.

@toji it looks a good short term compromise. PR! PR! PR! PR!

Here's my current thinking...

renderer = new THREE.WebGLRenderer();
renderer.setVRDisplay( vrDisplay );
renderer.onFrame( render );
renderer.animate();

function render() {
    renderer.render( scene, camera );
}

So WebGLRenderer would have a WebVRCamera internally that it will use if isPresenting is true. The renderer will then take care of everything.

I'm thinking that if isPresenting is true, the WebVRCamera could be added as a child of the passed camera in renderer.render(), so the user could do raycasting with it for gaze-based navigation.

The renderer.onFrame() stuff is basically to hide all the ( vrDisplay.isPresenting ? vrDisplay : window ).requestAnimationFrame() (and WebVR 2.0's commit()) stuff.

How does this sound?

Actually, simpler...

renderer = new THREE.WebGLRenderer();
renderer.setVRDisplay( vrDisplay );
renderer.animate( render );

function render() {
    renderer.render( scene, camera );
}

So renderer.animate would repeatedly call render using whatever the appropriate scheduling mechanism is? Doesn't sound bad to me! (I assume there would be a way to pause the animation loop too?)
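Something along those lines, presumably; a sketch only, with made-up internal names, assuming vrDisplay was set via renderer.setVRDisplay():

// Pick the requestAnimationFrame source based on whether we're presenting,
// and allow pausing by passing null to animate().
var animationLoop = null;

this.animate = function ( callback ) {

    animationLoop = callback;
    if ( callback !== null ) requestLoop();

};

function requestLoop() {

    // vrDisplay.requestAnimationFrame() runs at the headset's refresh rate
    // while presenting; window.requestAnimationFrame() otherwise.
    var scheduler = ( vrDisplay && vrDisplay.isPresenting ) ? vrDisplay : window;
    scheduler.requestAnimationFrame( onFrame );

}

function onFrame() {

    if ( animationLoop === null ) return; // paused

    animationLoop();

    // Simplified: frame submission would really be handled around render().
    if ( vrDisplay && vrDisplay.isPresenting ) vrDisplay.submitFrame();

    requestLoop();

}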

So would a pull request of my above idea still be valuable, or are you planning to handle that differently under your proposed changes?

The proposal sounds good to me. I'm just not sure about the WebVRCamera being a child of the current camera, and you said that you also want to hide it from the user, right?
Just some quick thoughts: the user would have to take care, at the app level, of transformations applied to the camera that shouldn't be applied while presenting, say rotating the camera with the mouse vs using the vrPose of the HMD. If you want to add a HUD, you're going to add it to the main camera, so it won't be visible on the VRCamera? Also, you could have objects aligned to the main camera, but if you move your head they won't get updated, since it's the child camera that receives the pose.

Yeah, for head-locked elements (such as HUDs) we would need the pose of the headset to be applied to the passed camera instead of the invisible VR one.

@dmarcos let's discuss this tomorrow 👌

@toji So you're suggesting to check in camera2 if the frustum check fails in camera1, correct?

As a temporary solution, yes. More generally I'd suggest allowing a bit of abstraction in the frustum check depending on camera type. (For example: a cubemap renderer could just return true 😉.)

@toji FYI...

https://github.com/mrdoob/three.js/blob/dev/examples/js/controls/VRControls.js#L40-L44

On a Nexus 5, Chrome 57 without Google VR Services installed resolves the promise with [] (an empty array) instead of executing the catch().

That's the expected behavior.

@toji ok! Good to know!

@mrdoob Continuing the suggestion from @toji: the frustum tests for all cameras could be performed to set up a bitmask per drawable. That mask would then control whether each drawable should render in the camera loop in renderObjects. That means a limit of 32 cameras, but I think that's reasonable.
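A sketch of that idea (the visibilityMask property and helper are hypothetical):

// Per-drawable bitmask: bit i is set if the object intersects camera i's
// frustum. A 32-bit mask caps the array at 32 cameras.
function updateVisibilityMasks( renderList, frustums ) {

    for ( var i = 0; i < renderList.length; i ++ ) {

        var object = renderList[ i ];
        var mask = 0;

        for ( var j = 0; j < frustums.length; j ++ ) {

            if ( frustums[ j ].intersectsObject( object ) ) mask |= 1 << j;

        }

        object.visibilityMask = mask;

    }

}

// Later, in the per-camera loop inside renderObjects:
// if ( object.visibilityMask & ( 1 << cameraIndex ) ) renderObject( object, ... );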

@toji looking at your code (https://github.com/toji/three.js/commit/a8042b518197b25b7c3ee66c105cbbdb4bbe14f6) I think it's a nice starting point to kick off the implementation.

Would you mind pushing a PR so we could keep the discussion there?
