Mapbox-gl-js: CustomLayer + three.js // need for camera position

Created on 10 Oct 2018 · 24 comments · Source: mapbox/mapbox-gl-js

mapbox-gl-js version: v0.50.0-beta.1

Hi, I'm experimenting with the new CustomLayer API and using three.js to augment the map. I'm now facing multiple situations (raycasting, dynamic shaders, etc.) where I need the camera position and rotation (its transformation matrix). If I understand the projectionMatrix (that gets fed to the render function) correctly, that information is somehow included in it. But I don't know how to retrieve it from the 4x4 matrix (if that is even possible).

Do you have a solution for this problem?

It seems to me that interoperability between mapbox-gl-js and three.js is a bit limited, because you work with a complete projection matrix while three.js keeps a reduced projectionMatrix and a viewMatrix separate and multiplies them in the shader. Is this correct?

needs discussion

Most helpful comment

I had success with splitting the provided viewProjectionMatrix into a projectionMatrix and a viewMatrix. Three.js also provides a pleasant "decompose" function to get position, rotation, and scale from the viewMatrix.

That's the code:

render(gl, viewProjectionMatrix) {

    const transform = this.map.transform;
    const camera = this.camera;

    // mat4 comes from gl-matrix; Float64Array keeps full double precision
    const projectionMatrix = new Float64Array(16),
      projectionMatrixI = new Float64Array(16),
      viewMatrix = new Float64Array(16),
      viewMatrixI = new Float64Array(16);

    // rebuild the projection matrix the same way mapbox-gl-js does:
    // https://github.com/mapbox/mapbox-gl-js/blob/master/src/geo/transform.js#L556-L568
    const halfFov = transform._fov / 2;
    const groundAngle = Math.PI / 2 + transform._pitch;
    const topHalfSurfaceDistance = Math.sin(halfFov) * transform.cameraToCenterDistance / Math.sin(Math.PI - groundAngle - halfFov);
    const furthestDistance = Math.cos(Math.PI / 2 - transform._pitch) * topHalfSurfaceDistance + transform.cameraToCenterDistance;
    const farZ = furthestDistance * 1.01;

    mat4.perspective(projectionMatrix, transform._fov, transform.width / transform.height, 1, farZ);

    // viewMatrix = inverse(projectionMatrix) * viewProjectionMatrix
    mat4.invert(projectionMatrixI, projectionMatrix);
    mat4.multiply(viewMatrix, projectionMatrixI, viewProjectionMatrix);
    mat4.invert(viewMatrixI, viewMatrix);

    camera.projectionMatrix = new THREE.Matrix4().fromArray(projectionMatrix);

    // the inverted view matrix is the camera's world matrix;
    // decompose it into position, rotation and scale
    camera.matrix = new THREE.Matrix4().fromArray(viewMatrixI);
    camera.matrix.decompose(camera.position, camera.quaternion, camera.scale);

    this.renderer.state.reset();
    this.renderer.render(this.scene, camera);
  }

The code that calculates projectionMatrix and projectionMatrixI could probably be moved out of the render function; it only has to be recalculated when the pitch or the screen size changes. The Float64Arrays could also be reused.

I would still prefer to get projectionMatrix and viewMatrix passed to the render function as two separate values; the user could multiply them if needed.
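As noted above, the projection matrix depends only on the field of view, the pitch, and the viewport size, so it can indeed be cached and rebuilt only when those change. A rough sketch of that caching in plain JavaScript (CachedProjection is a hypothetical name, not a Mapbox API; perspective is inlined here so the snippet is self-contained and follows gl-matrix's mat4.perspective convention; the inverse could be cached alongside it with mat4.invert):

```javascript
// Column-major perspective matrix, same convention as gl-matrix's mat4.perspective.
function perspective(out, fovy, aspect, near, far) {
  const f = 1 / Math.tan(fovy / 2);
  const nf = 1 / (near - far);
  out.fill(0);
  out[0] = f / aspect;
  out[5] = f;
  out[10] = (far + near) * nf;
  out[11] = -1;
  out[14] = 2 * far * near * nf;
  return out;
}

// Rebuild the projection matrix only when fov, pitch, or viewport size change.
// (cameraToCenterDistance is derived from fov and height, so the key covers it.)
class CachedProjection {
  constructor() {
    this._key = null;
    this.matrix = new Float64Array(16);
    this.rebuilds = 0; // for illustration only
  }
  update(transform) {
    const key = [transform._fov, transform._pitch,
                 transform.width, transform.height].join('|');
    if (key === this._key) return this.matrix; // unchanged: reuse cached matrix
    this._key = key;
    this.rebuilds++;
    // same far-plane derivation as in the render() function above
    const halfFov = transform._fov / 2;
    const groundAngle = Math.PI / 2 + transform._pitch;
    const topHalfSurfaceDistance = Math.sin(halfFov) * transform.cameraToCenterDistance /
      Math.sin(Math.PI - groundAngle - halfFov);
    const farZ = (Math.cos(Math.PI / 2 - transform._pitch) * topHalfSurfaceDistance +
      transform.cameraToCenterDistance) * 1.01;
    return perspective(this.matrix, transform._fov,
      transform.width / transform.height, 1, farZ);
  }
}
```

Calling update once per frame then only pays for a string comparison on frames where nothing relevant changed.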

All 24 comments

Thanks for the feedback, @indus!

Would you want the camera position and rotation as separate values, or as a transformation matrix that incorporates them both? So position would be the mercator coordinates of the camera and rotation would be an Euler angle? Is there anything else you would need that we aren't providing?

Do you have a solution for this problem?

Not yet, but we should. We don't currently store the position of the camera anywhere because we're specifying the camera as what it's looking at instead of where it's looking from. https://github.com/mapbox/mapbox-gl-js/pull/6093 implements a way to convert that to a position.

Is there anything else that we should expose or provide?

If #6093 is useful for this, I'd be happy to pick the PR up and get it merged. Should those methods be exposed on camera/map?

@ansis It is hard to say what would be most useful, but I've looked into the code a bit, and when it comes to three.js I would guess that a separation of the perspective and the transformation would be nice. I'm not sure I want mercator coordinates for the position; rather world coordinates as a vec3 based on the gl context.

In three.js for example the two matrices get multiplied in the shader: https://github.com/mrdoob/three.js/blob/34dc2478c684066257e4e39351731a93c6107ef5/examples/js/shaders/BasicShader.js#L15

I mean, you don't have to do it like three.js does, but it seems pretty common to have three separate matrices (model, view, and projection matrix): http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#the-model-view-and-projection-matrices
According to this article, what the render function gets as a parameter (and what is stored in map.transform) isn't a projectionMatrix but a viewProjectionMatrix.

If you provided them separately, it would be pretty easy to get the position and the rotation of the camera (from the viewMatrix).

Right now it is not so easy, and so far I haven't found a way to do it. One idea I have right now is to multiply the inverse projectionMatrix (the values are in map.transform) with the provided viewProjectionMatrix to get the viewMatrix (just an idea; I'm not good with this stuff ;-)
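That idea is sound. With column-major 4x4 matrices (the WebGL / gl-matrix layout), viewProjection = projection * view, so view = inverse(projection) * viewProjection. A small numeric sketch in plain JavaScript (mat4mul is an illustrative helper, not a Mapbox API; the "projection" here is a trivial diagonal matrix purely so its inverse is obvious):

```javascript
// Minimal column-major 4x4 multiply (same element layout as WebGL / gl-matrix).
function mat4mul(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

const projection  = [2,0,0,0, 0,4,0,0, 0,0,8,0, 0,0,0,1];      // diagonal stand-in
const projectionI = [0.5,0,0,0, 0,0.25,0,0, 0,0,0.125,0, 0,0,0,1]; // its inverse
const view        = [1,0,0,0, 0,1,0,0, 0,0,1,0, 5,6,7,1];      // a translation

const viewProjection = mat4mul(projection, view);
const recovered = mat4mul(projectionI, viewProjection);        // equals `view` again
```

In the real case the projection matrix is not diagonal, so you would use a general inverse such as gl-matrix's mat4.invert, but the identity is the same.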

@ryanhamley While the feature proposed in the PR sounds helpful in general and would be a reasonable addition, I would guess it won't be a big benefit for the sort of implementation problems I tried to describe. In my case I'm not dealing with coordinates but only with gl units (there would be additional transformation steps involved that require more helper functions).
In my understanding, the splendid CustomLayer implementation should stay pretty low-level but easily accessible for WebGL enthusiasts.


Could this be related to why I'm unable to get the THREE.EffectComposer to work correctly with the new custom layers functionality? As soon as I utilise the composer the depth information seems to be lost and the Three.js objects appear on top of the Mapbox objects.

Example with a simple THREE.RenderPass: https://jsfiddle.net/robhawkes/bn49y0oj/
Example using THREE.UnrealBloomPass (doesn't even show up): https://jsfiddle.net/robhawkes/zvjofaxw/

I'm hesitant to open an issue here about it as I'm not sure it's a bug with Mapbox per se, though I'm unsure what the fix is (whether with my Three.js code or otherwise).

Happy to move this to a separate issue if appropriate.

You might be interested in trying out threebox, which provides helpers for doing these camera projections/synchronization with Three.js.

Thanks a lot to @indus and threebox for trying to provide solutions to this problem. I really need the camera position as well, to do some raymarching and display volumetric shaders over a map.

A problem I am facing at the moment is that @indus's solution creates a camera that is scaled by very small values (0.00006103515625 in my current test), with one axis scaled negatively. This makes handling rays in GLSL trickier than it already is.

I tried to un-scale the camera, but this impacts the near clipping plane and objects disappear very quickly. I then tried to provide a near value other than 1 in the call mat4.perspective(projectionMatrix, transform._fov, transform.width / transform.height, near, farZ), but this seems to impact the position of the camera.

I also tried to use threebox, but it comes bundled with its own version of three.js, which doesn't make it easy to integrate with an existing project.

Also, both of those solutions seem to use code extracted from the mapbox libraries, as well as map.transform, which isn't documented, so it feels like this could change at any time.

If it adds anything to the conversation: I could get some raymarching working when displaying a cube as large as a continent, but when working at street level, the raymarching became unstable.

In short, my understanding of projection matrices isn't deep enough at the moment to properly solve the problem. So it would be great if there were a stable way to get a non-scaled camera, with both a projection matrix and a transform matrix.
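One observation that may help here: the scale 0.00006103515625 reported above is exactly 2^-14, i.e. a power-of-two factor (presumably zoom-dependent) between mercator units and gl units. The street-level instability is also consistent with 32-bit float precision: near 0.5, the centre of the 0..1 mercator square, a float32 resolves only about 6e-8 units, roughly 2 to 3 metres of the earth's circumference, so smaller offsets vanish once values reach the GPU. A quick illustrative check in plain JavaScript:

```javascript
// The reported camera scale is an exact power of two:
const scale = Math.pow(2, -14);
// → scale === 0.00006103515625

// Float32 spacing near 0.5 is 2^-24 ≈ 6e-8. An offset far below one
// float32 ulp is rounded away when the value is stored as a 32-bit float:
const offset = 1e-9;                          // well below one ulp at 0.5
const collapsed = Math.fround(0.5 + offset);  // → collapsed === 0.5
```

This is why working relative to a local origin (rather than in absolute mercator coordinates) is the usual workaround for street-level jitter.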

@frading thanks for the feedback! This is something we're interested in fixing. I'll make sure to pull you in on reviewing any work on this. If possible, a full example of the working continent-sized cube or the broken street-level cube would be helpful.

Thanks for your reply! And yes, I'd love to provide you with an example. There are lots of moving parts to extract but I'll give it a go.

If someone else is having trouble getting the three.js Raycaster to work with custom layers, I've used this code successfully:

    itemsAtPosition(pos) {
        const cameraPosition = new THREE.Vector3(0, 0, 0)
            .unproject(this.camera)
            ;

        const mousePos = new THREE.Vector3(pos[0], pos[1], 0.99)
            .unproject(this.camera)
            ;

        const direction = mousePos
            .clone()
            .sub(cameraPosition)
            .normalize()
            ;

        this.raycaster.near = -1;
        this.raycaster.far = 5;
        this.raycaster.ray.set(mousePos, direction);

        return this
            .raycaster
            .intersectObjects(this.scene.children)
            .map(o => o.object)
            ;
    }

    // camera and raycaster are created like so (in constructor):

    this.raycaster = new THREE.Raycaster();
    this.camera = new THREE.Camera();

    // my render method looks like this:

    render(gl, matrix) {
        this.camera.projectionMatrix.elements = matrix;
        this.renderer.state.reset();
        this.renderer.render(this.scene, this.camera);
    }

    // itemsAtPosition is called from outside the layer, using this code (in a mouse move event handler):

    const rect = this.mapContainer.getBoundingClientRect();

    const pos = [
        -1 + 2 * (e.clientX - rect.left) / this.mapContainer.clientWidth,
        1 - 2 * (e.clientY - rect.top) / this.mapContainer.clientHeight
    ];

    const items = this.layer.itemsAtPosition(pos);

EDIT: removed some TypeScript-isms, added camera and raycaster creation and updates.
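The screen-to-NDC conversion in the mouse handler above can be factored into a small helper. A sketch (toNDC is a hypothetical name; it uses the bounding rect's own width/height, which normally equal the container's clientWidth/clientHeight used above):

```javascript
// Map a client-space mouse position into WebGL normalized device
// coordinates: x and y both run from -1 to 1, with y pointing up.
function toNDC(clientX, clientY, rect) {
  return [
    -1 + 2 * (clientX - rect.left) / rect.width,
     1 - 2 * (clientY - rect.top) / rect.height,
  ];
}

// e.g. the centre of the container maps to [0, 0]:
const rect = { left: 10, top: 20, width: 800, height: 600 };
const ndc = toNDC(10 + 400, 20 + 300, rect);  // → [0, 0]
```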


@markusjohnsson
Hi, could you please show the full code of your example? Thanks.

@yuanzhaokang not really as it is part of a larger proprietary application. I've updated the code with some missing parts. I can elaborate further if you have problems.


@markusjohnsson Thanks very much.

@yuanzhaokang check out threebox for functional raycasting out of the box. Here's an example of it to highlight a cube on hover


@peterqliu Thanks. But threebox seems to lose the depth test of the cube:
(screenshots: top view, front view)

@yuanzhaokang that is the intended behavior -- the custom layer draws within the context of the 2D layer stack, regardless of whether the layers themselves are 3D. For proper depth testing, you would put all 3D elements into the same custom layer


@peterqliu Thanks. That helps a lot!

that is the intended behavior -- the custom layer draws within the context of the 2D layer stack, regardless of whether the layers themselves are 3D. For proper depth testing, you would put all 3D elements into the same custom layer

@peterqliu Is it possible to set it to use the same depth space? Like in an example here.


Did you use the above code to solve the problem? I have used a custom layer with three.js, but the Raycaster has a problem: the three.js camera does not follow the mapbox camera. Can you show me some code for this? Thanks.

I use Babylon.js as the WebGL 3D engine and created a custom layer, but I get this error in the render callback:

    Uncaught TypeError: Cannot read property 'transform' of undefined
        at Camera.update (camera.js?a778:37)
        at E.render (babylon.js?98a3:1)
        at BabylonLayer.render (BabylonCustom.js?ef62:60)
        at Object.custom (mapbox-gl.js?d5ed:33)
        at ro.renderLayer (mapbox-gl.js?d5ed:33)
        at ro.render (mapbox-gl.js?d5ed:33)
        at r._render (mapbox-gl.js?d5ed:33)
        at eval (mapbox-gl.js?d5ed:33)

Does the matrix have no transform property? Please help!

This thread has drifted a bit, but given the lack of consensus around what's most useful for camera settings, I'm inclined to funnel more development toward threebox and similar plugins. It feels like the most expedient way to solve this challenge in a standardized way, and get developers up and running with custom layers.

Closing for now, barring better ideas.

@indus do you have a complete example of that render() method working? I like the approach but I'm having trouble getting it to work in practice.

Context is this Stack Overflow question (if others on this thread have ideas, please answer it!)
Raycast in Three.js with only a projection matrix

@danvk Answered :) TL;DR - a full working example. After messing around a bit with coordinate systems, it tracks object picking and returns distance in meters!

Thanks to @markusjohnsson for inspiration!
