Originally submitted to StackOverflow here: http://stackoverflow.com/questions/37733259/logarithmic-depth-buffer-orthographic-camera
The issue occurs when combining the logarithmic depth buffer option with an orthographic camera.
When using the standard linear depth buffer, both perspective and orthographic cameras work as expected.
When using a logarithmic depth buffer, only the perspective camera works as expected. Using an orthographic camera results in geometry bleeding through.
A jsfiddle highlighting the problem (also linked in the StackOverflow post) can be found here: http://jsfiddle.net/TheJim01/05up96m0/
[x] Tested back as far as r71
[x] Chrome
[ ] IE11 (IE does not support the EXT_frag_depth extension)
[x] Windows 7 (only one tested)
I've seen the issue on two desktops with an NVIDIA Quadro K600 (4GB) and on a laptop with a Quadro NVS 160M (2GB).
I'm experiencing exactly the same problem. I have a rather complex application that allows switching between perspective and orthographic cameras. My WebGLRenderer has the logarithmic depth buffer set to true, since this works perfectly with a perspective camera (the default), but whenever I change to orthographic, my 3D model becomes a mess, showing glitches and typical z-fighting behavior. I cannot reset or renew my renderer on camera change, since I have many views sharing the same renderer, and renewing it would cause other trouble. Is there any chance this issue may deserve some attention? This bug is giving me a lot of problems. Thanks in advance.
This is not a bug. It is a feature request, if anything.
It is not even clear to me if a logarithmic depth buffer makes sense when using an orthographic camera.
Perhaps it's a usage/understanding error.
I stumbled on this property because, like @davdtm, I was trying to switch between camera types. In my naive understanding, the logarithmic depth buffer was a means of eliminating z-fighting when viewing a large scene (even though my test scene was very small). That led to the assumption that what's good for one camera must be good for the other.
Perhaps this isn't the place for this kind of discussion, but I'd be interested to know why this doesn't "make sense" for an orthographic camera (to correct my understanding of the intent). I am by no means fluent in the more mathy side of GL, but I'm never afraid of expanding my understanding, even if I can't immediately apply it. 😄
@TheJim01 Refer to the depth precision plots in this post.
For a perspective camera, there is significantly more depth precision close to the near plane than there is near the far plane. By setting `logarithmicDepthBuffer: true`, the response is made more linear, and precision is more uniform across all depths.
For an orthographic camera, the response is linear to begin with.
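To make that concrete, here is a minimal sketch (not from the thread; the near/far values are arbitrary) of the standard window-space depth mappings for the two projections:

```js
// Window-space depth in [0, 1] for a point at view-space distance z.
const near = 0.1, far = 1000;

const perspectiveDepth = ( z ) => ( 1 / near - 1 / z ) / ( 1 / near - 1 / far );
const orthographicDepth = ( z ) => ( z - near ) / ( far - near );

for ( const z of [ 1, 10, 100, 1000 ] ) {

	console.log( z, perspectiveDepth( z ).toFixed( 4 ), orthographicDepth( z ).toFixed( 4 ) );

}

// Perspective depth is already ~0.90 at z = 1 and ~0.99 at z = 10, so nearly
// the whole [0, 1] range is spent close to the near plane. Orthographic depth
// grows linearly, so its precision is already uniform without any correction.
```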
I understand. Thanks for the excellent reference.
If I'm reading the code correctly, this isn't actually a context property (so it wouldn't need a new WebGL context), but is a flag to enable the logarithmic depth buffer calculations in the shaders. Would you agree with the following?
Workaround:

- When switching from `PerspectiveCamera` to `OrthographicCamera`: disable the option (`renderer.state.disable("logarithmicDepthBuffer");`)
- When switching from `OrthographicCamera` to `PerspectiveCamera`: enable the option (`renderer.state.enable("logarithmicDepthBuffer");`)
- In both cases, follow up by looping through the scene and setting all materials to recompile (`node.material.needsUpdate = true;`), as sketched below.
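A sketch of that recompile step, assuming a `scene` graph to traverse (the `renderer.state.enable`/`disable` calls above are speculative, so only the material-flagging part is shown here):

```js
scene.traverse( ( node ) => {

	// node.material may be a single material or an array of materials.
	const materials = Array.isArray( node.material ) ? node.material : [ node.material ];

	for ( const material of materials ) {

		if ( material ) material.needsUpdate = true;

	}

} );
```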
Fix:
I don't think there could be a proper "fix" for this, because even if you could respond nicely to a change in camera type (for example passing the camera mode as a uniform), you'd still need to update all the shaders in the scene.
If that sounds correct, then I'm okay with closing this as no-fix.
Actually, I am now not certain if the reasoning in my previous comment is correct...
If you want to take the time to understand the code in detail, I think that would be beneficial.
Hi,
I had the same issue (and also no clue why the depth values were ignored), but I was able to fix it by changing a few bits. It now works without recompiling the shaders and without changing the state/capabilities.
(I'm sorry for not testing it in a more general way, but I have no time right now to test for side effects.)
```js
// In WebGLRenderer, where the program uniforms are set:
if ( capabilities.logarithmicDepthBuffer ) {

	if ( camera.isOrthographicCamera ) {

		// Negative sentinel: tells the fragment shader to skip the log-depth write.
		p_uniforms.setValue( _gl, 'logDepthBufFC', - 1 );

	} else {

		p_uniforms.setValue( _gl, 'logDepthBufFC', 2.0 / ( Math.log( camera.far + 1.0 ) / Math.LN2 ) );

	}

}
```
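The negative sentinel overloads the existing `logDepthBufFC` uniform as a camera-type flag, so the camera type can change per draw call with no shader recompile and no new uniform. The corresponding fragment shader change then checks its sign: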
```glsl
#if defined( USE_LOGDEPTHBUF ) && defined( USE_LOGDEPTHBUF_EXT )

	if ( logDepthBufFC > 0.0 ) {

		gl_FragDepthEXT = log2( vFragDepth ) * logDepthBufFC * 0.5;

	} else {

		// Orthographic camera: fall back to the standard depth value.
		gl_FragDepthEXT = gl_FragCoord.z;

	}

#endif
```
Cheers,
Josef
I have encountered the same problem
I just ran into this as well. I don't think I fully understand the details of the source of the problem, but I assume it has to do with the fact that z doesn't get divided by large w values when using an orthographic projection. Would it be acceptable to check whether the projection matrix is orthographic in the vertex shader and skip applying logarithmic depth if it is?
The `PointsMaterial` does this in the `points_vert` chunk:
```glsl
#ifdef USE_SIZEATTENUATION

	bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );

	if ( isPerspective ) gl_PointSize *= ( scale / - mvPosition.z );

#endif
```
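For context (a sketch, not from the thread): column 2, row 3 of a perspective projection matrix holds the `-1` that produces the perspective divide, while an orthographic matrix has `0` there, which is why this test works. The same check can be made on the CPU side:

```js
import * as THREE from 'three';

const persp = new THREE.PerspectiveCamera( 50, 1, 0.1, 1000 );
const ortho = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0.1, 1000 );

persp.updateProjectionMatrix();
ortho.updateProjectionMatrix();

// Matrices are stored column-major, so GLSL's projectionMatrix[ 2 ][ 3 ]
// (column 2, row 3) maps to elements[ 2 * 4 + 3 ] = elements[ 11 ].
console.log( persp.projectionMatrix.elements[ 11 ] ); // -1 → perspective
console.log( ortho.projectionMatrix.elements[ 11 ] ); //  0 → orthographic
```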
That's kind of ugly... We should probably define `PERSPECTIVE_CAMERA` and `ORTHOGRAPHIC_CAMERA` when constructing the shaders?
> That's kind of ugly...

Ha, I don't disagree!
> We should probably define `PERSPECTIVE_CAMERA` and `ORTHOGRAPHIC_CAMERA` when constructing the shaders?

If we do this, then materials will have to be recompiled/updated every time the camera changes, right? Would that happen automatically, or would we need to set `needsUpdate` on every material manually, like we do for other `#define` settings?
I have been using the proposed solution since I posted it, and it is actually working well. We are using it in a 3D editor where users can render a scene in 1, 2, or 4 camera viewports simultaneously. Recompiling wouldn't be an option; it would take too long, since we are using the same scene graph for all viewports. I think there are three possible solutions, which can work depending on the use case:

1. Rare camera switching: recompile when needed; use only one shader/material.
2. Only one camera needed: use two shaders/materials and switch depending on the camera.
3. Multiple different cameras: use one shader/material that handles both, depending on a uniform.
> That's kind of ugly...

One of us doesn't think so.
> > That's kind of ugly...
>
> One of us doesn't think so.

You mean that you don't find it ugly?
I don't. But then again, I wrote it. :-)
I don't mind using that piece of code, as long as we only do it in one place.
Dang! Two places...
> I don't mind using that piece of code, as long as we only do it in one place.

So presumably moving the code to a function in the common shader chunk would suffice to abstract it away? I think this is a better approach than using `#define`s, personally.
```glsl
bool isPerspectiveMatrix( mat4 projectionMatrix ) {

	return projectionMatrix[ 2 ][ 3 ] == - 1.0;

}
```
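Since `isPerspectiveMatrix` reads the projection matrix that every shader already receives, the decision happens per draw call on the GPU: no recompile and no extra uniform are needed when the camera type changes.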
Just saw this was closed with a fix merged in, and I have verified it fixed my demo! Thanks, @gkjohnson, @WestLangley, and @mrdoob! Looking forward to trying it full-scale! 😄
Not for me. I think I will be killed by my boss in a few days. You guys will never see me again (good news). Is the performance acceptable? I'm afraid not.