Three.js: DepthTextures should not need to be explicitly constructed

Created on 10 Mar 2020 · 11 comments · Source: mrdoob/three.js

DepthTextures require explicit construction and assignment (see https://github.com/mrdoob/three.js/blob/b11f897812a8a48bcd81e9bd46785d07939ec59e/examples/webgl_depth_texture.html#L117), which is bizarrely inconsistent with how the color texture is constructed automatically.
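For reference, the setup that example walks through looks roughly like this (a trimmed sketch; the exact values used in the example may differ):

```js
// Current approach: the color texture comes with the render target,
// but the depth texture must be created and attached by hand.
const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;

target.depthBuffer = true;
target.depthTexture = new THREE.DepthTexture();       // explicit construction...
target.depthTexture.format = THREE.DepthFormat;       // ...and configuration
target.depthTexture.type = THREE.UnsignedShortType;
```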

This would only make sense if https://github.com/mrdoob/three.js/issues/15440 were on master; as it stands, attempting to assign a depth texture to multiple render targets results in silent undefined behavior.

Suggestion

Most helpful comment

Opinion: my difficulty figuring out how to use three's render targets has far exceeded the effort of using raw WebGL framebuffers.
Granted, this is my perspective as a desktop GL engine developer.

All 11 comments

DepthTextures require explicit construction and assignment

It was implemented like this on purpose since the respective depth texture extension is not supported by all devices. The default approach for generating depth textures is still RGBA depth packing.
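For context, the RGBA depth packing path means rendering the scene a second time with a depth material whose output packs depth into the four 8-bit color channels. A minimal sketch of that separate pass:

```js
// Separate depth pass: pack depth into an RGBA8 target instead of a depth texture.
const depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking;
depthMaterial.blending = THREE.NoBlending;

const depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

scene.overrideMaterial = depthMaterial;   // force every object through the depth material
renderer.setRenderTarget( depthTarget );
renderer.render( scene, camera );
scene.overrideMaterial = null;
```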

The default approach for generating depth textures is still RGBA depth packing

Wait, so a three.js DepthTexture is not meant for use as a framebuffer depth attachment?
My understanding of the spec is that any depth attachment must use a special depth or depth-stencil internalformat. Or is that only required for shadow samplers?

Opinion: my difficulty figuring out how to use three's render targets has far exceeded the effort of using raw WebGL framebuffers.
Granted, this is my perspective as a desktop GL engine developer.

Yes, you use DepthTexture to save depth information when rendering to a render target. In other words, an instance of DepthTexture represents the depth attachment of a custom framebuffer. However, there are some issues with WebGL 1 and the respective WEBGL_depth_texture extension: not all devices support it, and the floating point precision (at least in WebGL 1) is limited.

Hence, the mentioned RGBA depth packing is more reliable, although it requires a separate render pass. That extra pass is not always necessary, e.g. when rendering a DoF effect: with DepthTexture, you can produce the beauty and depth passes with a single render call.

Because of this circumstance, the depth texture configuration has to be done by the developer.
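To illustrate the single-pass case described above, a sketch (here `postMaterial`, `postScene` and `postCamera` are assumed placeholders for a full-screen post-processing pass that samples both textures):

```js
// One render call fills both the color attachment and the depth texture.
const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.depthTexture = new THREE.DepthTexture();

renderer.setRenderTarget( target );
renderer.render( scene, camera );            // beauty + depth in one pass

// A post-processing pass (e.g. DoF) can then read both attachments.
postMaterial.uniforms.tDiffuse.value = target.texture;
postMaterial.uniforms.tDepth.value = target.depthTexture;
renderer.setRenderTarget( null );
renderer.render( postScene, postCamera );
```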

Is it against three's standards to alter behavior based on available extensions? Would an auto-configuring depth setup be a candidate for three's examples?

I don't think there is a "defined standard", but other features like WebGL 2 are also not automatically enabled. In this case, devs have to create the WebGL 2 rendering context themselves and pass it into the renderer.
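For comparison, opting into WebGL 2 at the time looks roughly like this (a sketch following the documented pattern):

```js
// WebGL 2 is not enabled automatically: create the context yourself
// and hand it to the renderer.
const canvas = document.createElement( 'canvas' );
const context = canvas.getContext( 'webgl2' );
const renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );
```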

I see. A higher level library would handle this all automatically, but that would require pipeline modifications and hidden behavior, which don't fit with three's mid-level role.

But I would still like to see a syntax improvement here, since having to mutate a field for an object I just constructed, instead of the initial constructor doing so, feels unwieldy. And I would have never figured out how to do this if not for the depth texture example code.

I suggest this be done via an option in the render target constructor, to avoid mutation:
e.g. a boolean "depthRenderTexture" (default false) that determines whether to use a depth render texture or a render buffer. If false, RenderTarget.depthTexture is null; otherwise it is constructed along with the RenderTarget. A sketch of the idea follows below.
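Hypothetically, usage could look like this (the `depthRenderTexture` option does not exist in the current API; this is only a sketch of the proposal):

```js
// Proposed: opt in at construction time instead of mutating afterwards.
const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
	depthRenderTexture: true   // hypothetical option from this suggestion
} );
// target.depthTexture would then already be a DepthTexture instance.
```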

I suggest this be done via an option in the render target constructor, to avoid mutation:
e.g. a boolean "depthRenderTexture" (default false) that determines whether to use a depth render texture or a render buffer. If false, RenderTarget.depthTexture is null; otherwise it is constructed along with the RenderTarget.

The problem is that you might still want to configure the depth texture, for example its type. In three.js, it's typical that users compose complex objects out of more basic building blocks, e.g. you create a material and a geometry in order to create a mesh. The current depth texture approach just follows this style. And at least from my point of view, I see no need for a change right now. The example code together with the documentation should be sufficient to figure out how to develop with this API.
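For example, a plain boolean would not cover a case like this (sketch):

```js
// Even with an automatically created depth texture you may still want to pick its type.
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.type = THREE.UnsignedIntType;   // higher-precision depth than the UnsignedShortType default
```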

Let's see how others evaluate your suggestion.

Is that not inconsistent with how the color buffer is configured?
I.e., if the developer needs WEBGL_draw_buffers, then they need to use raw WebGL.

Yes, WEBGL_draw_buffers is not yet supported.

Just to mention, the other bit here is stencil buffers, which aren't used the same way depth buffers represented as textures would be (packing, formats, precision, etc.). They're sort of coupled with the depth, the way I understand it, but can always be guaranteed at 8 bits?
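For what it's worth, a combined depth-stencil attachment can be requested like this (sketch):

```js
// Depth and stencil share one attachment; the stencil portion is fixed at 8 bits.
const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.stencilBuffer = true;
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.format = THREE.DepthStencilFormat;
target.depthTexture.type = THREE.UnsignedInt248Type;
```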
