Three.js: CubeCamera washes out the colors

Created on 21 Mar 2020 · 36 comments · Source: mrdoob/three.js

Go to the codepen from #18840 and paste in the following code (sorry, I don't have an account and can't save codepens):

```js
import {
  AxesHelper,
  Color,
  CubeCamera,
  MeshStandardMaterial,
  Mesh,
  PerspectiveCamera,
  PMREMGenerator,
  sRGBEncoding,
  SphereGeometry,
  Scene,
  UnsignedByteType,
  WebGLRenderer
} from "https://cdn.jsdelivr.net/npm/[email protected]/build/three.module.js";

import { OrbitControls } from "https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/controls/OrbitControls.js";

import { RGBELoader } from "https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/loaders/RGBELoader.js";

const envMapURL =
  "https://rawcdn.githack.com/mrdoob/three.js/3a7b71e0f47fb105e1ecd63b152f1c09fac6d015/examples/textures/equirectangular/royal_esplanade_1k.hdr";

async function init() {
  const renderer = new WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.outputEncoding = sRGBEncoding;

  document.body.appendChild(renderer.domElement);

  const pmremGenerator = new PMREMGenerator(renderer);
  pmremGenerator.compileEquirectangularShader();

  const scene = new Scene();
  scene.background = new Color(0x4285f4);

  const camera = new PerspectiveCamera(
    40,
    window.innerWidth / window.innerHeight,
    1,
    10000
  );
  camera.position.set(20, 20, 20);

  new OrbitControls(camera, renderer.domElement);

  const geometry = new SphereGeometry(5, 12, 8);

  const material = new MeshStandardMaterial({
    // create all colors this way
    // if you are trying to match CSS or photo colors
    color: new Color(0x4285f4).convertSRGBToLinear(),
    metalness: 1, roughness: 0
  });

  const mesh = new Mesh(geometry, material);

  const envMap = await loadEnvironment();

  const envMapPMREM = createPMREM(pmremGenerator, envMap);
  scene.background = envMapPMREM;
  scene.environment = envMapPMREM;

  const cubeCamera = new CubeCamera(1, 2, 512);
  cubeCamera.update(renderer, scene);
  scene.background = cubeCamera.renderTarget;

  scene.add(mesh);
  scene.add(new AxesHelper(20));

  renderer.setAnimationLoop(() => {
    renderer.render(scene, camera);
  });

  window.addEventListener("resize", () => {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight);
  });
}

async function loadEnvironment() {
  const loader = new RGBELoader().setDataType(UnsignedByteType);

  return new Promise((resolve, reject) => {
    loader.load(envMapURL, data => resolve(data), null, reject);
  });
}

function createPMREM(pmremGenerator, texture) {
  const envMap = pmremGenerator.fromEquirectangular(texture).texture;
  texture.dispose();
  pmremGenerator.dispose();

  return envMap;
}

init();
```

You will see this:

*Screenshot (2020-03-21 at 18:50:18): washed-out colors*

Expected result:

*Screenshot (2020-03-21 at 18:49:18)*

(This is what you get by commenting out `scene.background = cubeCamera.renderTarget;`.)

Suggestion

Most helpful comment

```js
cubeCamera.update( renderer, scene );

cubeCamera.renderTarget.texture.encoding = sRGBEncoding;

scene.background = cubeCamera.renderTarget;
```

All 36 comments

```js
cubeCamera.update( renderer, scene );

cubeCamera.renderTarget.texture.encoding = sRGBEncoding;

scene.background = cubeCamera.renderTarget;
```

@WestLangley OK, but why can't it do this by itself? cubeCamera.update() has the renderer passed in, with its renderer.outputEncoding set, right? If PMREMGenerator can figure it out, why can't CubeCamera?

renderer.outputEncoding is the desired encoding when rendering to the default drawing buffer.

renderTarget.texture.encoding is the assumed encoding when using a render target as a texture.

Which of those two properties should specify the desired encoding when rendering to a render target?

I guess the one that would prevent me from opening this ticket? I am a stupid user that does not want to think about encodings (obviously) and expects three.js to just work.

I suppose the renderer could override outputEncoding with renderTarget.texture.encoding when a renderTarget is set, and then this case would "just work", right? If you're rendering into a renderTarget with LinearEncoding set, it seems like a reasonable assumption that you'd want the contents to be encoded as linear. It might cause other points of confusion, though.

I think @gkjohnson's suggestion is perfectly reasonable.
This is pretty much what PMREMGenerator does, and I'm in favor of keeping things consistent.

Plus, this is a three-liner: save the current encoding, use the render target's encoding, and revert the encoding back to its previous state.
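The three-liner above could be wrapped up like this (a minimal sketch, not part of the three.js API — `withOutputEncoding` is a hypothetical helper, and it assumes a renderer-like object exposing an `outputEncoding` property, as WebGLRenderer does in this version):

```js
// Hypothetical helper sketching the save/use/restore pattern described above.
function withOutputEncoding(renderer, encoding, renderFn) {
  const saved = renderer.outputEncoding; // 1. save the current encoding
  renderer.outputEncoding = encoding;    // 2. use the render target's encoding
  try {
    renderFn();                          //    render into the target
  } finally {
    renderer.outputEncoding = saved;     // 3. revert to the previous state
  }
}
```

The `try/finally` makes sure the renderer's state is restored even if the render callback throws, so later renders to the canvas are unaffected.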

I was doing some experiments and noticed something a little more troublesome.

In the webgl_loader_gltf.html example, just after setting scene.background to use the PMREM envMap, include the following line:

```js
scene.background = envMap;
scene.environment = envMap;

var cubeCamera = new THREE.CubeCamera().update( renderer, scene ); // include this line.
```

You'll notice that, even though we are not actually using the cube render target texture as the background, just generating it causes the background encoding to change.

~This is happening because, when we generate the CubeRenderTarget with update(), WebGLBackground gets generated for the first time, but since we are rendering to a render target, its encoding (LinearEncoding) get used instead of the renderer.outputEncoding. When we go back to the canvas default render target to render the scene. The scene.background is still the same, so it doesn't update the encoding.~

~IMO, this just reinforces the need for us to save the renderer settings and re-apply them later, which would solve both of these problem. We might even enforce LinearEncoding as default for CubeCamera given that its render target is always gonna be used as an intermediary step.~

Edit: This information was just wrong, sorry about that.

This is pretty much what PMREMGenerator does and I'm in favor of keeping things consistent.
Plus, this is 3 liner. Save current encoding, use renderTarget encoding and revert encoding back to previous state.

I was actually imagining it could be built into WebGLRenderer for consistency. It might also help alleviate some of the confusion / misuse of WebGLRenderer.outputEncoding when rendering with EffectComposer, too.

This is happening because, when we generate the CubeRenderTarget with update(), WebGLBackground gets generated for the first time, but since we are rendering to a render target, its encoding (LinearEncoding) get used instead of the renderer.outputEncoding.

I'm not sure I follow this. Is the active render target encoding already being used to override outputEncoding somewhere?

I'm not sure I follow this.

Don't worry, I managed to even confuse myself this time. I gave the correct information, but I reached the wrong conclusion. Here's the revised version:

Is the active render target encoding already being used to override outputEncoding somewhere?

Yes, we already do that.

https://github.com/mrdoob/three.js/blob/7c1424c5819ab622a346dd630ee4e6431388021e/src/renderers/webgl/WebGLPrograms.js#L195

So, it turns out the source of the problem is not really with saving and restoring the renderer.outputEncoding after rendering to a render target.

The real problem here is that background can get generated using the render target encoding, but the WebGLProperties saved for the background program doesn't consider that fact. So we save the renderer encoding ( wrong ) information and the background doesn't get updated afterwards. The problem is this line:

https://github.com/mrdoob/three.js/blob/7c1424c5819ab622a346dd630ee4e6431388021e/src/renderers/WebGLRenderer.js#L1523

In other words, we need to make sure the properties we save are the same as the program data. Doing so fixes the issue, and we never need to worry about saving and restoring the renderer encoding here or anywhere else 😁

This affects every material, I'm surprised it wasn't noticed before.

I suppose the renderer could override outputEncoding with renderTarget.texture.encoding when renderTarget is set and then this case would "just work", right

Nope. The user is free to use any shader he wants to when rendering to a render target.

I asked a rhetorical question in https://github.com/mrdoob/three.js/issues/18942#issuecomment-602148402, and here is my answer:

.outputEncoding is a property of the renderer, not the drawing buffer, so it should apply regardless of the render target. And of course, it will only apply for built-in materials -- and custom materials having the proper shader chunks.

renderTarget.texture.encoding should be used by the renderer when reading the render target texture, not when writing to it.

I think this convention is just about as simple as one can make it.


In other words, .outputEncoding is for output only. That is where it gets its name from.

.encoding is for input only, and is used for proper decoding in the shader.
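The washed-out colors in this issue are exactly what happens when the write-side and read-side conventions disagree. Here is a plain-JS illustration (the transfer functions are the standard sRGB ↔ linear formulas, mirroring what the shader chunks do; the variable names are mine):

```js
// Standard sRGB transfer functions (per-channel, values in [0, 1]).
function linearToSRGB(x) {
  return x <= 0.0031308 ? x * 12.92 : 1.055 * Math.pow(x, 1 / 2.4) - 0.055;
}
function sRGBToLinear(x) {
  return x <= 0.04045 ? x / 12.92 : Math.pow((x + 0.055) / 1.055, 2.4);
}

const linearValue = 0.5;                  // shader output in linear space
const stored = linearToSRGB(linearValue); // written with sRGB output encoding
const decoded = sRGBToLinear(stored);     // read back assuming texture.encoding = sRGB

// decoded ≈ 0.5 again. If the reader instead assumed LinearEncoding, it would
// take `stored` (≈ 0.735) at face value — brighter than intended, i.e. the
// washed-out look reported in this issue.
```

So whichever convention wins, the encoding the renderer writes with and the encoding tagged on the texture for reading have to match.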

Nope. The user is free to use any shader he wants to when rendering to a render target.

Right, but when we're talking about the built-in materials, the input and output encodings are respected and it could "just work".

.outputEncoding is a property of the renderer, not the drawing buffer, so it should apply regardless of the render target. And of course, it will only apply for built-in materials -- and custom materials having the proper shader chunks.

renderTarget.texture.encoding should be used by the renderer when reading the render target texture, not when writing to it.

Based on @sciecode's comment it looks like this is not actually how it works right now. When rendering into a render target the renderTarget.texture.encoding overrides outputEncoding, which wasn't documented, either.

Guys, it has been like this for 3 years at least. The design does not work.

In fact, programCacheKey specifies output encoding twice -- using two different definitions. It is a mess.

Based on @sciecode's comment it looks like this is not actually how it works right now

I specified how it _should_ be implemented. Not how it _is_ implemented.

https://github.com/mrdoob/three.js/blob/7c1424c5819ab622a346dd630ee4e6431388021e/src/renderers/webgl/WebGLPrograms.js#L195

A variation of this line dates back to 2016 from #8232

But the check for recompiling based on encoding was only added in 2019 in (#18127)

https://github.com/mrdoob/three.js/blob/48072f925b7e5a5968695abc01482eea17256e48/src/renderers/WebGLRenderer.js#L1677-L1681

https://github.com/mrdoob/three.js/blob/7c1424c5819ab622a346dd630ee4e6431388021e/src/renderers/WebGLRenderer.js#L1523

Unfortunately, it failed to account for the render target encoding. I think that's exactly why the design didn't work, because we were doing it wrong.

".outputEncoding is a property of the renderer, not the drawing buffer, so it should apply regardless of the render target."

OutputEncoding is meant to be used primarily when rendering to the screen where there is no render target. I believe we also added an "encoding" that can be set on a RenderTarget which should be the encoding used when writing to the RenderTarget.

Thus if you are rendering to a RenderTarget, you should use the encoding of that render target when encoding your output. And use that encoding when reading from it.

"renderTarget.texture.encoding should be used by the renderer when reading the render target texture, not when writing to it."

It should be used for both writing and reading, no?

One thing we could do is have a RenderTarget that actually represents the screen, with parameters we can set on it directly. When we render to the screen, we use this specially set-up RenderTarget and its parameters. This would get rid of special globals on the WebGLRenderer.

I don't think it makes sense to keep two separate encoding parameters for reading from and writing to a render target. This would unnecessarily complicate things and open up space for wrong behavior.

There's no scenario where we want to write with one encoding and read with a different one, right?

With that being said, I'm not particularly fond of having to control said encoding with the current API renderTarget.texture.encoding, I think it's worth investing time into changing this into something consistent across all usages.

renderTarget.setOutputEncoding() and renderer.setOutputEncoding() feels correct to me.
Or as @bhouston suggested, maybe something like renderer.screenTarget.setOutputEncoding().

I don't think it makes sense to keep two separated encoding parameters for reading and writing from/to a render target.

But then why do you suggest an explicit output only encoding later on in your comment?

[renderTarget.texture.encoding] should be used for both writing and reading, no?

@bhouston That is what you and I were thinking three years ago, but for the reasons I stated above, I think the so-called spec should be changed.

And why should renderer.outputEncoding apply to only the drawing buffer, while other properties such as clear color and auto-clear do not?

Also, renderer.outputEncoding impacts code-injection for the built-in _materials_. Why should a material render differently when the render target is changed?

But then why do you suggest an explicit output only encoding later on in your comment?

I had @Mugen87's suggestion https://github.com/mrdoob/three.js/pull/19129#issuecomment-613297301 in mind when I wrote it. I just used the same name, but it would probably have been better if I had written setEncoding().

@bhouston That is what you and I were thinking three years ago, but for the reasons I stated above, I think the so-called spec should be changed.

I like the original formulation, and I do not think it should be changed. I find it elegant.

I think that the encoding of the destination, either the render target encoding or if rendering to the screen, outputEncoding, should be exposed to the shader, probably under the name "outputEncoding" because this is the encoding the output of the shader should undergo.

It is up to the shader implementation to apply the specified encoding or not. If it doesn't apply the encoding as specified, well, it may ignore the encoding, but so be it. It is no different than a shader ignoring the viewMatrix -- you can, but it is likely wrong, unless you really know what you are doing.

And why should renderer.outputEncoding apply to only the drawing buffer, while other properties such as clear color and auto-clear do not?

I hate auto-clear and clear color on the renderer so much. Have you seen our post effects? They have to constantly save and restore clear color and auto-clear state, and I have spent so much time tracking down bugs in that area:

https://github.com/mrdoob/three.js/blob/dev/examples/js/postprocessing/SAOPass.js#L196
https://github.com/mrdoob/three.js/blob/dev/examples/js/postprocessing/SAOPass.js#L232
https://github.com/mrdoob/three.js/blob/dev/examples/js/postprocessing/SAOPass.js#L315
https://github.com/mrdoob/three.js/blob/dev/examples/js/postprocessing/SAOPass.js#L335
https://github.com/mrdoob/three.js/blob/dev/examples/js/postprocessing/SAOPass.js#L352

If there was a way to move clearColor, and autoClear to render targets or material or anything, that may be really nice. I hate them on WebGLRenderer -- it is basically crappy global state. I never thought that there was a solution to this problem until now. But what is the solution, I am not sure.
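The save-and-restore dance those passes have to do looks roughly like this (a hypothetical helper, not a three.js API; the real WebGLRenderer uses getClearColor()/setClearColor() accessors, but plain properties are assumed here to keep the sketch self-contained):

```js
// Hypothetical helper: snapshot the renderer's global clear state, run a
// render callback with overridden values, then restore the snapshot.
function withClearState(renderer, { clearColor, clearAlpha, autoClear }, renderFn) {
  const savedColor = renderer.clearColor;
  const savedAlpha = renderer.clearAlpha;
  const savedAutoClear = renderer.autoClear;

  renderer.clearColor = clearColor;
  renderer.clearAlpha = clearAlpha;
  renderer.autoClear = autoClear;
  try {
    renderFn();
  } finally {
    // Restore the global state so later passes see what they expect.
    renderer.clearColor = savedColor;
    renderer.clearAlpha = savedAlpha;
    renderer.autoClear = savedAutoClear;
  }
}
```

Every pass repeating this by hand is precisely the "crappy global state" complaint: move the state onto the render target (or a screen target) and the boilerplate disappears.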

Also, renderer.outputEncoding impacts code-injection for the built-in materials. Why should a material render differently when the render target is changed?

I think that is appropriate behavior. Just like the material may be recompiled if there is a different number of lights in the scene, right? Materials are compiled in a context-dependent fashion.

If this has time impacts, we could make the code injection independent of the specific encoding by changing to a switch statement driven by an incoming uniform rather than a define.
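The performance trade-off can be sketched in plain JS (the cache and function names here are illustrative, not the actual WebGLPrograms internals): with a define, the output encoding is baked into the program cache key, so alternating encodings forces recompiles; with a uniform, one program is reused and only a uniform value changes.

```js
// Simulated program cache contrasting define-driven vs uniform-driven encoding.
const programCache = new Map();
let compileCount = 0;

function getProgramDefineBased(material, outputEncoding) {
  const key = `${material}|encoding:${outputEncoding}`; // encoding baked into key
  if (!programCache.has(key)) {
    compileCount++;                                     // simulated shader compile
    programCache.set(key, { key, uniforms: {} });
  }
  return programCache.get(key);
}

function getProgramUniformBased(material, outputEncoding) {
  const key = material;                                 // encoding NOT in the key
  if (!programCache.has(key)) {
    compileCount++;
    programCache.set(key, { key, uniforms: {} });
  }
  const program = programCache.get(key);
  program.uniforms.outputEncoding = outputEncoding;     // cheap per-frame update
  return program;
}
```

With the define-based key, rendering the same material to a linear target and then to the sRGB canvas compiles two programs (and, as noted below, can cause per-frame recompiles when targets alternate); the uniform-based variant compiles once, at the cost of a runtime branch in the shader.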

If you want to propose a major change to the design along any of those lines, I am fine with it.

We are having a related problem of materials recompiling every frame when render targets are alternated, so we have to fix that issue anyway.

And we probably should decouple encoding from color-space.

And we need to support a proper HDR workflow.

So the sooner we can nail this down, the better.

I would like to get rid of code-injection, TBH.

If you want to propose a major change to the design along any of those lines, I am fine with it.

I am not sure what design we should use for clearColor and clearAlpha. But I think the encoding on the renderTarget texture should stay where it is.

We are having a related problem of materials recompiling every frame when render targets are alternated, so we have to fix that issue anyway.

If they have the same settings, they shouldn't recompile, no?

And we probably should decouple encoding from color-space.

Yes, that is true. AdobeRGB, sRGB, gamma, and linear are color spaces. RAW, RGBM, RGBE, and LogLuv are encodings.

And we need to support a proper HDR workflow.

If you encode with linear and you use Floating Point textures it should work as is. What is the limitation Three.js has with HDR?

I would like to get rid of code-injection, TBH.

I think switching from code-injection to switch statements is fine, as long as there isn't a significant speed impact.

What is the limitation Three.js has with HDR?

Post-processing should be in linear space. So it must occur prior to tone mapping and colorspace.

We should model the ACES Imaging Pipeline, I think.

If they have the same settings, they shouldn't recompile no?

Every time a material is rendered with a different output encoding, it recompiles first. This can happen when rendering to linear space prior to rendering to the screen, and it can happen with mirrors or cube cameras.

Post-processing should be in linear space. So it must occur prior to tone mapping and colorspace.

This is sort of okay. I find we can get the quality we need with the current pipeline, but then again we have made modifications to three.js that are not in mainline three.js.

I like the original approach @WestLangley outlined here, considering it requires no major API changes.

However, thinking about @bhouston's suggestion, always defining the output encoding at the renderer level would go in the wrong direction. If I understand your goal correctly, we should not configure the WebGL state via WebGLRenderer, right? It's less error-prone if you can define properties at the material or render-target level and then let the renderer automatically configure the state based on these values.

@Mugen87 Did I edit your post to quote me as you intended? :-) If not, sorry. I am trying to understand what you are saying...

we should not configure the WebGL state "behind-the-back" of WebGLRenderer, right?

I meant via an instance of WebGLRenderer ^^. So stuff like:

```js
renderer.setClearColor( color );
```

We should model the ACES Imaging Pipeline, I think.

We could have this as an option. Remember that floating point buffers are really slow on mobile and even now only roughly 60% of devices support rendering into a floating point buffer: https://webglstats.com/webgl/extension/EXT_color_buffer_half_float https://webglstats.com/webgl/extension/WEBGL_color_buffer_float

The current approach, which renders in HDR internally and then tone maps immediately into an 8-bit output buffer, was developed primarily to get good-looking results while working around device limitations. It has the side benefit of being wickedly fast (because it uses less memory bandwidth than floating-point textures, and because it skips separate post-processing passes).

We should model the ACES Imaging Pipeline, I think.

We could have this as an option.

I would not remove the possibility of the current pipeline.

(Credit to @bhouston for suggesting the current pipeline. It involves applying tone mapping, color-space correction, and encoding in the shader itself, and avoids the need for a post-processing pass in those cases.)

However, we need to think long-term. I intend to provide an example so the design issues can be discussed. I hope it is well-received.

I believe we should open a new issue to discuss the possibility of changing the current encoding pipeline, and whether it is worth moving the encoding to uniforms instead of defines to avoid the need for recompilation.

It might be worth considering finally separating encodings from color spaces and making the corresponding API changes.

