I've harped on this before. It seems like setPixelRatio should never have been added to three.js, and further, it should not have been consulted in EffectComposer, as it breaks things.
As an example, there is the rgb halftone example. Before the EffectComposer change, a lo-dpi render and a hi-dpi render matched: the user chooses a radius and the radius is resolution independent. After the change, the results are no longer resolution independent.
Here's low-dpi
Here's hi-dpi before r105
Here's hi-dpi after r105
Now maybe you think that's how it should be, but I'd argue the user chooses a look. Imagine the pixelRatio was 8; there'd be no more halftone at all. The dots would be so small you couldn't see them. I doubt that's what someone who chooses to use halftone wanted. I don't have a Samsung Note8+ to test, but its devicePixelRatio is 4. My iPhoneX has a dpr of 3 and basically you can't see the halftone effect at all, at least not with the default settings.
A similar issue happens with the scanline postprocessing effect in the advanced example. After r105, the higher the dpr, the more scanlines there are.
What this means is that a developer/artist/designer chooses a look and has no idea what it will look like on someone else's machine. Maybe they wanted there to be 100 scanlines because that fit their look, but someone on a 3x hi-dpi display will get a completely different look.
The procedural example with random noise has the same issue.
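The three examples above all come down to the same arithmetic: the render target grows with devicePixelRatio while the effect parameter stays fixed in device pixels. A minimal sketch (hypothetical function and parameter names, not three.js API):

```javascript
// How a dot radius chosen in device pixels plays out at different
// devicePixelRatio values. `dotsAcrossScreen` and `dotRadiusPx` are
// made-up names for illustration; this is plain arithmetic, not three.js.
function dotsAcrossScreen(cssWidth, devicePixelRatio, dotRadiusPx) {
  // The render target is cssWidth * dpr device pixels wide, but the
  // halftone radius is specified in device pixels, so more dots fit.
  const deviceWidth = Math.floor(cssWidth * devicePixelRatio);
  return Math.floor(deviceWidth / (dotRadiusPx * 2));
}

// Same CSS width, same radius the artist chose, very different looks:
const lowDpi = dotsAcrossScreen(800, 1, 4);  // 100 dots across
const highDpi = dotsAcrossScreen(800, 3, 4); // 300 dots across, 3x denser
```

At dpr 8 the same radius yields 800 dots across the same screen width, which is why the effect visually disappears on very hi-dpi devices.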
Trying to auto-handle pixelRatio is not an easily solvable problem, especially for postprocessing, where shaders look at gl_FragCoord, which changes based on the size of the render target.
This is the reason I've argued before that setPixelRatio should be removed. Only the developer knows whether the pixel ratio matters to their app and their effects/shaders, so by forcing them to deal with it manually they become aware of what sizes are being used, since they'd have to calculate and manage those sizes themselves. With setPixelRatio, they are instead fooled into thinking things just work when they don't actually just work.
There are lots more similar issues.
In any case the change to EffectComposer IMO broke things and seems like it should be reverted?
In any case the change to EffectComposer IMO broke things and seems like it should be reverted?
Are you aware that the EffectComposer also used the drawing buffer size from the renderer when it was created, even before the mentioned change? I added 1dfbba57a8efdbfcee0a885dce6804d09ed6ef6e since it fixed a lot of other issues related to resizing. I suggest you revert the PR on a local branch and check how the examples break when resizing the window.
I think a simple revert is the wrong approach. You have to provide a solution that avoids the resolution getting out of sync when resizing. Besides, I've mentioned in another issue that setPixelRatio() can also be considered a scale factor that allows you to vary the resolution of the EffectComposer's internal render targets.
You have to provide a solution that avoids the resolution getting out of sync when resizing
Or you could just make the user pass the size they want. If they want double resolution they ask for double size.
There are all kinds of issues related to trying to do this automagically. Another example: calling WebGLRenderer.readRenderTargetPixels. What size and position do I pass in? I tell three.js one size, then I try to use that size and it fails. If it wasn't trying to apply magic, those problems would disappear.
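The read-back sizing problem can be sketched with plain arithmetic (made-up numbers for illustration; no real API calls):

```javascript
// With setPixelRatio in play, the CSS size the app holds and the
// drawing-buffer size three.js allocated differ, and a pixel read-back
// buffer must be sized from the latter.
const cssWidth = 400, cssHeight = 300, pixelRatio = 2;

// What three.js actually allocates after setPixelRatio(2):
const bufferWidth = Math.floor(cssWidth * pixelRatio);   // 800
const bufferHeight = Math.floor(cssHeight * pixelRatio); // 600

// RGBA read-back needs 4 bytes per *device* pixel:
const pixels = new Uint8Array(bufferWidth * bufferHeight * 4); // 1920000 bytes

// Sizing from the CSS dimensions would under-allocate by pixelRatio squared:
const wrongLength = cssWidth * cssHeight * 4; // 480000 bytes, far too small
```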
I've mentioned in another issue that setPixelRatio() can also be considered as a scale factor that allows you to vary the resolution on the EffectComposers internal render targets.
Why is that a good thing? Why not just pass in the size you want? Again, if you want double width, pass in width * 2. Then you get width * 2 instead of a magic number.
Why is that a good thing? Why not just pass in the size you want? Again, if you want double width, pass in width * 2. Then you get width * 2 instead of a magic number.
It seems you are criticizing the pixel ratio API in general, since EffectComposer just mimics the API of WebGLRenderer in order to stay in sync with it.
If that's the case, I suggest you rephrase the title of your issue or create a new one.
Please no, I need this to control aggressively high device pixel ratio defaults on Android phones. Unless an alternative for renderer.setPixelRatio( isMobile ? Math.min(3, window.devicePixelRatio) : window.devicePixelRatio ); is made available.
@titansoftime , I don't understand how your comment is related to this suggestion. I'm not suggesting THREE.js do anything automatically. In fact I'm suggesting the opposite. It should do less than it's doing now because what it's doing now is opaque and error prone.
Perhaps I read the subject line wrong, but you are proposing to remove setPixelRatio from the API, correct?
If that is the case how can I limit the pixel ratio on devices that default to 4 similar to the way I do in the code snippet I provided?
how can I limit the pixel ratio on devices that default to 4
By asking specifically for whatever resolution you want.
If you ask for 400x300 you get 400x300. Period, end of story. I'm suggesting there should be no pixelRatio knowledge in three.js at all so three.js would never make something higher resolution than you specifically ask for.
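Under that scheme, the dpr cap from the earlier snippet would just become part of the size computation. A sketch (hypothetical helper name, not a proposed three.js API):

```javascript
// Clamp devicePixelRatio yourself, then ask for an explicit size.
// `clampedSize` is a made-up helper for illustration.
function clampedSize(cssWidth, cssHeight, devicePixelRatio, maxRatio) {
  const dpr = Math.min(maxRatio, devicePixelRatio);
  return {
    width: Math.floor(cssWidth * dpr),
    height: Math.floor(cssHeight * dpr),
  };
}

// A phone reporting dpr 4, capped at 3:
const size = clampedSize(400, 300, 4, 3); // { width: 1200, height: 900 }
// Then, under the proposal: renderer.setSize(size.width, size.height, false);
```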
If you do renderer.setSize(400, 300) you get 400x300, period. If you want to render at a higher resolution, then compute a different width and height and call setSize with that size.
Here are the problems with the current system. Assume I have some size defined by width and height
const width = window.innerWidth;
const height = window.innerHeight;
And I use them like this
renderer.setSize(width, height);
renderer.setPixelRatio(window.devicePixelRatio);
Now I want to create a render target the same size as the canvas. I do
const rt = new THREE.WebGLRenderTarget(width, height); // WRONG! 😭
That might or might not be the same size as the canvas
Or I want to read all the pixels in the canvas. Okay
const gl = renderer.context;
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(
0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // WRONG! 😭
or I create a shadertoy like fragment shader
uniform vec2 resolution;
void main() {
vec2 uv = gl_FragCoord.xy / resolution;
}
I then set my uniforms
uniforms = {
resolution: { value: [width, height] }, // WRONG! 😭
};
Or I take screenshot
renderer.domElement.toBlob((blob) => {
const a = document.createElement('a');
document.body.appendChild(a);
a.style.display = 'none';
const url = window.URL.createObjectURL(blob);
a.href = url;
a.download = `screencapture-${width}x${height}.png`; // WRONG! 😭
a.click();
});
or I want to render 1x1 pixel
const camera = new THREE.OrthographicCamera(0, width, height, 0, -1, 1); // WRONG! 😭
or I want to copy the canvas into a 2D canvas
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = width; // WRONG! 😭
ctx.canvas.height = height; // WRONG! 😭
ctx.drawImage(renderer.domElement, 0, 0); // WRONG! 😭
In all of those cases, width and height don't match the size three.js actually made the canvas.
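All of the "WRONG" cases above reduce to the same mismatch, which can be sketched with plain arithmetic (hypothetical helper name; no three.js calls):

```javascript
// With setPixelRatio(dpr) set, three.js multiplies the requested size by
// dpr internally, so the CSS-based width/height the app holds stop
// describing the real canvas.
function actualCanvasSize(cssWidth, cssHeight, pixelRatio) {
  return {
    width: Math.floor(cssWidth * pixelRatio),
    height: Math.floor(cssHeight * pixelRatio),
  };
}

const width = 640, height = 480;                   // what the app tracks
const actual = actualCanvasSize(width, height, 2); // what three.js made

actual.width === width; // false whenever dpr !== 1; the canvas is 1280 wide
```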
Now let's imagine three.js gets rid of setPixelRatio and instead you do this to set up
const width = Math.floor(window.innerWidth * window.devicePixelRatio);
const height = Math.floor(window.innerHeight * window.devicePixelRatio);
renderer.setSize(width, height, false);
Suddenly all the code that was wrong before is now correct.
const rt = new THREE.WebGLRenderTarget(width, height); // this now works! 😍🎉❤️
const gl = renderer.context;
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(
0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // correct now! 😍🎉❤️
uniform vec2 resolution;
void main() {
vec2 uv = gl_FragCoord.xy / resolution;
}
uniforms = {
resolution: { value: [width, height] }, // this is correct now 😍🎉❤️
};
renderer.domElement.toBlob((blob) => {
const a = document.createElement('a');
document.body.appendChild(a);
a.style.display = 'none';
const url = window.URL.createObjectURL(blob);
a.href = url;
a.download = `screencapture-${width}x${height}.png`; // this is also correct now 😍🎉❤️
a.click();
});
const camera = new THREE.OrthographicCamera(0, width, height, 0, -1, 1); // this is correct now 😍🎉❤️
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = width; // this is correct now 😍🎉❤️
ctx.canvas.height = height; // this is correct now 😍🎉❤️
ctx.drawImage(renderer.domElement, 0, 0); // this is correct now 😍🎉❤️
By removing setPixelRatio, three.js stops doing magic behind the scenes. You ask for a certain width and height; you get that width and height.
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.setPixelRatio( window.devicePixelRatio );
const size = new Vector2();
renderer.getDrawingBufferSize( size );
const width = size.width;
const height = size.height;
If you set width and height this way, then I think they will be correct in all the 'wrong' examples above.
It makes no sense to have to ask three.js what it did rather than just telling it what to do. All that does is make it more convoluted than it already is. It's patching over the problem, making it worse, not better.
It makes no sense to have to ask three.js what it did rather than just telling it what to do
renderer.getDrawingBufferSize() is not asking three.js how it arrived at the size of the drawing buffer; it's just asking what the current size is. Whether or not renderer.setPixelRatio() gets removed, it's still good practice to query the current drawing buffer size whenever you need to use it, rather than relying on the user to maintain separate width and height variables.
Then the current method is
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.setPixelRatio( window.devicePixelRatio );
const size = new Vector2();
renderer.getDrawingBufferSize( size );
Under your proposal this would become:
renderer.setSize(
Math.floor(window.innerWidth * window.devicePixelRatio),
Math.floor(window.innerHeight * window.devicePixelRatio),
false
);
const size = new Vector2();
renderer.getDrawingBufferSize( size );
You would also need to set the styles on the renderer.domElement somehow as well.
I don't have a strong preference for either of these, and I do agree that the current setPixelRatio API leads to some complications. But your "WRONG! 😭" examples above are easily solved without changing the API, so I don't see them as a strong argument either way.
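For readers unfamiliar with how setSize, setPixelRatio, and getDrawingBufferSize interact, here is a minimal stand-in, not three.js itself, that mimics the relevant arithmetic (the real method takes a Vector2 target; a plain object is used here to stay self-contained):

```javascript
// A mock that models how WebGLRenderer combines the CSS size from setSize
// with the factor from setPixelRatio. The real renderer also sizes the
// canvas, sets styles, and so on; this only models the size bookkeeping.
class MockRenderer {
  constructor() {
    this._width = 0;
    this._height = 0;
    this._pixelRatio = 1;
  }
  setSize(width, height) {
    this._width = width;
    this._height = height;
  }
  setPixelRatio(ratio) {
    this._pixelRatio = ratio;
  }
  getDrawingBufferSize(target) {
    target.width = Math.floor(this._width * this._pixelRatio);
    target.height = Math.floor(this._height * this._pixelRatio);
    return target;
  }
}

const mock = new MockRenderer();
mock.setSize(800, 600);  // CSS size
mock.setPixelRatio(2);   // device pixel ratio
const bufSize = mock.getDrawingBufferSize({ width: 0, height: 0 });
// bufSize is { width: 1600, height: 1200 }, the actual buffer size,
// which is what render targets, readPixels, etc. need to match.
```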
Late to this discussion, but today I discovered a great use for setPixelRatio. I was experimenting with Three.js and my Moto G7 phone hooked up to a large screen and was getting bad artifacts on the grid helper: lines looked like they were doubling up on pixels. My phone has a window.devicePixelRatio of 2, but setting the renderer to that, or to zero, didn't change anything. By experimenting, I found that calling setPixelRatio with 1.3 and then setting renderer.setSize to width/1.3 and height/1.3 fixed the issue. I surmise transferring from phone to big screen skewed things a little. That sorted it out.
This issue can be closed. I don't think there are enough arguments for the proposed alternative. Better to invest the time in other stuff. In this way, we also avoid (probably unwanted) migration effort in the user base.
interesting issue. learned a lot.