Comparing MetalRoughSpheres in a few different engines:
| engine | screenshot |
|---|---|
| three.js | |
| babylon.js | |
| qtek | |
THREE.MeshStandardMaterial is noticeably more reflective at moderate roughness values compared to other engines, and perhaps smoother/shinier than content artists would intend.
@emackey brought this up on another thread, and mentioned that Blender and Substance Painter also look similar to the other engines.
Agreed. Perceptual roughness is a problem.
Since your demos are using different environment maps, it is hard to compare brightness.
So, in your view, is the reflection too bright at certain roughness levels, not blurry enough at certain roughness levels, or both?
Certainly one place to look is the shader code that selects mipmap from roughness. /ping @bhouston
These comments would be helpful:
https://github.com/KhronosGroup/glTF-Blender-Exporter/issues/143#issuecomment-355831214
https://github.com/KhronosGroup/glTF-Blender-Exporter/issues/143#issuecomment-357468319
This one?

```glsl
// TODO: replace 8 with the real maxMIPLevel
vec3 radiance = getLightProbeIndirectRadiance( /*specularLightProbe,*/ geometry, Material_BlinnShininessExponent( material ), 8 );
```
I replaced 8 with 11 (= Math.log2(2048); the width and height of the envMap texture used in that three.js example are 2048), and the result looks closer to the other engines'.
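As a sanity check on that number (a sketch, not three.js source): the index of the smallest mip in a full chain for a square texture of side `size` is log2(size), so hard-coding a smaller maximum means the blurriest levels are never reached.

```javascript
// Sketch: smallest (roughest) mip index for a square texture of side `size`.
// A full mip chain has log2(size) + 1 levels, indexed 0..log2(size).
function maxMipLevel(size) {
  return Math.log2(size);
}

console.log(maxMipLevel(2048)); // 11 -- hard-coding 8 leaves the 3 blurriest mips unused
```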
I believe the issue is a missing square or sqrt() applied to the roughness value, or something like that. Unfortunately I do not follow where @takahirox's fix comes from; the logic seems incorrect, as our maps are packed into a 2048x2048 atlas, so the individual maps are not actually that size.
Just my $0.02, but it looks to me like @takahirox hit the mark exactly. I always have to remind myself that the MIP levels are "upside-down" in that higher numbers correspond to smaller MIPs. So if you're using an environment with 11 levels, and you hard-code a maximum level of 8, then your roughest spheres aren't getting down to the smallest MIP levels.
In @donmccurdy's first screenshot, if you look closely at the farthest upper-right sphere, you can see a distinct image of a tree, with leaves, branches, and a trunk all visible. This sort of reflection should be completely blurred out before you get to the high roughness spheres. I think using the full range of available MIP levels is likely the correct solution.
Instead of repeating myself: can either @emackey or @takahirox point me to the place in the code where the top level of the cube map mipmap chain used in the PBR roughness implementation is 2048x2048, so that your argument of 11 mipmaps makes sense?
( PS. https://github.com/mrdoob/three.js/blob/dev/examples/js/pmrem/PMREMGenerator.js#L17 )
@bhouston They are not using PMREM -- just standard cube maps.
Oh, okay. We are back to that original method. I guess this inconsistency bug I reported back in 2015 no longer applies? https://bugs.chromium.org/p/chromium/issues/detail?id=479753 (I had tried to implement cubemap-based PBR back in early 2015 but couldn't get it consistent so then we pursued the PMREM route: https://github.com/mrdoob/three.js/issues/5847)
You are correct that you need to match the cubemap size. The proper theory to follow is documented here: http://casual-effects.blogspot.ca/2011/08/plausible-environment-lighting-in-two.html
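For reference, the mapping described in that article (my transcription, shown as a hedged sketch rather than verbatim source) chooses a mip level from the Blinn-Phong exponent and the cube face width:

```javascript
// Sketch of the mip-selection formula from the linked article (transcribed
// from memory): a glossier surface (larger exponent s) reads from a
// sharper (lower) mip; in practice the result is clamped to the chain.
function mipFromShininess(faceWidth, s) {
  return Math.log2(faceWidth * Math.sqrt(3)) - 0.5 * Math.log2(s + 1);
}
```

The key point for this thread is that the face width term is why the lookup must match the actual cube map size.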
Updated link to the lines @takahirox mentioned above, since the old link is broken now:
Sorry I don't have answers to @bhouston's questions.
@takahirox do you want to do a PR with that change so we can test it?
@mrdoob Yup, I will!
@takahirox ping
I will do it this weekend! Sorry, finding an apartment in Berlin is much harder than I thought, and I've spent time on that this week.
We're a good bit closer with r91, but still showing somewhat clearer reflections at mid-low roughness than the other two engines mentioned in this thread:
I've added an HDR environment map (select "Footprint Court (HDR)" in the lighting menu) to https://gltf-viewer.donmccurdy.com/ that matches the one in http://sandbox.babylonjs.com/. Would be glad to have more comparisons to other engines, to see whether we should align, but this is good progress. 👍
Hey there,
Just want to mention that there are still differences compared to other engines (here Substance Painter, exported directly with the built-in glTF exporter). Is there a workaround for this today? (Some constants that can be changed easily?)
@ekermer can you test your glTF file in the babylonjs sandbox and see how it looks there too?
@ekermer Also please test with the same lights & reflection environment in Substance Painter vs Three, if possible.
@looeee @emackey
Looks much better in the Babylon preview. The light setup was the same, but I had no HDRI.
With an HDRI it looks better, but not as good as in the Babylon preview. Which renderer is used in the Babylon sandbox?
If I set the roughness factor in the glTF importer from 1 to 1.8 it looks very close, but that is just my dirty workaround for now :D
@ekermer Any chance you can share a live link so we can test?
Any related files you can share (glTF, Substance, and/or HDRI files) would also be quite helpful. You can drag .ZIP archives into GitHub comments.
Of course, here is the .glb that comes directly from Substance Painter.
Best regards (and thank you all for the fast replies; it is amazing how active the WebGL/three.js community is).
test_spheres.zip
I think we'll need to compare these results across a few viewers. Just to confirm in your screenshots above, which is Substance Painter? On the left? BabylonJS has its own WebGL renderer. If it looks close to Substance Painter that would be great to know.
I'll also try to compare Filament and another renderer if possible.
Has this been confirmed as a bug yet? Has anyone looked into where it's coming from? I am starting to investigate the lighting shaders to find out, but I'm just getting started with three.js/WebGL. I'm a long-time graphics programmer for games and apps, now breaking into the web space.
The original bug, and the dramatic difference in my first screenshot above, have been fixed.
What remains, let's call a "confirmed inconsistency". The three.js PBR shaders predate the glTF PBR spec, but if we can identify the cause of the difference we may be willing to update to match it. Moreover, the other renderers do not yet give a clear consensus on what this asset _should_ look like.
There is an official Khronos glTF reference renderer in progress, but it isn't complete enough to be considered a source of truth yet. Comparisons below:
| engine | screenshot |
|---|---|
| khronos reference | |
| three | |
| babylon | |
| filament | |
^ Due to time constraints I've used the default lighting in the various viewers above, without attempting to match it beyond adding the same IBL. I wasn't able to change the Khronos renderer's IBL at this time. So the screenshots above show more inconsistency than actually exists, but I think it is enough to demonstrate the difference in roughness handling.
@ekermer could you share a Substance Painter preview, or identify which image if you've already posted it?
@donmccurdy - I am still seeing a specularity/reflectivity issue with three.js, very similar to your first screenshots, where the smooth metallic values are a bit off (too high too fast). Is there a version with the fix you're mentioning? (If so, I'm very curious what the solution was and where I can get it.)

The three.js GGX code looks to be a direct copy from UE4's white paper on an optimized version of the GGX function for mobile. But the issue seems to ripple out from that roughness value: the result looks more correct when using the sqrt of the roughness for both the GGX function AND the cube UV index. I am finding that if I perturb the input roughness by pow(roughness, 0.5), both in the specular GGX environment function in bsdfs and in textureCubeUV in cube_uv_reflection_fragment, it evens out the roughness curve near the specularly blown-out ranges (low roughness) and smooths out the final result, as if the surface texels are actually smoother near the low-roughness (high-specularity) values. To me this is equivalent to an internal change in the GGX and/or cube UV roughness-index algorithms, since it influences the final result in the same way.

Anyone have thoughts on this? It begs the question: what is UE4's explanation for squaring the roughness on input (see BRDF_Specular_GGX and the reference in bsdfs.glsl), which is the exact opposite of what I'm doing here?
the smooth metallic values are a bit off (too high too fast)
@RyanFavale compared to what tools or renderers? The fix for the original issue has been in since r91+. It's possible there are remaining issues, but (per screenshots in previous post) it's less clear what "ground truth" output is.
@donmccurdy - I'm comparing to Substance Painter's latest renderer as a "ground truth". Whether or not it is, it's what we are authoring our content in, so we want it to render out a comparable result. I've also compared with Unity's latest rendering and an earlier version of theirs (one of which matches Painter very closely). Based on the results I've seen coming from ThreeJS, they appear more reflective near the upper metallic/smoothness range, compared to what I'm seeing from painter which seems comparable with your other results in babylon and qtek. Khronos' http://gltf.ux3d.io/ is rendering similar to painter as well, given the same source files.
@RyanFavale Any chance you could share screenshots of your test scene in every renderer?
@mrdoob - I will have to either get a different asset or get approval. Internal asset atm. But I will try.
One immediate issue is the different ENV maps
[Edit 1: but besides that, the reflective AMOUNT is really the concern, as opposed to color, assuming the envMaps aren't TOO different in intensity]
[Edit 2: I am getting a better reference now, with spheres like your screenshots - might take a while, as I need to get multiple file formats for the different apps (fbx, glb, obj, etc.)]
[Edit 3: I'll have to post something later, as I'm running into install issues to run the bsdf using the same asset]
Got sidetracked on a couple things. Coming back to this in a bit.
Although, one thing I've noticed is the doubled pow2 on the roughness in the specular GGX path. Roughness is squared in BRDF_Specular_GGX, but then it's squared again inside each specular component (e.g. D_GGX). Is this intended?
It is squared in BRDF_Specular_GGX() to make roughness "perceptually linear". It is squared in D_GGX() because that is the correct formula.
@WestLangley - Yes, but it is also squared in BRDF_Specular_GGX( ), which then calls the functions you mentioned with the pre-squared roughness. The comment on the first square is "UE4's roughness".
[Edit: Ah, I see in the reference: "First let me define alpha that will be used for all following equations using UE4's roughness: α = roughness²". :) ]
@RyanFavale yes, @WestLangley is correct. Outside of ThreeJS, the glTF reference PBR shader does this as well, for the same reasons.
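The two squarings being discussed can be sketched as follows (hedged: the function names here are mine, not the exact three.js identifiers):

```javascript
// alpha = roughness^2 maps the artist-facing "perceptual" roughness to the
// GGX distribution parameter; D_GGX then uses alpha^2 again because alpha
// squared appears in the GGX normal distribution function itself.
function dGGX(alpha, dotNH) {
  const a2 = alpha * alpha;
  const denom = dotNH * dotNH * (a2 - 1.0) + 1.0;
  return a2 / (Math.PI * denom * denom);
}

function specularGGXDistribution(roughness, dotNH) {
  const alpha = roughness * roughness; // perceptual roughness -> alpha
  return dGGX(alpha, dotNH);
}
```

So the "double pow2" is intentional: one squaring is a remap of the input, the other is part of the distribution formula.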
Here's a comparison between the default three.js rendering vs Substance Painter vs a pow(roughness, 0.5) in the IBL lookup index of three.js. [The slight offset makes a big difference across larger flat reflective surfaces, mostly near the third row from the right in roughness.]
Quick observations:
Our PMREM (our convolved environment map) is a hack right now and is not properly calibrated. The perceived roughness can change very easily. For example, this minor tweak by @mrdoob affected the perceived roughness of all scenes in Three.JS -- it biased everything towards rougher surfaces: https://github.com/mrdoob/three.js/commit/18fef30159b8ce5c60e8e558b3ca8cb89257ced8#diff-3778d4e48bf0b6191c066c52a48ca6c9 The issue isn't that @mrdoob's change is right or wrong, but that the whole PMREM is a hack in terms of its blurred results -- it is uncalibrated.
You need to ensure two things match between Painter and Three.JS, and they should be checked separately. The first is direct light reflections, which uses the GLSL GGX shader: match Painter and Three.JS with just lights and no environment, and get that right. Then, as a separate step, calibrate the environment map convolution/blur to match Painter at the correct roughnesses -- this is a mess because our roughness/blur is uncalibrated and easy to tweak with minor settings.
We should rewrite the PMREM to use a proper importance sampling approach so that it is properly calibrated; then we can reliably match references like Painter. Basically what is proposed in this PR, though it doesn't actually realize it correctly: https://github.com/mrdoob/three.js/pull/15390
Yes, my envMap is one of the biggest diffs. I did strip the rendering down into layers to find the differences (diffuse, reflection, specular, light probe, etc.), and most of it came down to the envMap and some default lighting values my team had set up.
@bhouston Thank you for this explanation! It helped me a ton, both with feeling less lost and with understanding where the issue stems from. :D
I think for now we will use the roughness offset (i.e. roughness = pow(roughness, 0.5)) in the textureCubeUV function until the envMap/PMREM is updated/calibrated. :)
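As a minimal sketch of that stopgap (assuming it is applied only to the roughness used for the cube UV lookup, not to the analytic GGX term; the function name is hypothetical):

```javascript
// Hypothetical stand-in for the tweak inside textureCubeUV: widening the
// low end of the roughness curve so near-mirror values blur slightly more,
// which compresses the difference against Painter's output.
function lookupRoughness(roughness) {
  return Math.pow(roughness, 0.5);
}
```

Note this is a per-project calibration hack, not a principled fix; the principled fix is recalibrating the PMREM itself, as discussed above.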
I'm going to start taking a look at the PMREM code and try to get an estimate of what it would take to make it right. If anyone has a rough idea how big of an undertaking this would be, let me know your opinions. :)
@RyanFavale PR #15390 is theoretically right in what the submitter says, but it actually doesn't work, as reported by @TheophileMot (a co-worker of mine) here: https://github.com/mrdoob/three.js/pull/15390#pullrequestreview-186282109 Thus I believe we just have to fix that PR, and then we should be doing things on a relatively correct theoretical basis. That PR should follow the example code outlined here: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch20.html
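For context, the importance-sampling approach in that chapter builds each prefiltered mip by averaging environment samples drawn from the BRDF's own distribution. A common GGX half-vector sampler (following Karis's course notes rather than a verbatim transcription of the chapter) looks like:

```javascript
// Map a uniform 2D sample (u1, u2) in [0,1)^2 to a half-vector direction
// in tangent space (z = normal). Small alpha concentrates samples around
// the normal; alpha near 1 spreads them across the hemisphere.
function importanceSampleGGX(u1, u2, alpha) {
  const phi = 2.0 * Math.PI * u1;
  const cosTheta = Math.sqrt((1.0 - u2) / (1.0 + (alpha * alpha - 1.0) * u2));
  const sinTheta = Math.sqrt(Math.max(0.0, 1.0 - cosTheta * cosTheta));
  return [sinTheta * Math.cos(phi), sinTheta * Math.sin(phi), cosTheta];
}
```

Prefiltering with samples like these (weighted by N·L) is what would make the blur at each mip actually correspond to a known roughness, i.e. "calibrated".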
@bhouston I will start there. Thanks. That happens to be one of my team members who submitted the PR; I just finished talking with him. He is leaning towards swapping the PMREM for a pre-computed one we download, generated with the offline tool he mentioned. But if his changes are close, maybe I can get them the rest of the way.
I've started working through this UV lookup shader (PMREM), adding comments and renaming variables, but there are parts of it I'm having trouble visualizing. Does anyone have more comments they could add about what's going on here? [Edit: looking at PMREMGenerator.js and CubeUVPacker now; this might be all the info I need.]
We have the file data we need, generated from cmftStudio, but the files are split up, so I need to convert them into a similar 2D format... or I can create my own lookup UVs. I thought this would be more straightforward at first glance.
For example: what is "s" at the end of the function?
```glsl
vec2 getCubeUV( vec3 direction, float roughnessLevel, float mipLevel ) {

	mipLevel = roughnessLevel > ( cubeUV_maxLods2 - 3.0 ) ? 0.0 : mipLevel;
	float texelsPerChunk = 16.0 * cubeUV_rcpTextureSize;

	vec2 exp2_packed = exp2( vec2( roughnessLevel, mipLevel ) ); // exp2( 0...n ) == ( 1, 2, 4, 8, 16, 32, ... )
	vec2 rcp_exp2_packed = vec2( 1.0 ) / exp2_packed; // ( 1, 0.5, 0.25, 0.125, ... )
	float rcpPowScale = 1.0 / ( exp2_packed.x * exp2_packed.y );

	float scale = rcp_exp2_packed.x * rcp_exp2_packed.y * 0.25; // 1/4 of ( rough(0) = 1 * mip(0) = 1 ) = 1 [to] ( rough(1) = 0.5 * mip(8) = 0.125 ) = 0.0625
	// scale = 0...0.25
	float mipOffset = 0.75 * ( 1.0 - rcp_exp2_packed.y ) * rcp_exp2_packed.x; // 3/4 of ( 0...0.5 ) * ( 1, 0.5, 0.25, 0.125, ... )
	// mipOffset = 0...0.75

	bool isMip0 = mipLevel == 0.0;
	scale = ( isMip0 && ( scale < texelsPerChunk ) ) ? texelsPerChunk : scale;

	vec3 faceDir;
	vec2 offset;
	int face = getFaceFromDirection( direction );

	if ( face == 0 ) {
		faceDir = vec3( direction.x, -direction.z, direction.y ); // -z = up
		offset.x = 0.0 * scale + mipOffset; // 0...0.75
		offset.y = 0.75 * rcpPowScale; // 0.75...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? texelsPerChunk : offset.y;
	} else if ( face == 1 ) {
		faceDir = vec3( direction.y, direction.x, direction.z ); // x = up
		offset.x = 1.0 * scale + mipOffset; // 0...1
		offset.y = 0.75 * rcpPowScale; // 0.75...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? texelsPerChunk : offset.y;
	} else if ( face == 2 ) {
		faceDir = vec3( direction.z, direction.x, direction.y ); // x = up, z/y swapped
		offset.x = 2.0 * scale + mipOffset; // 0...1.25
		offset.y = 0.75 * rcpPowScale; // 0.75...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? texelsPerChunk : offset.y;
	} else if ( face == 3 ) {
		faceDir = vec3( direction.x, direction.z, direction.y ); // z = up
		offset.x = 0.0 * scale + mipOffset; // 0...0.75
		offset.y = 0.5 * rcpPowScale; // 0.5...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? 0.0 : offset.y;
	} else if ( face == 4 ) {
		faceDir = vec3( direction.y, direction.x, -direction.z ); // x = up, z flipped
		offset.x = 1.0 * scale + mipOffset; // 0...1
		offset.y = 0.5 * rcpPowScale; // 0.5...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? 0.0 : offset.y;
	} else {
		faceDir = vec3( direction.z, -direction.x, direction.y ); // -x = up, y/z swapped
		offset.x = 2.0 * scale + mipOffset; // 0...1.25
		offset.y = 0.5 * rcpPowScale; // 0.5...n
		offset.y = isMip0 && ( offset.y < 2.0 * texelsPerChunk ) ? 0.0 : offset.y;
	}

	faceDir = normalize( faceDir );
	float texelOffset = 0.5 * cubeUV_rcpTextureSize; // offset to center of texel
	vec2 s = faceDir.yz / abs( faceDir.x );
	s = ( s + vec2( 1.0 ) ) * 0.5; // remap -1...1 to 0...1
	return offset + vec2( texelOffset ) + s * ( scale - 2.0 * texelOffset );

}
```
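To answer the question about `s` above: it is the 2D coordinate on the selected cube face, obtained by projecting the direction onto the face (dividing by the major axis), remapping from [-1, 1] to [0, 1], and insetting by half a texel before placing it in the atlas cell. A standalone sketch (names mine, not the shader's):

```javascript
// `faceDir` has been swizzled so its x component is the major axis.
// The remaining two components, divided by |x|, give face-local
// coordinates in [-1, 1]; remap them to [0, 1] for the atlas lookup.
function faceUV(faceDir) {
  const [x, y, z] = faceDir;
  const s = [y / Math.abs(x), z / Math.abs(x)];
  return [(s[0] + 1) * 0.5, (s[1] + 1) * 0.5];
}
```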