Three.js: Soft Shadows via Light Source Sampling

Created on 12 May 2018 · 17 comments · Source: mrdoob/three.js

The idea here is that we use the fastest shadow algorithm possible, and then accumulate it across frames. To get accurate accumulated soft shadows, we should jitter the light origin within the volume of the light. For lights of any type, we jitter within their volume in a Poisson-like fashion. This will create a physically correct penumbra if you accumulate over time.
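One way to do the volumetric jitter can be sketched like this (assuming a spherical light volume; the function and parameter names are illustrative, not a three.js API):

```javascript
// Sketch: jitter a light origin uniformly within a spherical light volume
// via rejection sampling. Any light shape works, as long as the samples
// cover its volume evenly.
function sampleSphereVolume(center, radius) {
  // Rejection-sample a point inside the unit sphere, then scale and offset.
  while (true) {
    const x = Math.random() * 2 - 1;
    const y = Math.random() * 2 - 1;
    const z = Math.random() * 2 - 1;
    if (x * x + y * y + z * z <= 1) {
      return {
        x: center.x + x * radius,
        y: center.y + y * radius,
        z: center.z + z * radius,
      };
    }
  }
}

// Each frame, move the light before rendering its shadow map, e.g.:
// const p = sampleSphereVolume(lightVolumeCenter, lightVolumeRadius);
// light.position.set(p.x, p.y, p.z);
```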

We can either accumulate in the main frame buffer or we could accumulate in a shadow buffer.

Sketchfab is currently jittering its light sources in order to get soft shadows when the scene is static. We have also recently implemented a basic light-jitter approach in Clara.io and it gives good results. We should take it to the next level and integrate it with core Three.js.

Enhancement


All 17 comments

More details here: http://dee.cz/fcss/

Similar to this? http://madebyevan.com/shaders/lightmap/

Fairly similar, but it should look better faster and less messy.

it should look better faster

When we are talking about accumulating across frames, how many frames will it take before it looks good? Will there be ugly jank at the start?

@looeee

When we are talking about accumulating across frames, how many frames will it take before it looks good?

  • That depends on the sampling approach. Going by a standard PCF implementation with 16 taps, you would need about that many frames to get similar quality.

Will there be ugly jank at the start?

  • Yes, it will look like ugly jank at the start :)
  • You can hold back the first few samples to keep this "jank" from being seen.

The result should look like what we have now when the scene is dynamic, but as soon as it stops moving, we can apply the shadow jittering within the light volumes and it should refine to soft shadows. It is like this TAA example I wrote a few years ago, but instead of sub-pixel jittering the camera to anti-alias, it jitters the lights in their source volumes to get physically correct penumbra:

https://threejs.org/examples/webgl_postprocessing_taa.html
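The accumulation a TAA-style pass performs can be sketched per pixel as a running average (an illustrative CPU version, not the actual shader):

```javascript
// Sketch: blend frame n into the running average with weight 1 / (n + 1),
// so the buffer always holds the mean of every frame seen so far. On the
// GPU this would be a blend between two render targets; here plain
// Float32Arrays stand in for the pixel buffers.
function accumulate(average, newFrame, frameIndex) {
  const w = 1 / (frameIndex + 1); // frameIndex starts at 0
  const out = new Float32Array(average.length);
  for (let i = 0; i < average.length; i++) {
    out[i] = average[i] * (1 - w) + newFrame[i] * w;
  }
  return out;
}
```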

Here's a quick proof of concept for this sort of thing that I put together awhile ago and basically does what's being described here:

https://raw.githack.com/gkjohnson/threejs-sandbox/0f2c762/volume-lights/

A pointlight is moved along the surface of an emission volume while the frame is merged with the previous buffer.

@bhouston

We should take it to the next level and integrate it with core Three.JS.

How are you imagining this being added to core? Would it be a separate renderer? Or an option in the current one to automatically jitter and accumulate lights over time if the camera didn't change position?

We can either accumulate in the main frame buffer or we could accumulate in a shadow buffer.

Can you elaborate on what you mean by "accumulate in a shadow buffer"?

If this is still interesting: I updated my volume-lights demo when migrating it to modules, with a few added features. It now includes random surface sampling of the light shape (thanks to @donmccurdy's MeshSurfaceSampler!), an option to fade the result in after it's had a moment to resolve to avoid the janky look at the start, and an option to use more point lights per frame:

https://raw.githack.com/gkjohnson/threejs-sandbox/bb05ead/volume-lights/

Using random sampling and more lights results in the image resolving much more quickly (~1 to 3 or 4 seconds). For the random sampling case I've overridden the Math.random function with one that can be seeded, so the same initial shadow results can be reproduced when moving the camera.
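A seedable stand-in for Math.random can be sketched with a small PRNG such as mulberry32 (the demo's actual seeding code may differ):

```javascript
// Sketch: mulberry32, a small, well-known 32-bit PRNG. Calling it with the
// same seed replays the same sample sequence, so the same light positions
// (and therefore the same initial shadows) can be reproduced.
function mulberry32(seed) {
  let a = seed | 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// To replay the same shadows, re-seed before each accumulation pass, e.g.:
// Math.random = mulberry32(sceneSeed);
```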

@gkjohnson that is beautiful stuff. My favorite settings are:

  • light count: 3
  • fade: true
  • fade delay: 1 second
  • fade transition: 1 second

I think one issue with it is that the light/shadow used when moving around isn't very representative of the final iterative average shadow. I wonder if there is a way to fix that so that the instantaneous result is closer to the average.

@bhouston thanks! I think I prefer the shorter fades myself.

I think one issue with it is that the light/shadow used when moving around isn't very representative of the final iterative average shadow. I wonder if there is a way to fix that so that the instantaneous result is closer to the average.

Using more lights helps with this a bit because the lights are spread across the light shape and the shadows are more representative of where they'll wind up. Deliberately seeding lights at the extremes for the first render -- points on the shape's surface closest to the 8 corners of a bounding box, for example -- could help give a more complete impression of the final shadow extents, but the discrete shadows won't look great. VSM or PCSS shadows might be better just for the initial render, as well. Temporal reprojection AA would help with the initial few renders when moving the camera, too.
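The "seed lights at the extremes" idea could be sketched like this (a hypothetical helper; plain arrays stand in for three.js vectors, and the surface points would come from something like MeshSurfaceSampler):

```javascript
// Sketch: from a set of candidate points on the light shape's surface, pick
// the one nearest each of the 8 corners of the shape's bounding box, so the
// first few shadow samples cover the extremes of the light volume.
function pickExtremeSamples(surfacePoints, bboxMin, bboxMax) {
  const corners = [];
  for (const x of [bboxMin[0], bboxMax[0]])
    for (const y of [bboxMin[1], bboxMax[1]])
      for (const z of [bboxMin[2], bboxMax[2]]) corners.push([x, y, z]);

  return corners.map((c) => {
    let best = surfacePoints[0];
    let bestD = Infinity;
    for (const p of surfacePoints) {
      const d = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 + (p[2] - c[2]) ** 2;
      if (d < bestD) {
        bestD = d;
        best = p;
      }
    }
    return best;
  });
}
```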

We had problems implementing TRAA with jittered shadows because TRAA is sensitive to intensity boundaries and it tries to not blend across them.


@gkjohnson looking great! 🤩 does it require core changes?

@bhouston

We had problems implementing TRAA with jittered shadows because TRAA is
sensitive to intensity boundaries and it tries to not blend across them.

Admittedly I'm not so knowledgeable when it comes to TRAA, but I would imagine some of that is due to the fact that the technique was developed for handling dynamic scenes? I would think that because we're dealing with a completely static scene in this case, it could be adapted to avoid that. Is your TRAA pass available online anywhere?

@mrdoob

looking great! 🤩 does it require core changes?

Nope! It's all just buffer swaps and accumulation using the existing RenderTarget system, custom shaders, and a light that moves every frame, so there is a bit of boilerplate. A Float or HalfFloat render target type is needed to preserve the precision required for blending. Maybe there are other methods I'm unaware of for that, though.

If there's anything interesting here to bundle into a utility class or object for the examples folder I'd be happy to contribute it.

If you want to do it on a static scene, just modify the TAA post-processing example in three.js, which is based on SSAA. To get around the need for Float or HalfFloat, look at how I do variable weighted integer-based accumulation to avoid biasing:
https://threejs.org/examples/#webgl_postprocessing_ssaa_unbiased
-ben
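The biasing problem can be illustrated outside of shaders: with an 8-bit target, a uniform per-sample weight rounds to a multiple of 1/255, so the weights no longer sum to exactly 1 and the accumulated image drifts darker or brighter. One way to quantize the weights so they do sum to 1 (my reading of the idea, not the example's actual code):

```javascript
// Sketch: distribute the 255 quantization units of an 8-bit buffer across
// N samples so the quantized weights sum to exactly 255/255 = 1. The first
// `remainder` samples carry one extra unit each.
function unbiasedWeights(sampleCount, levels = 255) {
  const base = Math.floor(levels / sampleCount);
  const remainder = levels - base * sampleCount;
  const weights = [];
  for (let i = 0; i < sampleCount; i++) {
    weights.push((base + (i < remainder ? 1 : 0)) / levels);
  }
  return weights;
}
```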


@bhouston it took me a bit to get around to it, but I put together an example of what I meant by TRAA on a static scene. While the camera is moving, the geometry is not, which means we can expect shadows to be in the exact same spot after moving the camera (not a valid assumption in a scene with moving parts). Using a velocity buffer, the previous and new frames' depth buffers, and the camera positions, we can reproject a rendered pixel into the new frame and get a "head start" on shadow rendering using the accumulated soft shadows that were rendered over many iterations. We can keep reprojecting that highly resolved frame until the camera stops again, or fade it out if the camera moves to an obtuse angle compared to the resolved perspective.
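The reprojection step boils down to a matrix round-trip per pixel. A minimal sketch, assuming column-major view-projection matrices as three.js uses (the velocity-buffer and occlusion checks are omitted):

```javascript
// Sketch: transform a point by a column-major 4x4 matrix (flat 16-element
// array, m[col * 4 + row], the three.js convention) with perspective divide.
function transformPoint(m, p) {
  const v = [p[0], p[1], p[2], 1];
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
}

// Sketch: take a pixel's NDC position + depth from the previous frame,
// unproject it to world space with the inverse of the previous camera's
// view-projection matrix, then project it with the current camera's.
function reproject(prevNDC, invPrevViewProjection, currViewProjection) {
  const world = transformPoint(invPrevViewProjection, prevNDC);
  return transformPoint(currViewProjection, world);
}
```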

Here's a demo with a quick proof of concept of the pixel reprojection. Disable "autoRender" to keep a single frame and reproject it into the current one. Pixels without visible data in the previous frame are dimmed:

https://raw.githack.com/gkjohnson/threejs-sandbox/e58e520/pixel-reprojection/index.html


Of course, specular won't be handled correctly in the reprojection because it's view-dependent. If you really wanted to get fancy, you could keep the accumulated shadow data in a separate buffer to reproject, and use the view-dependent specular data from the current frame.

I plan to add accumulation of data over multiple frames as the camera moves, as well, but am dependent on #19447 for that.


To get around the need for Float or HalfFloat look at how I do variable weighted integer-based accumulation to avoid biasing:

Regarding the weighted accumulation approach you suggested, I'll have to try it again. When I quickly tried it before, I recall getting banding, but perhaps I implemented something incorrectly. It may also be a bit of a different case considering I'm not dealing with a fixed number of samples.

[screenshot, 26 May 2020: the demo rendering incorrectly]

😕

@mrdoob oops thanks for the catch -- I thought I put a link to a specific commit but I guess not. Looks like I broke something when messing around. I updated the link above but here's a permalink to the working commit:

https://raw.githack.com/gkjohnson/threejs-sandbox/e58e520/pixel-reprojection/index.html
