This issue can serve as discussion for improving the preliminary RectAreaLight implementation being reviewed in #9234.
Currently, the BRDF for the RectAreaLight implementation is approximated using float textures totaling ~80kb when the data is included as discrete raw float values. Methods to compress that data need to be explored to make it viable to include the data directly in the three.js source.
Possible methods include:
@bhouston @WestLangley @tschw
Here is an attempt using k-means vector quantization to reduce the data sets to 256 discrete values and store 1-byte indices into those "codebooks". The original data and the quantization can be seen in the screenshot below. The results are not great, with some pretty serious banding artifacts.
Demo of results can be seen here: http://groovemechanic.net/three.js.compress/examples/#webgl_lights_rectarealight
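For reference, a minimal sketch of the vector-quantization idea (naive seeding and all names are illustrative; this is not the actual code used):

```js
// Hypothetical sketch: quantize an array of float vectors to 256 codewords
// via k-means, then store one byte per texel plus the 256-entry codebook.
function kMeansQuantize(vectors, k = 256, iterations = 20) {
  // Naive seeding: take the first k vectors as the initial codebook.
  let codebook = vectors.slice(0, k).map(v => v.slice());
  const indices = new Uint8Array(vectors.length);
  for (let it = 0; it < iterations; it++) {
    // Assignment step: map each vector to its nearest codeword.
    vectors.forEach((v, i) => {
      let best = 0, bestDist = Infinity;
      codebook.forEach((c, j) => {
        const d = v.reduce((s, x, n) => s + (x - c[n]) ** 2, 0);
        if (d < bestDist) { bestDist = d; best = j; }
      });
      indices[i] = best;
    });
    // Update step: move each codeword to the centroid of its cluster.
    codebook = codebook.map((c, j) => {
      const members = vectors.filter((_, i) => indices[i] === j);
      if (members.length === 0) return c;
      return c.map((_, n) => members.reduce((s, v) => s + v[n], 0) / members.length);
    });
  }
  // Decode side: vectors[i] is approximated by codebook[indices[i]].
  return { codebook, indices };
}
```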
I support what you are trying to do, but as I tried to explain elsewhere, your metric is in the wrong space, IMHO.
The goal is to approximate a BRDF with a particular LTC function. The LTC function has 5 parameters, which have been specified by the authors of the paper.
You are proposing an approach that results in a different set of 5 parameters. If you change the parameters, then the LTC function will change, and it may no longer approximate the BRDF correctly.
You don't want to measure how well you are fitting the 5 parameters. You need to measure how well you are fitting the BRDF with your modified method. Changes in the 5 parameters can potentially result in large changes in the BRDF -- or in an approximated BRDF that is not properly normalized.
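To make that concrete, here is a minimal sketch of an error metric in BRDF space rather than parameter space. It assumes the inverse matrix and its determinant are available for each table entry, and uses the LTC change of variables from the paper (clamped cosine times the Jacobian of the transform):

```js
// Sketch only: evaluate the LTC distribution for a direction w, given the
// inverse matrix Minv (3x3, row-major nested arrays) and its determinant.
function evalLTC(Minv, detMinv, w) {
  const x = Minv[0][0] * w[0] + Minv[0][1] * w[1] + Minv[0][2] * w[2];
  const y = Minv[1][0] * w[0] + Minv[1][1] * w[1] + Minv[1][2] * w[2];
  const z = Minv[2][0] * w[0] + Minv[2][1] * w[1] + Minv[2][2] * w[2];
  const len = Math.hypot(x, y, z);
  const cosine = Math.max(z / len, 0) / Math.PI; // clamped cosine D_o
  return cosine * detMinv / (len * len * len);   // Jacobian term
}

// Error between two parameter sets, measured where it matters: in the
// value of the resulting distribution, sampled over the hemisphere.
function distributionError(MinvA, detA, MinvB, detB, samples = 1024) {
  let sum = 0;
  for (let i = 0; i < samples; i++) {
    const cosTheta = Math.random();              // uniform hemisphere sampling
    const sinTheta = Math.sqrt(1 - cosTheta * cosTheta);
    const phi = 2 * Math.PI * Math.random();
    const w = [sinTheta * Math.cos(phi), sinTheta * Math.sin(phi), cosTheta];
    const d = evalLTC(MinvA, detA, w) - evalLTC(MinvB, detB, w);
    sum += d * d;
  }
  return Math.sqrt(sum / samples);
}
```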
In any event, thank you for all your efforts; they are much-appreciated. We will get there... : - )
@WestLangley Your reasoning makes sense. TBH, this is an area where I'm rather inexperienced, though it's certainly been enjoyable to experiment.
Your reasoning essentially boils down to doing exactly what the authors of the paper did: for each angle of view, fit a linear transformation matrix to the basic clamped cosine distribution, such that the resulting function matches the GGX BRDF as closely as possible. I also noticed just now that the paper authors have published their fitting code, which is great.
The real question is: is there a better method to compress the resulting fitted data so it is smaller than what the authors propose? They represent the BRDF fit for each angle as the values of the linear transform matrix -- 5 values per angle of view. What other approaches could represent the same fit with less data (while not completely deviating from the model proposed by the paper)?
@abelnation OK, we are on the same page. Data compression is not my area of expertise. If you choose to go down that path, we have to make sure your metric is a reasonable one. In the near term, my personal goal is to verify that you have correctly implemented the algorithm as described in the paper. Sorry it is taking a long time for me to get to it -- it is not for a lack of interest. : - )
@WestLangley @abelnation I've been looking into compressing the two tables and I think I may have found something! So far I've been playing around with an algorithm that produces no noticeable artifacts and reduces ltc_mag from 40k bytes to roughly 4k bytes, and this includes the decoding logic. What is our target size? 20kb?
Note: the compression algorithm is neural-net based. See http://codepen.io/sam-g-steel/pen/GWMXjB?editors=1011. It takes about 2.5 million iterations to get good results.
I have a three.js demo that will be on codepen soon too. I'll comment when it's available.
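For readers who want the gist without digging through the pen, here is a rough sketch of the general technique (not the actual codepen code): fit a tiny sigmoid MLP f(u, v) -> value to a 64x64 table with stochastic gradient descent, then ship the weights instead of the table. Layer size and learning rate are made up for illustration.

```js
const H = 8;                                   // hidden units (illustrative)
const sig = x => 1 / (1 + Math.exp(-x));
const W1 = [], b1 = [], W2 = [];
let b2 = 0;
for (let j = 0; j < H; j++) {
  W1.push([Math.random() - 0.5, Math.random() - 0.5]);
  b1.push(0);
  W2.push(Math.random() - 0.5);
}

function forward(u, v) {
  const h = W1.map((w, j) => sig(w[0] * u + w[1] * v + b1[j]));
  return { h, out: W2.reduce((s, w, j) => s + w * h[j], b2) };
}

function trainStep(table, size, lr = 0.05) {
  // One SGD step on a random texel; the "iterations" mentioned above
  // are repeated calls to this.
  const i = (Math.random() * size) | 0, j = (Math.random() * size) | 0;
  const u = i / (size - 1), v = j / (size - 1);
  const { h, out } = forward(u, v);
  const err = out - table[j * size + i];
  for (let k = 0; k < H; k++) {
    const gh = err * W2[k] * h[k] * (1 - h[k]); // backprop through sigmoid
    W2[k] -= lr * err * h[k];
    W1[k][0] -= lr * gh * u;
    W1[k][1] -= lr * gh * v;
    b1[k] -= lr * gh;
  }
  b2 -= lr * err;
  return err * err;                             // squared error, for logging
}
```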
404 (Page Not Found)
This sounds super good though!
@mrdoob
Cool! About the 404... it looks like my pen was private.
Still working on a few things.
Todo list...
- Compress single-channel data, i.e. LTC_MAG
- [wip] Compress multi-channel data, LTC_MAT
- Finish decompression code... it currently works but could be more efficient, smaller, and faster
- Have a demo showing that this works and how much smaller the code is
All of the code in the pen is for encoding the image. I also have code that will decode it:
http://codepen.io/sam-g-steel/pen/GWMXjB
To get decent results it takes 2.5 million iterations, ~8 hrs on a Dell XPS 1550.
Fortunately, decoding happens in single-digit milliseconds :)
This is all brilliant and only possible because Three.JS is such an attractive open source project that it brings in so many with varied talents.
@sam-g-steel These are data textures and the data has meaning in the context of the BRDF we are using. What is the metric you are using as a measure of convergence? The fact that these images "look similar" is not sufficient.
I am assuming the uncompressed data is not the same as the original data. Is that a correct assumption?
@WestLangley
Yes, that is correct... the uncompressed data is not the same as the original table. But this could be a good thing: the original is limited to a 64x64 table of 16-bit floats, while the decompressed data is resolution-independent and can easily be rendered to a 128, 256, or even 4096-square texture at 64 bits per channel... OK, we probably can't use that kind of precision, but you get my point... this might eventually yield more desirable results.
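A sketch of what "resolution independent" buys you. `fittedLTC` is a hypothetical name for a fitted function with the same `[x, y] -> [value]` shape as the generated codepen function:

```js
// Sketch: a fitted continuous function can be baked to any resolution.
function bakeTable(fittedLTC, size) {
  const data = new Float32Array(size * size);
  for (let j = 0; j < size; j++) {
    for (let i = 0; i < size; i++) {
      data[j * size + i] = fittedLTC([i / (size - 1), j / (size - 1)])[0];
    }
  }
  return data; // e.g. wrap in a THREE.DataTexture at whatever size you like
}
```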
Also, I understand that just looking at the images and saying that they look similar is not good enough.
However, I have a THREE.js demo in the works that will demonstrate the results this kind of compression can yield. LTC_MAG compression is working well; I just want to get LTC_MAT (much harder) working too before I publish the demo.
Also note, the longer I train the neural network the more accurate the results...
I'll have accuracy measures available soon, next few days...
It looks like this experiment is going to take me another week to finish...
OK, the codepen now shows compression errors on an exponential scale... (1-(1-delta^10))^2
I've also improved compression time and rate from 2.5 million iterations at 8 hrs to 100k iterations at under 30 min!
It's also important to note that the original data is also fitted to an ideal function, so the source data should not be viewed as a "pure" source of truth.
Guys, this data has meaning. LTC_MAG is an energy-conservation constant which is derived from the four LTC_MAT constants that encode a particular affine transformation. Hence the fitted data, if it is to model an energy-conserving BRDF at all, must be internally consistent.
You are going to have to demonstrate that the uncompressed data remains at a minimum "reasonable" _in the context of the model_. One way to do that is to show that the maximum difference between the original data and the uncompressed data is small enough so that it does not matter. If you can do that, that would be great. :)
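A minimal version of the check being asked for here, comparing the original table against the round-tripped one:

```js
// Maximum absolute and relative differences between the original table
// and the decompressed one.
function maxError(original, decoded) {
  let maxAbs = 0, maxRel = 0;
  for (let i = 0; i < original.length; i++) {
    const abs = Math.abs(original[i] - decoded[i]);
    maxAbs = Math.max(maxAbs, abs);
    if (Math.abs(original[i]) > 1e-6) {
      maxRel = Math.max(maxRel, abs / Math.abs(original[i]));
    }
  }
  return { maxAbs, maxRel };
}
```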
@WestLangley just updated my previous comment... errors introduced by compression are now displayed
Have you checked out the codepen? You might find it really interesting... Note this is still a WIP.
http://codepen.io/sam-g-steel/pen/GWMXjB
Below is a screenshot of the latest codepen update...
The error scale is nonlinear and goes from 0 to ~0.067 (bright yellow)
@sam-g-steel Great. We are on the same page. And yes, I have enjoyed studying your codepen. :)
The max LTC_MAG error appears to occur at grazing angles and low roughness (unless the image is flipped). I would need to study those particular cases.
Ok, I must have misunderstood you :)
I don't remember which side corresponds to high roughness; I need to go back and look at the shaders.
Yes, compared to the images that @abelnation has posted, the image is flipped.
The display logic must be different. I'm reading the array row-first, left to right, top to bottom.
I ran the current version of the "Kompressor" for 3 million iterations and the errors are much improved.
Although I agree, the errors in the lower-left-hand side need to be examined further.
I suspect that some of the issue is with the original data, but I don't have proof of that right now.
13 million iterations below shows further improvement...
OK, the neural net solution works well with normalized data, but LTC_MAT is beyond the 0-1 range and was becoming difficult to program in my spare time.
So I think I've found another solution: LZMA compression...
I have a test bed here... https://codepen.io/sam-g-steel/pen/wdVbRR
where I found that the LTC tables stored as 32-bit floats can be compressed and base64-encoded as a ~82.4KB js file!
This includes the LZMA decoder...
I'm also playing with the idea of using 16-bit floats to get a ~44.2KB js file...
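For concreteness, the encode/decode path described here might look like the sketch below. `lzmaCompress` / `lzmaDecompress` are stand-ins for whatever LZMA library is used (real ones, such as LZMA-JS, are callback-based rather than synchronous):

```js
function encodeTable(floats /* Float32Array */) {
  const bytes = new Uint8Array(floats.buffer);  // raw 32-bit floats
  const packed = lzmaCompress(bytes);           // LZMA-compressed bytes
  let str = '';
  for (let i = 0; i < packed.length; i++) str += String.fromCharCode(packed[i]);
  return btoa(str);                             // base64, safe to inline in a .js file
}

function decodeTable(base64) {
  const packed = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
  const bytes = lzmaDecompress(packed);         // Uint8Array back out
  return new Float32Array(bytes.buffer);
}
```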
More coming!
Back again!
So, I've played with compressing with the neural network and LZMA together and it looks like I can get the file size down to somewhere between 11kb and 40kb. As indicated before, two channels in LTC_MAT are proving difficult to compress.
I've updated the "Kompressor" on codepen to better show the issues that I'm running into.
@mrdoob what level of compression are we looking for? Currently, I'm trying to get the LTC tables down to ~11kb with no visible artifacts. Is that a good target?
At the moment the Kompressor is running extremely slowly... so the image below is from 19k iterations and the results are very rough.
The result of one channel being compressed looks like this...
/* input is [x, y] where x & y are from 0 to 1 */
(input)=>{ let F = [0.02647954,0.578125,0.6875,0.8604925,3.409405,1.819387,1.917552,-2.838214,2.243433,0.1200451,0.578125,0.6875,-0.006630746,-3.970503,0.007899984,14.17811,-0.000004370584,4.735625,-0.004878909,4.39696,-0.0001475289,0.003284024,-0.003067013,0.07704439,-0.00055981,0.5071134,1.656844,0.02845545,6.249455,-6.554314,-3.537498,0.2499494,0.578125,0.6875,-0.005069648,-2.12862,6.988754,-1.300057,1.09208,3.085112,0.2782377,0.2200895,0.1160212,-1.265145,1.614736,-4.130901,-0.7152283,0.1716501,0.578125,0.6875,-0.0003199932,0.6069319,1.90144,-0.1448237,-0.8830041,2.198323,1.331211,0.02745657,-0.2127562,-3.567309,2.445215,-9.44644,-0.8018365,0.0267027,0.578125,0.6875,0.0008174214,4.684403,1.642788,-0.5556078,0.02414952,2.062356,4.94726,0.9838594,1.169851,4.110146,13.95682,-0.7287639,-13.7096,0.01588006,0.578125,0.6875,0.00009552195,3.480087,-1.271201,0.2638712,-8.803079,6.113408,4.902338,0.1230558,-0.63844,-1.963806,1.098551,-4.141821,-0.971415,0.107913,0.578125,0.6875,0.0003805696,1.195323,1.449212,0.04264924,-0.5273714,1.597599,1.571839,0.3420743,-0.1374936,-0.6540642,0.00940474,0.2250595,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,-3.873948,-0.009060979,0.9999496,15.62511,9.89605,-5.23944,0.00005036339,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,9.577438,0.179438,-1.770636,-1.520159,-5.153975,0.14724,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,3.656973,0.002535178,-3.575872,-5.974953,-1.39276,0.002528751,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,6.43867,0.04676708,-1.438766,-3.01468,-11.33403,0.04457992,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,7.592786,0.007537325,-3.02536,-4.880322,-10.53322,0.007480514,0.8604925,0.5071134,0.2200895,0.02745657,0.9838594,0.1230558,8.259126,0.7072759,1.85878,0.8821904,-8.459972,0.2070367,0.3420743,0.9999496,0.179438,0.002535178,0.04676708,0.007537325,0.6982149]; let func = function (input){ F[1] = input[0]; F[2] = input[1]; F[4] = F[5]; F[5] = F[6]; F[5] += F[1] * F[7]; F[5] += F[2] * F[8]; F[3] = (1 / (1 + Math.exp(-F[5]))); F[9] = F[3] * (1 - F[3]); F[10] = F[1]; F[11] = F[2]; F[26] = F[27]; F[27] = F[28]; F[27] += F[1] * F[29]; F[27] += F[2] * F[30]; F[25] = (1 / (1 + Math.exp(-F[27]))); F[31] = F[25] * (1 - F[25]); F[32] = F[1]; F[33] = F[2]; F[42] = F[43]; F[43] = F[44]; F[43] += F[1] * F[45]; F[43] += F[2] * F[46]; F[41] = (1 / (1 + Math.exp(-F[43]))); F[47] = F[41] * (1 - F[41]); F[48] = F[1]; F[49] = F[2]; F[58] = F[59]; F[59] = F[60]; F[59] += F[1] * F[61]; F[59] += F[2] * F[62]; F[57] = (1 / (1 + Math.exp(-F[59]))); F[63] = F[57] * (1 - F[57]); F[64] = F[1]; F[65] = F[2]; F[74] = F[75]; F[75] = F[76]; F[75] += F[1] * F[77]; F[75] += F[2] * F[78]; F[73] = (1 / (1 + Math.exp(-F[75]))); F[79] = F[73] * (1 - F[73]); F[80] = F[1]; F[81] = F[2]; F[90] = F[91]; F[91] = F[92]; F[91] += F[1] * F[93]; F[91] += F[2] * F[94]; F[89] = (1 / (1 + Math.exp(-F[91]))); F[95] = F[89] * (1 - F[89]); F[96] = F[1]; F[97] = F[2]; F[106] = F[107]; F[107] = F[108]; F[107] += F[3] * F[13]; F[107] += F[25] * F[35]; F[107] += F[41] * F[51]; F[107] += F[57] * F[67]; F[107] += F[73] * F[83]; F[107] += F[89] * F[99]; F[105] = (1 / (1 + Math.exp(-F[107]))); F[109] = F[105] * (1 - F[105]); F[110] = F[3]; F[111] = F[25]; F[112] = F[41]; F[113] = F[57]; F[114] = F[73]; F[115] = F[89]; F[119] = F[120]; F[120] = F[121]; F[120] += F[3] * F[15]; F[120] += F[25] * F[36]; F[120] += F[41] * F[52]; F[120] += F[57] * F[68]; F[120] += F[73] * F[84]; F[120] += F[89] * F[100]; F[118] = (1 / (1 + 
Math.exp(-F[120]))); F[122] = F[118] * (1 - F[118]); F[123] = F[3]; F[124] = F[25]; F[125] = F[41]; F[126] = F[57]; F[127] = F[73]; F[128] = F[89]; F[131] = F[132]; F[132] = F[133]; F[132] += F[3] * F[17]; F[132] += F[25] * F[37]; F[132] += F[41] * F[53]; F[132] += F[57] * F[69]; F[132] += F[73] * F[85]; F[132] += F[89] * F[101]; F[130] = (1 / (1 + Math.exp(-F[132]))); F[134] = F[130] * (1 - F[130]); F[135] = F[3]; F[136] = F[25]; F[137] = F[41]; F[138] = F[57]; F[139] = F[73]; F[140] = F[89]; F[143] = F[144]; F[144] = F[145]; F[144] += F[3] * F[19]; F[144] += F[25] * F[38]; F[144] += F[41] * F[54]; F[144] += F[57] * F[70]; F[144] += F[73] * F[86]; F[144] += F[89] * F[102]; F[142] = (1 / (1 + Math.exp(-F[144]))); F[146] = F[142] * (1 - F[142]); F[147] = F[3]; F[148] = F[25]; F[149] = F[41]; F[150] = F[57]; F[151] = F[73]; F[152] = F[89]; F[155] = F[156]; F[156] = F[157]; F[156] += F[3] * F[21]; F[156] += F[25] * F[39]; F[156] += F[41] * F[55]; F[156] += F[57] * F[71]; F[156] += F[73] * F[87]; F[156] += F[89] * F[103]; F[154] = (1 / (1 + Math.exp(-F[156]))); F[158] = F[154] * (1 - F[154]); F[159] = F[3]; F[160] = F[25]; F[161] = F[41]; F[162] = F[57]; F[163] = F[73]; F[164] = F[89]; F[167] = F[168]; F[168] = F[169]; F[168] += F[3] * F[23]; F[168] += F[25] * F[40]; F[168] += F[41] * F[56]; F[168] += F[57] * F[72]; F[168] += F[73] * F[88]; F[168] += F[89] * F[104]; F[166] = (1 / (1 + Math.exp(-F[168]))); F[170] = F[166] * (1 - F[166]); F[171] = F[3]; F[172] = F[25]; F[173] = F[41]; F[174] = F[57]; F[175] = F[73]; F[176] = F[89]; F[179] = F[180]; F[180] = F[181]; F[180] += F[105] * F[116]; F[180] += F[118] * F[129]; F[180] += F[130] * F[141]; F[180] += F[142] * F[153]; F[180] += F[154] * F[165]; F[180] += F[166] * F[177]; F[178] = (1 / (1 + Math.exp(-F[180]))); F[182] = F[178] * (1 - F[178]); F[183] = F[105]; F[184] = F[118]; F[185] = F[130]; F[186] = F[142]; F[187] = F[154]; F[188] = F[166]; var output = []; output[0] = F[178]; return output; }; return func(input); }
That is so damn cool. :)
You should offer NN compression of any uploaded texture via a web service into a bit of code. You could save petabytes in no time. :)
@mrdoob what level of compression are we looking for? Currently, I'm trying to get the LTC tables down to ~11kb with no visible artifacts. Is that a good target?
That sounds good to me 👌
I bet @takahirox has some ideas for this problem.
Looks like the compression scheme is working better than I expected...
I have 4 of the 5 channels compressed with no noticeable artifacts.
This currently compresses down to 25kb... but if we can compress all 5 channels I think the size will be around 11kb.
I hope to have something in src control soon so folks can evaluate the results themselves.
_All but LTC_MAT alpha compressed... Looking promising to me!!! :)_
ltc_mat blue needs some small tweaks... at some angles, the reflections are slightly blurred, not very noticeable.
_All channels compressed pictured below... Specular reflections are skewed!!!_
New commit can be found here... https://github.com/sam-g-steel/three.js/tree/8c1e6162239f99bd7be3696ec5619203495f7af4
Compressed LTC Data is stored here... "examples/js/lights/RectAreaLightUniformsLib-NN.js"
The LTC data has been compressed using neural networks to find an expression that represents all but one channel; the last channel remains stored as an array because it is very sensitive to errors. From there, all of the js expressions and arrays were further compressed using LZMA and base64 encoding.
The neural network logic can be found here... http://codepen.io/sam-g-steel/pen/GWMXjB
Added a compressed version of "examples/webgl_lights_rectarealight_compression_test.html".
So, I noticed artifacts with high roughness... I believe this is due to errors in the ltc_mat blue channel... I'm looking further into that at the moment.
Still actively working on this...
Expect to have an update some time around New Year's.
Lately I've been working on a neural network visualizer that helps get more accurate results with less training time... Hopefully that translates into more usable data compression for the LTC_MAT blue and alpha channels.
@sam-g-steel Are you able to post up-to-date versions of the live demo? Would be cool to view it animating to see what sort of artifacts are visible.
@honorabel https://ltc-kompressor.azurewebsites.net/ has a preview of the neural network visualizer...
It's not complete, just a preview... Unfortunately, this code is hard to set up on codepen, so I had to set it up elsewhere... hence the new URL.
Note, the code is designed to run on Chrome... and may not work in other browsers...
Oh, nice. I was referring to the actual RectAreaLight demo where one can view the fitted data with an actual 3d rendering.
@sam-g-steel you should write a paper on this and submit it to SIGGRAPH once the results work well. SIGGRAPH deadline is here: https://s2018.siggraph.org/conference/conference-overview/technical-papers/ or talks: http://s2018.siggraph.org/conference-overview/talks/
I think it would be very interesting, especially if you applied it to other lookup tables as well. Include how you found the NN, and how to apply it to new large tables.
There is actually a large body of work right now in capturing true BRDFs via measurement tools, but they result in 100+ MB files. They are called scans -- one example is VRscans: https://www.chaosgroup.com/vrscans Applying NN compression methods to these may be very interesting and relevant. It would be great to get these down to ultra-small sizes.
@bhouston I would love to do a white paper for SIGGRAPH! Thanks for the suggestion!
@honorabel I'm working on a preview... I hope to have it up soon! I've been redoing some logic to yield better compression and to work with the new improvements that @WestLangley added to the LTC Dataset.
Above is a link to what I currently have...
Using the newer version of LTC I was able to compress the LTC data to ~24kb (this includes decompression code) from 301kb.
That is good! I am not able to see a perceptible difference, at least when the light is moving.
Have you graphed these functions? Each channel can be graphed as a function of the two parameters, and the functions are, by-and-large, fairly smooth. It would seem that downsampling from 64 x 64 to 32 x 32 -- or even to 16 x 16 -- would be reasonable, and would reduce the data by a factor of 4 (or 16).
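A minimal sketch of that suggestion: box-filter a channel down and let the GPU's bilinear filtering reconstruct smooth values at lookup time (sizes assumed to divide evenly):

```js
// Box-filter a 64x64 channel down to, e.g., 16x16.
function downsample(src, srcSize, dstSize) {
  const k = srcSize / dstSize; // e.g. 64 / 16 = 4
  const dst = new Float32Array(dstSize * dstSize);
  for (let j = 0; j < dstSize; j++) {
    for (let i = 0; i < dstSize; i++) {
      let sum = 0;
      for (let y = 0; y < k; y++) {
        for (let x = 0; x < k; x++) {
          sum += src[(j * k + y) * srcSize + (i * k + x)];
        }
      }
      dst[j * dstSize + i] = sum / (k * k); // average of the k x k block
    }
  }
  return dst;
}
```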
Also, one could downsample using b-spline patches or something; this way one could use non-uniform control points.
@WestLangley Awesome! I tried to avoid having any noticeable artifacts...
However, two things I should have mentioned are...
_The network that I was developing only seemed to work with the old LTC data... I was going to have to spend a lot more time, months, to get good results for the new stuff!_
_The new compression scheme varies the precision of the values... so the right side of the image, which corresponds to rougher materials, is less precise than the left side. This is because less precision is needed for rougher materials._
Note, in addition to varying the precision of the data, I also rearranged the data, printed it in a more compact text format, and compressed it further using LZMA. I went with a text-based format because LZMA compresses the text data better than the binary data.
The LZMA decompress code accounts for 6.7kb of the total solution's size of ~24kb.
If three.js has any LZMA or ZIP decompression defined elsewhere, we could look at using that and reduce the LTC data to as little as ~17kb.
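For illustration, the variable-precision idea mentioned above might look like the sketch below; the actual precision curve used is the base-10 curve pictured below, not this made-up linear one:

```js
// Illustrative only: quantize with fewer significant digits as roughness
// increases, since rough materials tolerate more error.
function quantizeWithCurve(table, size) {
  const out = [];
  for (let j = 0; j < size; j++) {
    for (let i = 0; i < size; i++) {
      const roughness = i / (size - 1);                          // left -> right
      const digits = Math.max(2, Math.round(6 - 4 * roughness)); // 6 digits down to 2
      out.push(Number(table[j * size + i].toPrecision(digits)));
    }
  }
  return out; // shorter decimal strings also compress better under LZMA
}
```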
LTC_MAT_1
LTC_MAT_2
Base 10 precision curve...
This might need to be adjusted slightly to remove very subtle artifacts.
This fork has my changes... https://github.com/sam-g-steel/three.js/tree/dev
This script is what compresses the data... https://github.com/sam-g-steel/three.js/blob/dev/examples/js/lights/LTC_Compress_LZMA.chrome.js
@mrdoob @WestLangley any feedback?
Yes, as I mentioned previously, I suggest you graph the surface data. XYZ.
IMO, this is not an image compression problem. Treat it as a downsampling problem. That's my suggestion.
Yes, as I mentioned previously, I suggest you graph the surface data. XYZ.
I agree that this is probably best.
Also, we should not use LZMA, because three.js, when served by competent webservers, will already be gzipped.
@bhouston I suggest we look further into gzip vs lzma... https://tukaani.org/lzma/benchmarks.html
Depending on the dataset, LZMA can be many times smaller than gzip... However, we are using ~6kb to store the LZMA decode function.
@sam-g-steel A quick hack produced this. I'm sure you could do better. :)
If you look at the other channels, there appears to be some edge-case estimation errors; but who knows...
Nice! I'll look further into it...
Ok, I think I'll get back on this one soon. To get the levels of compression at the quality desired I needed a more powerful machine... I think I have that now.
However, I am currently working on adding real-time raytraced shadows to a three.js app. Once I finish that, I'll be back on this ticket.
BTW I brought this up today on Twitter and Stephen Hill (one of the original authors) responded with some ideas: https://twitter.com/self_shadow/status/1303681760694120448
I love that this thread is still going
After #20078, a deeper integration of RectAreaLight that makes RectAreaLightUniformsLib obsolete can hopefully be achieved more easily.