In Babylon.js there's a function called applyDisplacementMap that I find really useful. It draws the displacement map to a canvas, does a texture fetch on the CPU, and then modifies the mesh's position attribute.
The reason I find this useful is that, AFAIK, some devices do not support texture lookups in the vertex shader, so height mapping on the CPU could be a fix for such devices. Also, in cases where the user doesn't change the displacementBias/displacementScale attributes dynamically at runtime, applying the height texture on the CPU before rendering could be a performance win, since no texture lookup would be performed on the GPU.
Any opinions?
Note: If we decide to implement such functionality, the canvas needs to be flipped vertically (2 lines of extra code), since flipY = true by default for textures in three.js.
I think this could be a nice example.
You could just add it as an example, and keep the API similar to the current three.js API by supporting displacement scale and bias -- and probably offset, repeat.
ApplyDisplacementMap( geometry, texture, scale, bias );
Alternatively, you could add it as a feature:
geometry.applyDisplacementMap( texture, scale, bias );
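For illustration, here is a minimal sketch of the core of such a function, written against flat position/normal/uv arrays rather than actual three.js types (the function name, parameter order, and nearest-texel sampling are all assumptions, not the real API). It assumes the displacement texture has already been drawn to a canvas and read back as grayscale heights in [0, 1]:

```javascript
// Sketch: apply a displacement map on the CPU (hypothetical helper, not
// the actual three.js API).
//
// positions, normals: flat arrays [x0,y0,z0, x1,y1,z1, ...]
// uvs:                flat array [u0,v0, u1,v1, ...], each in [0,1]
// heights:            grayscale values in [0,1], row-major, width*height long
function applyDisplacementMap(positions, normals, uvs, heights,
                              width, height, scale = 1, bias = 0) {
  const vertexCount = positions.length / 3;
  for (let i = 0; i < vertexCount; i++) {
    const u = uvs[2 * i];
    const v = uvs[2 * i + 1];
    // Nearest-texel CPU "texture fetch". Because flipY === true by
    // default for three.js textures, sample with v inverted (flipping
    // the canvas before drawing amounts to the same thing).
    const x = Math.min(width - 1, Math.floor(u * width));
    const y = Math.min(height - 1, Math.floor((1 - v) * height));
    const d = heights[y * width + x] * scale + bias;
    // Displace the vertex along its normal, mirroring the GPU path.
    positions[3 * i]     += normals[3 * i]     * d;
    positions[3 * i + 1] += normals[3 * i + 1] * d;
    positions[3 * i + 2] += normals[3 * i + 2] * d;
  }
  return positions;
}
```

A real implementation would read the heights from the canvas via getImageData and write the result back into the geometry's position attribute, but the displacement math itself is just `height * scale + bias` along the vertex normal, as above.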
Will commit as an example. Already implemented this but need to clean up the code a bit :)
I've made a PR, closing the issue.