I'm not 100% sure this is a bug vs my bad code, but with the same code r115 shows a yellow square (which is what I expect) and r119 shows a black square.
https://jsfiddle.net/greggman/42gm165d/ (r119)
https://jsfiddle.net/greggman/74w5tzub/ (r115)
First let me explain my code. I'm trying to use morph targets with both positions and vertex colors. three.js doesn't support vertex color morphing so I'm hacking those in.
Three.js can be given a bunch of morph targets (say 100 targets), and then based on the influences of those 100 targets it will choose up to 4 of them to actually pass to the shader (I think that's how it works).
In order to know which color data to pass to the shader, I need to know which position data three.js decided to pass, so I iterate over the morphTarget0-3 attributes and check which BufferAttribute (which data) is being used:
for (let i = 0; i < maxLiveTargets; ++i) {
  // check if three.js is using a certain position morph target
  const bufferAttribute = geometry.getAttribute(`morphTarget${i}`);
  if (bufferAttribute) {
    // figure out which position data is assigned to this morphTarget
    // assume the names are like position0, position1, position23, etc.
    const positionTargetNdx = parseInt(bufferAttribute.name.substr(8));
    // add in the corresponding color target
    geometry.setAttribute(`morphColor${i}`, arrayOfColorBufferAttributes[positionTargetNdx]);
  }
}
I execute the above code in onBeforeRender, and it turns out that in r115 the information about which morph targets will be used is not up to date until the 3rd render, but it does get there by the 3rd render. On r119 it never gets there.
In the example there are 4 cube morph targets (red, yellow, green, cyan). I set the influences to 0, 1, 0, 0, so I should see a yellow cube. I added some WebGL logging. On r115 I see this:
first render
-[ render ]---------------------------------------
bindBuffer: 0target (position)
bindBuffer: normal
bindBuffer: uv
bindBuffer: color0 (color)
bindBuffer: 1target (position)
bindBuffer: 2target (position)
bindBuffer: 3target (position)
---[ on before render ]---
check attribute: morphTarget0 attributeName: (N/A) influence: 0 does NOT exist
---[end on before render]---
bindBuffer: 0target (position)
vertexAttribPointer: position 3 FLOAT false 0 0 0target (position)
bindBuffer: 1target (position)
vertexAttribPointer: morphTarget0 3 FLOAT false 0 0 1target (position)
First we see all these bindBuffer calls. Those are the ones used when uploading the vertex data. Then you can see that at onBeforeRender there was no info about which morph targets are used, so no color is set. Three.js then sets up 2 attributes: 'position' to cube 0 (red) and morphTarget0 to cube 1 (yellow). With no color set, it will be black.
2nd render
-[ render ]---------------------------------------
---[ on before render ]---
check attribute: morphTarget0 attributeName: 1morph-target-position influence: 0 exists:
set: attrib(morphColor0) to bufferAttrib(morph-color-data1)
check attribute: morphTarget1 attributeName: (N/A) influence: 1 does NOT exist
---[end on before render]---
bindBuffer: 0target (position)
vertexAttribPointer: position 3 FLOAT false 0 0 0target (position)
bindBuffer: 1target (position)
vertexAttribPointer: morphTarget0 3 FLOAT false 0 0 1target (position)
This time we see in onBeforeRender that my code saw 1morph-target-position (the yellow cube's positions) was assigned to morphTarget0, so it assigns the corresponding color morph-color-data1 to morphColor0. But then we see three.js ignores that when actually setting up the attributes, even though it sets up the attributes after onBeforeRender.
3rd render
-[ render ]---------------------------------------
bindBuffer: color1 (morphColor1)
---[ on before render ]---
check attribute: morphTarget0 attributeName: 1morph-target-position influence: 0 exists:
set: attrib(morphColor0) to bufferAttrib(morph-color-data1)
check attribute: morphTarget1 attributeName: (N/A) influence: 1 does NOT exist
---[end on before render]---
bindBuffer: 0target (position)
vertexAttribPointer: position 3 FLOAT false 0 0 0target (position)
bindBuffer: color1 (morphColor1)
vertexAttribPointer: morphColor0 3 UNSIGNED_BYTE true 0 0 color1 (morphColor1)
bindBuffer: 1target (position)
vertexAttribPointer: morphTarget0 3 FLOAT false 0 0 1target (position)
This time we see the same onBeforeRender behavior (the color data for cube 1 (yellow) is assigned to morphColor0), and then three.js actually uses it and we get a yellow cube.
Switching to r119, the first and second render are exactly the same as above, so no need to repeat them, but the 3rd render does not set up the color attribute:
-[ render ]---------------------------------------
bindBuffer: color1 (morphColor1)
---[ on before render ]---
check attribute: morphTarget0 attributeName: 1morph-target-position influence: 0 exists:
set: attrib(morphColor0) to bufferAttrib(morph-color-data1)
check attribute: morphTarget1 attributeName: (N/A) influence: 1 does NOT exist
---[end on before render]---
In fact no attributes are set up. My guess is the caching that was added thinks nothing needs to happen, so it just binds the existing VAO, nothing changes, and the cube stays black.
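To illustrate the kind of caching I suspect (this is a made-up conceptual sketch, not three.js's actual code), imagine the attribute setup guarded like this:

// Conceptual sketch only (made-up names, not three.js internals): a VAO cache
// keyed on the geometry's current attribute names. If the key is unchanged,
// the cached VAO is re-bound and vertexAttribPointer is never called again,
// so an attribute added after the cache was filled never reaches the shader.
const vaoCache = new Map();

function setupVertexAttributes(gl, geometry, bindAllAttributes) {
  const key = Object.keys(geometry.attributes).sort().join(',');
  const entry = vaoCache.get(geometry);
  if (entry && entry.key === key) {
    gl.bindVertexArray(entry.vao);   // fast path: skip all attribute setup
    return;
  }
  const vao = gl.createVertexArray();
  gl.bindVertexArray(vao);
  bindAllAttributes(gl, geometry);   // slow path: bindBuffer + vertexAttribPointer calls
  vaoCache.set(geometry, { key, vao });
}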
Maybe I'm using three.js wrong, so sorry if this really belongs on S.O., but I have 2 questions.
Thanks for the report. Let me investigate.
Conclusion
I think I have to say that adding/removing attributes to/from geometry in onBeforeRender is not recommended, not only in r119 but even in r115. I'll explain the reason below.
Workaround
A workaround I came up with so far is picking up the morph color attributes before calling renderer.render(), in the same way the renderer picks up the morph target attributes, like this:
function onBeforeRender(mesh) {
  const geometry = mesh.geometry;
  // remove all the color attributes
  for (const {name} of colorAttributes) {
    geometry.deleteAttribute(name);
  }
  // Find the up-to-eight biggest influences, excluding zero influences
  const influences = [];
  for (let i = 0; i < mesh.morphTargetInfluences.length; i++) {
    influences.push([i, mesh.morphTargetInfluences[i]]);
  }
  const effectiveInfluences = influences
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]))
    .slice(0, 8)
    .sort((a, b) => a[0] - b[0]) // comment out this line for r115
    .filter(a => !!a[1]);
  // Add corresponding color attributes
  for (let i = 0; i < effectiveInfluences.length; ++i) {
    const ndx = effectiveInfluences[i][0];
    const name = `morphColor${i}`;
    geometry.setAttribute(name, colorAttributes[ndx].attribute);
  }
}

function render() {
  onBeforeRender(scene.children[0]);
  renderer.render(scene, camera);
}
Example with r119: https://jsfiddle.net/7or8t4d1/ It seems to work.
The limitation is that this idea can't be used if multiple meshes share a geometry. I haven't come up with an idea for that case yet.
Why we shouldn't add attributes in onBeforeRender?
The reason we shouldn't add/remove attributes to/from geometry in onBeforeRender() is the renderer's WebGL buffer management: the renderer doesn't expect attributes to be updated by the user inside .render().
Inside .render(), WebGL buffer handling and onBeforeRender() happen in roughly this order: the renderer first creates/updates the WebGL buffers for the attributes currently on the geometry, then fires onBeforeRender(), then picks the morph targets and adds the morphTargetN attributes to the geometry, and finally sets up the vertex attributes (and, in r119, the VAO).
From this, I can explain how the problems you reported happen.
first render
Then you can see at onBeforeRender there was no info about which morph targets are used so no color is set.
onBeforeRender() is fired before the renderer sets the morph attributes, so you don't see the morph attributes in the geometry yet.
2nd render
But then we see three.js ignores that when actually setting up the attributes even though it sets up the attributes after onBeforeRender
onBeforeRender() is fired after the WebGL buffers are created. So in this frame the WebGL buffers corresponding to the morph color attributes are not created, even if you add the morph color attributes in onBeforeRender(). And the morph attributes you see in the geometry in onBeforeRender() are one frame old.
3rd render in r115
The WebGL buffer corresponding to the morph color attribute is bound in the third render, so it looks like it works. But note that the morph attributes you see in the geometry in onBeforeRender() are one frame old. So if your app updates morphTargetInfluences each frame, for example, you may see unexpected morph colors.
3rd render in r119
In this example, the geometry hasn't changed since the previous render call (in onBeforeRender() you remove and re-add the same yellow morph attribute), so the renderer skips the buffer binding. Remember that in the second render the WebGL buffer corresponding to the morph color attribute wasn't created. In the third render the buffer bindings are skipped, so the morph color buffer is never bound, and you keep seeing a black box.
Is this a bug?
- Is this a bug in three?
The WebGL resource management in the renderer is complex, and it's probably hard to resolve the issue that "adding/removing attributes in onBeforeRender() causes problems". So it may be reasonable to think of this as a limitation of onBeforeRender() rather than a bug.
What do you think? @mrdoob @Mugen87
Thank you for taking a look at the code.
I kind of expected you might come to that conclusion. It doesn't seem like a good API if I have to guess what the API will do and reproduce that result outside of it. Instead, I should be able to ask the API what it did or what it will do, and then do the correct corresponding thing outside. That way, if what it does changes, my code still works.
As an example, take slice(0, 8). Maybe that will change to 4 on low-powered devices or 16 on high-powered devices. A different API would let that be fluid and let the app respond to it, but the current API is entirely opaque. You give it 100 morphs and 100 influences, and whether it picks 1, 2, 4, 12, or 20, you have no idea. What order it applies them in is also undefined. I get that it's likely to apply them lowest first, but that's certainly not a given.
Off the top of my head, maybe a function like mesh.getUsedMorphTargetBufferAttrbutes(renderer) that returns an array of the BufferAttributes of the morph targets that will be used. Then it can choose any number and an app can set up for that?
Should VAO be reverted?
(Sorry, I'm overwhelmed and frustrated with this)
Should VAO be reverted?
I don't think we should revert (and I hope we don't), because the root issue of this problem does not come from VAO, as I wrote above. Adding/removing attributes in onBeforeRender() has been problematic even before we adopted VAO.
But yeah, sorry for the bugs reported in other threads that are already fixed. VAO actually improves the performance, so I'd prefer to further clean up and optimize the VAO-related code rather than revert it.
(Even if you unfortunately end up deciding to revert, I'd be pleased if you opened an issue or PR before doing that and gave us a chance to discuss, because I really want VAO in three.js.)
I also think that VAO usage is a must for a WebGL engine. We should clearly state that we do not support specific use cases (like adding/removing attributes in onBeforeRender()) rather than revert VAO.
I suggest we mark the issue as Won't Fix and close it.
@greggman for motivation's sake... can you show something a bit more compelling than a yellow square?
Meaning... you're hacking around a part of the API that is hard to guarantee is not going to break.
At the end of the day, what you want is to be able to morph between colors and not only positions, right?
mesh.getUsedMorphTargetBufferAttrbutes(renderer) sounds interesting. But it would not resolve the case where a geometry is shared among multiple meshes. And I think "the renderer automatically adds/removes some of the morph position and normal attributes to/from the geometry on the fly in .render(), depending on mesh.morphTargetInfluences" is a bit too specific to our renderer implementation; users don't need (or shouldn't need) to know it.
An idea in my mind so far is that the renderer adds not only position and normal but all types of morph attributes set in geometry.morphAttributes to the geometry on the fly in .render(). For example, if geometry.morphAttributes has position, normal, and foo, their attributes are added to the geometry, depending on mesh.morphTargetInfluences, as morphTargetN, morphNormalN, and morphFooN, where N is an integer 0-7. Users prepare their own shader using their morph attributes with ShaderMaterial or with onBeforeCompile(). Material has morphTargets and morphNormals boolean properties to indicate whether to support morph attributes, so we may need to add a new property such as morphCustoms or something.
If we don't need that flexibility but just want to add color morphing, supporting geometry.morphAttributes.color and adding material.morphColors would be good enough, simpler, and clearer.
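A rough sketch of the simpler color-only variant from the user's side might look like this (morphColors is a hypothetical flag from this proposal, not an existing three.js property; geometry, positionTargets, and colorTargets are assumed to be the app's BufferGeometry and arrays of BufferAttributes):

// Hypothetical API sketch: material.morphColors does not exist in three.js;
// geometry.morphAttributes.color is the proposed extension. The renderer would
// pick up the color targets and bind them as morphColor0..7, mirroring how it
// binds morphTarget0..7 from morphAttributes.position today.
geometry.morphAttributes.position = positionTargets; // existing three.js usage
geometry.morphAttributes.color = colorTargets;       // proposed: per-target color BufferAttributes

const material = new THREE.MeshBasicMaterial({
  morphTargets: true,   // existing flag (r115/r119 era)
  morphColors: true,    // hypothetical flag from this proposal
  vertexColors: true,
});

const mesh = new THREE.Mesh(geometry, material);
mesh.morphTargetInfluences[1] = 1; // colors would blend with the same influences as positions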
Here's the use case.
https://r105.threejsfundamentals.org/threejs/threejs-lots-of-objects-morphtargets-w-colors.html
Basically making the WebGL Globe but fixing the part where colors only match the first dataset.
The solution I'd prefer is to be able to ask three what it is going to do:
const morphInfo = someFunctionGetMorphInfo(someDataNeededGetComputeMorphInfo)
It doesn't have to be usable in onBeforeRender.
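To make that concrete (every name and the result shape here are made up for illustration; no such function exists in three.js), the returned info might map shader slots to target indices so the app can attach its own data:

// Entirely hypothetical sketch. The point is only that the app can ask which
// target went to which shader slot instead of re-implementing the renderer's
// selection logic.
const morphInfo = someFunctionGetMorphInfo(mesh);
// e.g. morphInfo might be an array like:
// [ { slot: 0, targetIndex: 1, attribute: mesh.geometry.morphAttributes.position[1] }, ... ]
for (const { slot, targetIndex } of morphInfo) {
  mesh.geometry.setAttribute(`morphColor${slot}`, colorAttributes[targetIndex].attribute);
}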
To be clear, I'd prefer not adding color support to three.js specifically but a solution that allows the user to add their own code and still utilize what three is already doing. That's the point of the example. It is not "how do I morph colors", it is "how do I add my own data to morph targets"
I can use the solution takahirox posted above and just comment that this isn't really something three wants to support, and that you should really write your own custom shaders and your own morph target system if that's the recommended solution. I was just hoping that, given the code to support morph targets is already in three.js, it was possible to piggyback off that rather than having to go 100% custom.
@greggman Wouldn't it be better for that use case to use InstancedMesh? Especially now that it supports color per instance.
I could have sworn I had a PR about this, but it seems I was wrong.
What I hear:
- I like the convenience of an onBeforeRender "hook"
- it doesn't work
What I think is the solution:
- have the convenience of some "hook"
- have that hook fire _before_ three does its under-the-hood magic
While this may be an esoteric use case, I think onBeforeRender has the same limitations when used with much more common cases (if I remember correctly, changing uniforms at this stage was also iffy).
Before we ever call renderer.render() and before we ever let three do its under-the-hood magic, we are free to do all kinds of configuration on objects that we created. Use cases such as this one included - we should be able to do almost anything with the buffers and attributes we can get a handle to. It's when this convenience of a "hook" comes into play that things start to break.
Consider an additional hook called updateStuffHere. We could implement it as such:
import { Object3D, Scene } from 'three'

// shape of the per-frame data is up to the app
interface IFrameData { [key: string]: unknown }

const scene = new Scene()
scene.autoUpdate = false

class MyConvenientObject3D extends Object3D {
  public updateStuffHere( frameData: IFrameData ){}
}

// returns a visitor that updates matrices and fires the hook, depth first
const myUpdatePass = ( frameData: IFrameData ) => ( object: Object3D ) => {
  if ( object.matrixWorldNeedsUpdate ) object.updateMatrixWorld()
  if ( object instanceof MyConvenientObject3D ) object.updateStuffHere( frameData )
  object.children.forEach( myUpdatePass( frameData ) )
}

function myLoop(){
  requestAnimationFrame( myLoop )
  const frameData: IFrameData = { ...someData } // someData, renderer, and camera come from the app
  const updater = myUpdatePass( frameData )
  updater( scene )
  renderer.render( scene, camera )
}
Question: would something like this be able to replicate the functionality from the examples? animate()? init()?

@greggman Wouldn't it be better for that use case to use InstancedMesh? Especially now that it supports color per instance.
Not sure that's a win. For 1000 cubes, the morph target solution requires updating at most 8 attributes, 8 influence values, and 1 draw call per frame. The InstancedMesh solution requires computing 1000 lerped matrices, updating 1000 matrices, computing 1000 lerped colors, updating 1000 colors, and 1 draw call per frame.
In any case, for the sample to work ATM I updated it to use @takahirox's example, shortened to:
// remove all the color attributes
for (const {name} of colorAttributes) {
  baseGeometry.deleteAttribute(name);
}

// three.js provides no way to query this so we have to guess and hope it doesn't change.
const maxInfluences = 8;

// three provides no way to query which morph targets it will use
// nor which attributes it will assign them to so we'll guess.
// If the algorithm in three.js changes we'll need to refactor this.
mesh.morphTargetInfluences
  .map((influence, i) => [i, influence])            // map indices to influence
  .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]))  // sort by highest influence first
  .slice(0, maxInfluences)                          // keep only top influences
  .sort((a, b) => a[0] - b[0])                      // sort by index
  .filter(a => !!a[1])                              // remove no-influence entries
  .forEach(([ndx], i) => {                          // assign the attributes
    const name = `morphColor${i}`;
    baseGeometry.setAttribute(name, colorAttributes[ndx].attribute);
  });
and it's no longer in onBeforeRender; it's just part of the render loop.
If the top few lines of WebGLMorphTarget were separated into a utility function the user could call to find out the morph target usage, that would be my preference. Then things like maxInfluences, and which BufferAttributes get assigned to which shader attributes, could effectively be queried.