Three.js: FBX Loader Web Worker Support

Created on 10 Jan 2019 · 21 comments · Source: mrdoob/three.js

Description of the problem

Currently, using the FBX Loader to load face rigs locks up the UI for upwards of 20 seconds while parsing the FBX tree. The model I'm loading is about 50 MB, and passing an onProgress callback only reports progress during the initial loading of the buffer by the Three FileLoader. After that, the FBX Loader takes over parsing the FBX tree and can block the UI.

I attempted to alleviate this by offloading the FBX loading to a Web Worker after seeing that the OBJ loader supports web workers, but unfortunately found that, at the time of parsing textures, the Three ImageLoader uses the DOM API createElementNS to load the images. Web Workers don't have access to this function and as a result error out when parsing the textures.

Will the FBX Loader support web workers via the WorkerDirector, similar to how OBJLoader2 does, as shown in the Web Worker LoaderDirector Parallels demo? Or is there a better way to approach loading FBX files via the FBX Loader without experiencing the UI lock-up?

Three.js version
  • [ ] Dev
  • [x] r100
  • [ ] ...
Browser
  • [ ] All of them
  • [ ] Chrome
  • [ ] Firefox
  • [ ] Internet Explorer
  • [x] Electron/Chromium
OS
  • [ ] All of them
  • [x] Windows
  • [ ] macOS
  • [ ] Linux
  • [ ] Android
  • [ ] iOS
Hardware Requirements (graphics card, VR Device, ...)

CPU: Intel i7-4720HQ @ 2.60GHz
GPU: NVIDIA Quadro K2200M

Labels: Enhancement, Loaders


All 21 comments

/cc @kaisalmen any idea what would be involved here?

If you're able to use a more optimized data format, in previous tests I've seen glTF models parse 95% faster than FBX equivalents. (see fbx2gltf and https://threejs.org/docs/#manual/en/introduction/Loading-3D-models)

FBX importing is a hard requirement for this project, so we are stuck with that.

Funnily enough, I was looking at #14783 and tested that loader. It loaded in 3 seconds. I dug into the diff and noticed the version of the loader in the PR ignores any morph targets after the 8th one in parseMorphTargets. This makes sense now, given the facial rig I am loading has 136 blendshapes and the model is 116,640 verts. I know that normally only 8 blendshapes are supported natively in Three, but I wrote a custom material shader that took the imported blendshapes, encoded in a 4096x4096 texture, along with their influences, and updated the vertices all on the GPU. This allowed us to drive all 136 blendshapes in real time to provide realistic facial animation in Three.

I'm still wondering, however, if this heavy operation could be offloaded to a web worker, as having access to those blendshapes is important for us.

Edit: Never mind, it isn't parseMorphTargets that is the heavy operation; rather, it is the function that uses the parsed deformers, GeometryParser().parse( deformers ).

I know that normally only 8 blendshapes are supported natively in Three...

Technically you can have more, but only 8 can be active (weight > 0) at the same time.

I think a worker-based FBX loader is possible, but would probably need to be either (a) a fork of FBXLoader e.g. FBXLoader2, or (b) a wrapper around FBXLoader. I'm not sure if someone has bandwidth to do that right now...

Technically you can have more, but only 8 can be active (weight > 0) at the same time.

Sorry, that is what I meant. For our use case, though, we can have up to 40 blendshapes active in any given frame, which is why we went the custom shader route.

I appreciate your thoughts on the issue. It would be nice to see the FBX Loader improve in this way. In the meantime I will look at parseMorphTargets and see what sort of optimizations can be done.

Again, any additional thoughts or workarounds are welcome as well. Thanks!

Hey,
I am dealing with the same issue. Did you find any solution? I am trying to separate the heavy functions from the loader to create a worker for them, but it is not easy, since I need to refactor the code because it works with events. I would like to hear your solution if you make progress on this issue.

Hey @umurcg, we haven't really found a solution yet on our end. Due to the nature of the application we are developing, we have decided for the time being to create another Electron window to serve as a loading overlay while the main window loads FBX files. It is not ideal, but the FBX loader relies heavily on the DOM to load images for textures, making every attempt to move it into a dedicated web worker impossible.

@JosephCoppola-FW thanks for the update and for sharing some details on your workaround

Hey @JosephCoppola-FW, I think I found a workaround. I've separated the parseGeometries method and the parseMaterials method into two different scripts. I call the geometry parser from a web worker, which prevents the UI lock. After that I parse the materials on the main thread, and when the worker is done parsing the geometry, I assign the materials to the objects by hand.

I will write a clean version of this approach. If you are still interested, I can upload the code.
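The hand-off described above can be sketched roughly like this (a minimal sketch with a hypothetical helper, not three.js API): the worker posts the parsed attribute arrays back as transferables, and the main thread rebuilds the geometry and attaches the materials it parsed itself.

```javascript
// Runs in either context: collect attribute arrays plus their underlying
// ArrayBuffers so postMessage can transfer them instead of copying.
function packAttributes(attributes) {
  const payload = {};
  const transferables = [];
  for (const name of Object.keys(attributes)) {
    const attr = attributes[name];
    payload[name] = { array: attr.array, itemSize: attr.itemSize };
    transferables.push(attr.array.buffer);
  }
  return { payload, transferables };
}

// Worker side (sketch):
//   const { payload, transferables } = packAttributes(geometry.attributes);
//   self.postMessage({ attributes: payload }, transferables);
//
// Main-thread side (sketch, r100-era API):
//   const geometry = new THREE.BufferGeometry();
//   for (const name of Object.keys(msg.data.attributes)) {
//     const a = msg.data.attributes[name];
//     geometry.addAttribute(name, new THREE.BufferAttribute(a.array, a.itemSize));
//   }
//   mesh.material = materialParsedOnMainThread; // assigned by hand, as above
```

Transferring the buffers keeps the worker-to-main handoff cheap even for large geometry, since no copy is made.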

@umurcg Interesting! Yeah I would love to see what you found.

@umurcg very cool! Would love to see your code 👍

@JosephCoppola-FW any tips on implementing that texture-based approach for many blendshapes? thanks!

@imgntn For sure. Code is a bit messy right now, needs optimizing, and references specific model objects. Note this is using R100 of Three.

So on load of the model we create a data texture with the size 4096 by 4096 and load that up with geometry data for every blendshape as such:

```js
// Create a data texture containing all blendshape verts.
// Note: this targets r100 (addAttribute, THREE.RGBFormat).
let size = 4096 * 4096;
let data = new Float32Array(3 * size);
let vertIndexs = new Float32Array(this.model.children[2].geometry.attributes.position.count);
// Loop over each original vertex
let stride = 0;
let vertexId = 0;
for (let v = 0; v < this.model.children[2].geometry.attributes.position.array.length; v += 3) {
  // Loop over all blendshapes
  for (let i = 0; i < this.model.children[2].geometry.morphAttributes.position.length; i++) {
    let morphAttr = this.model.children[2].geometry.morphAttributes.position[i];
    // Copy x, y, and z for the given vertex
    data[stride] = morphAttr.array[v];
    data[stride + 1] = morphAttr.array[v + 1];
    data[stride + 2] = morphAttr.array[v + 2];

    stride += 3;
  }
  // Store this vertex's index so the shader can locate its data
  // (assign before incrementing so index 0 is filled too)
  vertIndexs[vertexId] = vertexId;
  vertexId++;
}
this.model.children[2].geometry.addAttribute('vertIndex', new THREE.BufferAttribute(vertIndexs, 1));
// Create the data texture and place it on a shader material
let dataTexture = new THREE.DataTexture(data, 4096, 4096, THREE.RGBFormat, THREE.FloatType);
dataTexture.needsUpdate = true;

let uni = {
  texture0: { type: 't', value: dataTexture },
  influences: { value: this.model.children[2].morphTargetInfluences },
  mainTexture: { type: 't', value: texture }
};
let shaderMat = new THREE.ShaderMaterial({ uniforms: uni });

this.model.children[2].material = shaderMat;
```

Then in the vertex shader we take the uniform data, as well as the vertex index as an attribute, and modify all the vertex positions based on the blendshape influences passed in. Note that it is hardcoded to 136 blendshapes; this will obviously change based on how many blendshapes you have on the model. This is also unoptimized. From the passed-in data, you can calculate the texture coordinate that contains the vertex data for any given vertex on the model at 100 percent blendshape influence. You then take that data, set the vertex position to the current vertex position minus the vertex at 100 percent influence, and multiply it by the actual influence of the blendshape for that frame. That is how we were able to get 136 blendshapes firing in Three.js:

```glsl
// Data texture
uniform sampler2D texture0;
// Blendshape influences
uniform float influences[136];
// Current vertex index
attribute float vertIndex;

varying vec2 vUv;

void main() {
  vUv = uv;
  vec4 transformed = vec4(position, 1.0);

  // Offset used for finding the x y coordinates on the data texture
  float offset = vertIndex * 136.;
  // Loop over every blendshape
  for(int i=0; i<136; i++) {
    float iFloat = float(i);
    // If influence is 0, let's not waste GPU processing; move on
    if(influences[i] == 0.) {
      continue;
    }
    // Find the x and y position of the vertex data based on vertex index and blendshape index
    float x = mod(offset + iFloat, 4096.);
    float y = ((offset + iFloat) / 4096.);

    // Grab the data at x and y
    vec2 texCoord = vec2(x / 4096., y / 4096.);
    vec4 data = texture2D(texture0, texCoord);

    // Modify the current vertex position with the data found in the texture and the current blendshape influence
    transformed.x -= (position.x - data.x) * influences[i];
    transformed.y -= (position.y - data.y) * influences[i];
    transformed.z -= (position.z - data.z) * influences[i];
  }

  gl_Position = projectionMatrix * modelViewMatrix * transformed;
}
```

For us, we were able to use one 4096 by 4096 texture to hold the blendshape data of a model with over 100k verts and 136 blendshapes. The texture has room for 16,777,216 data points, which was more than enough for this model. If you need more, from what I understand you can pass up to 8 textures to a shader.

Credit to @jspdown for the idea and the help implementing. He originally came up with the solution here: #6465
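As a quick sanity check on the capacity claim, the numbers from this thread (116,640 verts, 136 blendshapes) do fit in a single 4096x4096 texture:

```javascript
// Capacity check for the 4096x4096 blendshape data texture described above.
const texels = 4096 * 4096;           // one RGB texel per (vertex, blendshape) pair
const verts = 116640;                 // vertex count of the face rig (from this thread)
const blendshapes = 136;
const needed = verts * blendshapes;
console.log(texels);                  // 16777216
console.log(needed);                  // 15863040 -> fits in one texture
```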

amazing, thank you @JosephCoppola-FW & @jspdown

@JosephCoppola-FW this is working well for me on nvidia, but giving me strange hedgehog spikes on AMD / intel cards. did you run into this issue at all? any insight? thanks! :)

also, did you do anything around getting joints to work at the same time as manually updating the blendshapes via shader? re-implement joint skinning in the same shader maybe? that's next up for us.

best!

@imgntn To be honest I've only tested this on NVIDIA. Are you seeing the spikes on GPU or CPU?

In terms of joints, I figured we would update them normally, while keeping all blendshapes to the GPU.

@JosephCoppola-FW thanks for the response -

sorry if I was unclear: the "spikes" I was seeing were actual visual spikes of the vertices in the model! Like Hellraiser :)

had a pretty tough time tracking down what was going on, but changing
`float y = ((offset + iFloat) / 4096.);`
to
`float y = floor((offset + iFloat) / 4096.);`

fixed the issue on AMD / Intel cards. NVIDIA seems more forgiving.
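For anyone hitting the same artifact: the raw division leaves a fractional row index, which some GPUs' samplers tolerate and others don't. A plain-JS mirror of the shader's index math shows the difference:

```javascript
// Map a linear texel index to integer (x, y) on a texture of the given width,
// mirroring the shader's offset + blendshape-index arithmetic with floor().
function texelXY(index, width) {
  return { x: index % width, y: Math.floor(index / width) };
}

const idx = 136 * 4097;              // an example index past the first texture row
console.log(texelXY(idx, 4096));     // { x: 136, y: 136 } -- whole row number
console.log(idx / 4096);             // 136.033... -> fractional without floor()
```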

I tried changing some joints while also using this method for animating blendshapes and the skinning didn't work, but now that I've fixed this AMD bug I'll start working on that and keep you posted!

p.s. let me know if there's a better place to continue this conversation; I feel like I've hijacked the thread a bit, but hopefully the information is useful!

@imgntn Sounds good thanks for the update. I sent you a connection message on your LinkedIn, from there we can establish a better channel for communication. Thanks!

Hey @JosephCoppola-FW, I think I found a workaround. I've separated the parseGeometries method and the parseMaterials method into two different scripts. I call the geometry parser from a web worker, which prevents the UI lock. After that I parse the materials on the main thread, and when the worker is done parsing the geometry, I assign the materials to the objects by hand.

I will write a clean version of this approach. If you are still interested, I can upload the code.

Any updates on this @umurcg ?

@JosephCoppola-FW Hello, I have run into the same thing in my project. However, I have only been learning three.js for a few days, so I'm not sure whether your code matches my situation. Could you check the following requirement? It would also be great to get your contact email. Here is what I want to do:
I need to change 52 morphTargets to control the glTF model's face, and the morphTarget values come from a websocket, where a server calculates what the model needs to do (something like communicating with the model).

@SingleMai if you need help, please use the forum.
