Three.js: make loaders create indexed geometry

Created on 7 Aug 2017 · 15 comments · Source: mrdoob/three.js

You guys aim to deprecate Geometry, and to that end you remove it from things like OBJLoader. Unfortunately, the BufferGeometry you create is flat (non-indexed). If I want smooth normals with an OBJ model, I am forced to bloat my file and export them. Before, I could do

mesh.geometry.computeVertexNormals ();

which was a good CPU-vs-file-size trade-off. Now, I would have to do

var geometry = new THREE.Geometry ();
geometry.fromBufferGeometry (mesh.geometry);
geometry.mergeVertices ();
geometry.computeVertexNormals ();
mesh.geometry = geometry;

which wastes far more CPU, simply because you destroy the index information in the parser, and it will also break once you succeed in deprecating Geometry. BufferGeometry can keep this information, so why do you have to decide for me that I do not want it?

All 15 comments

oh look, we've been there before. I guess I will have to do the same thing as back then.

Why can't you directly run mesh.geometry.computeVertexNormals()? Does it work differently on a BufferGeometry?

ok,

BufferGeometry can keep this information

it actually cannot in two fairly common cases:

  • uv rips
  • hard edges

however, I think it should be possible to build an indexed BufferGeometry in a way that the original indices end up being used unless there is a reason not to.
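To make those two cases concrete, here is a minimal sketch (purely illustrative, not taken from any loader) of why a uv rip or a hard edge forces vertex duplication in an indexed BufferGeometry: a single index addresses position, normal and uv together, so a corner can only be shared when all of its attributes match.

// Two triangles sharing an edge. With smooth shading the shared corners
// can be indexed, so only 4 unique vertices are needed:
const smooth = {
    position: new Float32Array([
        0, 0, 0,   1, 0, 0,   0, 1, 0,   1, 1, 0
    ]),
    index: new Uint16Array([0, 1, 2,  2, 1, 3])
};

// With a hard edge (or a uv seam) the shared corners need different normals
// (or uvs), so they must be duplicated and indexing degenerates to 6 vertices:
const hard = {
    position: new Float32Array([
        0, 0, 0,   1, 0, 0,   0, 1, 0,   // triangle A
        0, 1, 0,   1, 0, 0,   1, 1, 0    // triangle B re-stores the shared edge
    ]),
    index: new Uint16Array([0, 1, 2,  3, 4, 5])
};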

for those who might be interested, here's the code I use to create said normals.

/**
 * Created by Alex Goldring on 29/05/2016.
 */
"use strict";

/**
 * 
 * @param {Array.<Number>} normals
 */
function normalizeNormals(normals) {
    let x, y, z, n;

    let i = 0;
    const il = normals.length;
    for (; i < il; i += 3) {

        // extract vector components
        x = normals[i];
        y = normals[i + 1];
        z = normals[i + 2];

        // compute vector inverse magnitude
        n = 1.0 / Math.sqrt(x * x + y * y + z * z);

        normals[i] *= n;
        normals[i + 1] *= n;
        normals[i + 2] *= n;

    }
}

/**
 * based on code from THREE.js
 * normals need to be set to 0
 * @param {Array.<Number>} vertices
 * @param {Array.<Number>} normals
 * @param {Array.<Number>} indices
 */
function computeVertexNormals(vertices, normals, indices) {
    let vA, vB, vC;

    let vAx, vAy, vAz, vBx, vBy, vBz, vCx, vCy, vCz;

    let vCBx, vCBy, vCBz, vABx, vABy, vABz;

    let crossX, crossY, crossZ;
    // indexed elements
    let i = 0;
    const il = indices.length;
    for (; i < il; i += 3) {

        vA = indices[i] * 3;
        vB = indices[i + 1] * 3;
        vC = indices[i + 2] * 3;

        //obtain vertex coordinates
        vAx = vertices[vA];
        vAy = vertices[vA + 1];
        vAz = vertices[vA + 2];

        vBx = vertices[vB];
        vBy = vertices[vB + 1];
        vBz = vertices[vB + 2];

        vCx = vertices[vC];
        vCy = vertices[vC + 1];
        vCz = vertices[vC + 2];

        //compute CB and AB vectors
        vCBx = vCx - vBx;
        vCBy = vCy - vBy;
        vCBz = vCz - vBz;

        vABx = vAx - vBx;
        vABy = vAy - vBy;
        vABz = vAz - vBz;

        //compute triangle normal
        crossX = vCBy * vABz - vCBz * vABy;
        crossY = vCBz * vABx - vCBx * vABz;
        crossZ = vCBx * vABy - vCBy * vABx;

        normals[vA] += crossX;
        normals[vA + 1] += crossY;
        normals[vA + 2] += crossZ;

        normals[vB] += crossX;
        normals[vB + 1] += crossY;
        normals[vB + 2] += crossZ;

        normals[vC] += crossX;
        normals[vC + 1] += crossY;
        normals[vC + 2] += crossZ;
    }

    normalizeNormals(normals);
}

export default {
    computeNormals: computeVertexNormals
};
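For context, a quick usage sketch of the module above. This is only a sketch under a few assumptions: the import path is hypothetical, the quad data is made up, and setIndex/addAttribute are used as in the three.js API of that era.

import * as THREE from 'three';
import NormalBuilder from './computeVertexNormals.js';

// a single indexed quad; in practice positions and indices come from the parsed OBJ data
const positions = new Float32Array([
    0, 0, 0,   1, 0, 0,   0, 1, 0,   1, 1, 0
]);
const indices = new Uint16Array([0, 1, 2,  2, 1, 3]);

// normals must start out zeroed, as noted in the doc comment above
const normals = new Float32Array(positions.length);
NormalBuilder.computeNormals(positions, normals, indices);

const geometry = new THREE.BufferGeometry();
geometry.setIndex(new THREE.BufferAttribute(indices, 1));
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.addAttribute('normal', new THREE.BufferAttribute(normals, 3));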

I now have a fully working version of OBJLoader2 that is able to load all OBJ file data into indexed BufferGeometry, see #12048 and https://github.com/kaisalmen/three.js/tree/OBJLoader2_V2

It all boils down to this (see code):
When the face instructions in the OBJ are parsed, a key is built from the vertex, uv and normal indices. If that key does not exist yet, the values are pushed to the vertex, uv and normal arrays respectively, and the resulting pointer is stored under the key and pushed to the indices array. If the key does exist, the stored value is retrieved and appended to the indices array (= no extra storage).
OBJLoader re-orders an object's vertices/uvs/normals by material and smoothing group, therefore the indices need to be adjusted before they are stored in the final ArrayBuffers.
Multi-materials gave me a headache, because the group start and length need to reference the indices and not the vertices (see code). This sounds perfectly logical now, but it wasn't obvious, because the meshes seemed to be missing vertices when it was really just wrong material group settings.
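A minimal sketch of that deduplication idea (my own simplification, not the actual OBJLoader2 code; it ignores materials and smoothing groups and assumes every face corner is the usual 1-based v/vt/vn triple):

// map "v/vt/vn" key -> pointer into the output vertex list
const knownCorners = new Map();
const outPositions = [], outUvs = [], outNormals = [], outIndices = [];

// objPositions/objUvs/objNormals are the raw, separately indexed OBJ arrays
function addCorner(key, objPositions, objUvs, objNormals) {
    let pointer = knownCorners.get(key);
    if (pointer === undefined) {
        // first time this exact v/vt/vn combination is seen: copy its data
        const [v, vt, vn] = key.split('/').map(s => parseInt(s, 10) - 1);
        outPositions.push(objPositions[v * 3], objPositions[v * 3 + 1], objPositions[v * 3 + 2]);
        outUvs.push(objUvs[vt * 2], objUvs[vt * 2 + 1]);
        outNormals.push(objNormals[vn * 3], objNormals[vn * 3 + 1], objNormals[vn * 3 + 2]);
        pointer = outPositions.length / 3 - 1;
        knownCorners.set(key, pointer);
    }
    // repeated corners cost only one index entry (= no extra storage)
    outIndices.push(pointer);
}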

See the following examples using indexed BufferGeometry:
obj Indexed
Bigger OBJ files stage (External) Indexed

I added stats to the log showing what was saved (= multiple definitions).
Stats from Female02 (first example), 72% fewer vertices:

Overall counts: 
    Vertices: 5057 
    Faces: 18699 
    Multiple definitions: 13642 (=not stored)

Stats from the kitchen sink (second example), 78% fewer vertices:

Overall counts: 
    Vertices: 2029188 
    Faces: 9549876 
    Multiple definitions: 7520688

Unfortunately, the BufferGeometry you create is flat (non-indexed). If I want smooth normals with an OBJ model, I am forced to bloat my file and export them.

The thing is... OBJs have 3 different "index buffers": one for vertex positions, one for normals and one for uvs. WebGL only supports 1 index buffer. So it's basically impossible to retain the original mapping when you're also trying to be performant and memory efficient.
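To illustrate what those three index streams look like in the file, a short sketch (the index values are made up):

// One OBJ face line references three separate index streams per corner:
//   f 1/1/1  2/3/1  3/4/2      -> position/uv/normal, all 1-based
const face = [
    { v: 1, vt: 1, vn: 1 },
    { v: 2, vt: 3, vn: 1 },
    { v: 3, vt: 4, vn: 2 }
];
// A single WebGL index cannot express this: one index has to pick the same
// slot in the position, uv and normal attributes at once.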

But you are able to store only the unique face descriptions (at least per OBJ group in the OBJLoader2 implementation). That at least reduces the required amount of memory without destroying information.
Let's take the numbers from female02 above. The unindexed loader stores 18699*3*4 bytes of vertices, 18699*3*4 bytes of normals and 18699*2*4 bytes of uvs = 598,368 bytes.

The indexed version detected 5057 unique combinations of v/uv/n face definitions: 5057*3*4 + 5057*3*4 + 5057*2*4 bytes = 161,824 bytes, plus 18699*4 bytes of index data, which results in 236,620 bytes total. That saves roughly 60 percent of the memory.

There are face descriptions without normals or uvs that I did not consider in the calculation above, but they will not significantly change the result.
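For anyone who wants to check that arithmetic, a quick back-of-the-envelope calculation of my own (assuming 32-bit floats and 32-bit indices, as in the figures quoted):

const faceCorners = 18699;   // total face-vertex records in Female02
const unique = 5057;         // unique v/uv/n combinations

// position (3) + normal (3) + uv (2) floats per vertex, 4 bytes each
const unindexed = faceCorners * (3 + 3 + 2) * 4;            // 598368 bytes
const indexed = unique * (3 + 3 + 2) * 4 + faceCorners * 4; // 161824 + 74796 = 236620 bytes

console.log(1 - indexed / unindexed);                       // ~0.60 -> roughly 60% saved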

Ratios are better with quad-based models.

Woops!

My point is that, if we want to retain the structure of an OBJ, we'll have to create a new geometry structure that can have multiple indices.

I think WebGL 2.0 had something that accommodated this, but I can't remember what it was.

Ah ok, I don't know. I have to investigate, too.
Anyway, a more detailed explanation of what is done helps.

Anyway, a more detailed explanation of what is done helps.

Did I provide enough information? Or should I try to explain in more detail?

Oh, sorry, I was referring to my own explanation of how OBJLoader2 performs the index creation. I will check whether independent buffers for vertices, normals and uvs are possible with WebGL 2. That's what you meant, right?

I will check whether independent buffers for vertices, normals and uvs are possible with WebGL 2. That's what you meant, right?

Yep!

@mrdoob and @makc I promised an answer and did not provide it yet. Here it is.

The answer is simple: WebGL 2.0 only allows a single index buffer.

Regarding OpenGL I found this: AMD_interleaved_elements is an extension for the OpenGL 4.3 Core Profile. If I understand the specs correctly, the maximum size of the packed per-vertex indices is rather small and therefore not suited for large meshes:

This extension allows OpenGL to process multi-component packed element data. The maximum size of a vertex's index data is not increased, but the facility to store 2 16-bit or 2 or 4 8-bit indices per vertex is introduced.

Another idea is to write a custom shader that accesses the different indices from a buffer texture in the vertex shader, but this would decrease rendering performance (see this Stack Overflow post).
