Saturday, March 24, 2007

Outline for a Mesh Object Architecture

The idea for this comes both from my work at NVIDIA and from my own prior frustrations in creating a flexible, fast and object-oriented framework for representing meshes in a 3D application. The framework I want to present is very simple in concept and reflects a tendency of structure that many 3D file formats exhibit (specifically the more modern ones that I have seen: FBX, COLLADA, X File).

Meshes consist of vertices, normals, texture coordinates, tangents, binormals, vertex colours, etc. Some of these are not always necessary; however, having the ability to load all of this information whenever an artist feels like it is still very useful. Meshes also consist of polygons (usually a big mess of triangles), which can be represented by appropriately arranging the sets of information previously mentioned. Lastly (note: I'm not going to be talking about bones and animation in this posting), meshes have materials associated with them. All of this information is easy enough to digest; the more annoying part is piecing it together with the right structure. For example, we could have a "Vertex" object which contains attributes for its position, colour, normal, etc., associate that with a "Polygon" object, which could have a type (e.g., triangle, quad), and finally associate a whole bunch of those with a "Mesh" object. Having done this before, I can say there are pitfalls to such an approach. For one, it's not very fast - representing everything as objects creates tons of nested drawing calls from one object to the next, and code gets bloated quickly due to object composition. Furthermore, there are many cases where polygons share vertices or other significant attributes, just as vertices may share normals, and a naive object representation duplicates that information unnecessarily. Tracking and monitoring objects to avoid this is a huge pain, and without smart pointers it can get really messy in an unmanaged language like C++.

A better solution is just to avoid bulky OO relationships altogether when it comes to storing vertex-related values as well as polygons. It is better to represent such information as streams of data and indices into those streams. Vertex-related attributes like position, normals, texture coordinates, etc. can each be separated into their own stream/array of data; that data can then be indexed to retrieve information without having to duplicate it (or manage it) for separate objects. Polygon representations can then be simple objects that hold arrays of indices into the streams that are relevant to representing that particular polygon (see the diagram below for an illustration of this).

In the above diagram a Mesh contains a variable set of information streams pertaining to how it is defined. These streams are inconsequential, however, without the definition of one or more PolygonGroups, which are the base unit for separating one type of surface on the mesh from another (since each is associated with exactly one Material). Consider a mesh representing a spoon with a wooden handle; we would want the handle to have one material (a wood material) and the rest to have another (a metal material, perhaps). In such an example these two parts of the spoon would be represented by separate PolygonGroups, each with its own material (which could be shared amongst other PolygonGroups). Each PolygonGroup is then built up of a set of Polygons (these are not shared between groups). An important point is that each Polygon contains a set of interleaved indices that refer to the streams belonging to the mesh it is a part of. Though it is not specified in the diagram, I believe it should be flexible as to which streams a PolygonGroup indexes. So, for example, if there were streams for position, normal, texture and tangent, a PolygonGroup might tell its set of Polygons to index only the position and normal (or some other subset of what is available). The reason I chose the PolygonGroup level and not the Polygon level is that this happens to be the structure in COLLADA files, and also because it tends to make more sense - there are few or no situations where some random polygon (that isn't already part of a larger or equal-sized group associated with a distinct material) has a preference in exactly what data it needs to build itself. One final point concerns the MeshInstance object. Its purpose is to illustrate the possibility of instancing meshes, each with different possible materials associated with them, etc. Such instances could then be placed into a structure like a scene graph so that less information needs to be stored in memory.

Below is a completely hypothetical, pseudocode implementation of the creation of a mesh object to better illustrate this architecture.
Mesh mesh = new Mesh();

// Note: position and normal arrays are just arrays of 3 vectors
Stream posStream = new Stream(InfoType.POSITION, posArray);
Stream normStream = new Stream(InfoType.NORMAL, normArray);
// ... other streams

// The following is an example of just one polygon group
// being created
PolygonGroup polyGrp = new PolygonGroup(PolygonType.TRIANGLES);
Polygon[] polygons = new Polygon[NUM_POLYS];

for (int i = 0; i < NUM_POLYS; i++) {
    polygons[i] = new Polygon();
    Indexer posIndices = new Indexer(InfoType.POSITION);
    Indexer normIndices = new Indexer(InfoType.NORMAL);
    for (int j = 0; j < 3; j++) {
        posIndices[j] = positionsFromFile[i][j];
        normIndices[j] = normalsFromFile[i][j];
    }
    polygons[i].addIndices(posIndices);
    polygons[i].addIndices(normIndices);
}

Material mat = new Material(...);
polyGrp.setPolyArray(polygons);
polyGrp.setMaterial(mat);

mesh.addStream(posStream);
mesh.addStream(normStream);
mesh.addPolygonGroup(polyGrp);