Friday, June 22, 2007

Raytracing and Design Patterns

While working on a ray tracer I found a pretty convenient design pattern for applying custom material types to the fragments generated when view rays collide with scene geometry. The figure below shows a general outline of what the pattern looks like in the context of the ray tracer.


The idea here is that different materials require different parameters for shading a particular pixel (e.g., basic Phong shading may only require a normal, while something more complex, like Greg Ward's full Bidirectional Reflectance Distribution Function (BRDF), may require a tangent and binormal as well). To accommodate this, there are two hierarchies of objects: fragments, which hold the appropriate parameters for specific material types, and the material types themselves. While ray tracing, geometry in the scene is tested against rays from the eye for collisions. When a collision occurs, the geometry involved should know 1) what it is (a mesh or primitive) and 2) its respective material. Using that material we can then figure out what kind of fragment to generate - the material is responsible for this, since it knows what parameters it needs from the fragment and is therefore capable of generating the correct kind of fragment.
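To make this concrete, here is a minimal C++ sketch of the two hierarchies. The class and member names are my own, invented for illustration; the point is that each material acts as a factory for the fragment type that carries exactly the parameters it needs.

```cpp
#include <memory>

struct Vec3   { float x, y, z; };
struct Colour { float r, g, b; };

class Fragment;  // forward declaration

// Base class for all material types. Each material acts as a factory
// for the fragment type carrying the parameters it needs.
class Material {
public:
    virtual ~Material() {}
    virtual std::unique_ptr<Fragment> createFragment() const = 0;
};

// Base class for all fragment types. A fragment holds the shading
// parameters gathered at the collision point.
class Fragment {
public:
    virtual ~Fragment() {}
    virtual Colour render() const = 0;
    Vec3 position, normal, viewDir;  // parameters every material needs
};

// Phong only needs the common parameters...
class PhongFragment : public Fragment {
public:
    Colour render() const override { /* Phong shading here */ return Colour{0, 0, 0}; }
};

class PhongMaterial : public Material {
public:
    std::unique_ptr<Fragment> createFragment() const override {
        return std::make_unique<PhongFragment>();
    }
};

// ...while Ward's BRDF also needs a tangent frame.
class WardFragment : public Fragment {
public:
    Vec3 tangent, binormal;
    Colour render() const override { /* Ward BRDF here */ return Colour{0, 0, 0}; }
};

class WardMaterial : public Material {
public:
    std::unique_ptr<Fragment> createFragment() const override {
        return std::make_unique<WardFragment>();
    }
};
```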
When pixel colouring is done later in the ray tracing program, the colour can be derived from fragments polymorphically, without having to worry about whether the appropriate parameters are available to feed into the material calculations. A simple call to the Fragment's render function is all that is required.
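In code, the colouring step might look something like this - a hypothetical sketch reusing the classes above, where the Hit record and shade function are also my own invention:

```cpp
// Hypothetical intersection record produced by the collision test.
struct Hit {
    Vec3 position, normal;
    const Material* material;
};

Colour shade(const Hit& hit, const Vec3& viewDir) {
    // The material manufactures the fragment type it knows how to shade.
    std::unique_ptr<Fragment> frag = hit.material->createFragment();
    frag->position = hit.position;
    frag->normal   = hit.normal;
    frag->viewDir  = viewDir;
    // Material-specific parameters (tangent, binormal, ...) would be
    // filled in by the intersection code; see the Visitor note below.
    return frag->render();  // one virtual call, no type switching
}
```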

Advantages
- Dynamic accommodation of materials by fragments
- Saves memory: each fragment stores only the parameters its material needs
- Scales to new materials

Disadvantages
- Very high coupling between Fragment and Material, and between the two hierarchies in general
- Slowdown due to virtual table look-ups (polymorphism)

Improvements, Additions and Side Notes
Obviously you will be passing other parameters to functions like render; for example, you will probably need to pass the scene graph for things like recursive rays. You will definitely need to accommodate lights somewhere (probably as another parameter to the render function).
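For example, the render interface from the sketch above might grow into something like this (the Scene type and depth parameter are assumptions on my part, not part of the original design):

```cpp
struct Colour { float r, g, b; };
class Scene;  // owns the scene graph and the list of lights

class Fragment {
public:
    virtual ~Fragment() {}
    // The scene gives access to geometry (for recursive reflection and
    // refraction rays) and to the lights; depth caps the recursion.
    virtual Colour render(const Scene& scene, int depth) const = 0;
};
```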
One question that might arise is "how do I know what parameters to extract when I'm performing the actual collision itself?" For example, let's say I have a sphere primitive and my eye ray collides with it - how do I know which parameters a specific material wants, other than the ones I would always want (i.e., position of collision, normal, view direction)? This could be handled by introducing a method for every primitive type into the Fragment class hierarchy (the ugly, slow way), or, in a more object-oriented, design-pattern approach, by using the Visitor pattern: a separate visitor hierarchy with a method for dealing with each primitive type in relation to every fragment type, as sketched below. Either way, it becomes complicated to maintain any real object-orientedness here.
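Here is a rough sketch of the Visitor variant; all type and method names are illustrative, not from the original design. Each primitive accepts a builder, and each concrete builder knows how to pull its material's parameters out of each kind of primitive.

```cpp
class Sphere;
class Mesh;

// One visit method per primitive type. A concrete builder knows how
// to extract its material's parameters from each kind of primitive.
class FragmentBuilder {
public:
    virtual ~FragmentBuilder() {}
    virtual void visit(const Sphere& sphere) = 0;
    virtual void visit(const Mesh& mesh) = 0;
};

class Primitive {
public:
    virtual ~Primitive() {}
    virtual void accept(FragmentBuilder& builder) const = 0;
};

class Sphere : public Primitive {
public:
    void accept(FragmentBuilder& builder) const override { builder.visit(*this); }
};

class Mesh : public Primitive {
public:
    void accept(FragmentBuilder& builder) const override { builder.visit(*this); }
};

// A Ward material's builder derives the tangent frame differently
// depending on what was hit.
class WardFragmentBuilder : public FragmentBuilder {
public:
    void visit(const Sphere& sphere) override { /* tangent from spherical coords */ }
    void visit(const Mesh& mesh) override { /* tangent from per-vertex UVs */ }
};
```

This is the classic double-dispatch trade-off: adding a new primitive type means touching every builder, and adding a new material-specific parameter set means writing a builder with a method for every primitive.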