In this assignment you will implement your own volume renderer. I will provide several volume data sets in the form of 3-D grids. Your job is to write the software to read that data and display an arbitrary perspective projection of it with attenuation due to light absorption and single scattering.
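For reference, one common way to formalize attenuation plus single scattering is the emission-absorption integral along each viewing ray (this is just one standard formulation, not a requirement):

    I = \int_0^D C(t)\,\sigma(t)\,\exp\!\left(-\int_0^t \sigma(s)\,ds\right) dt

Here sigma(t) is the extinction (absorption) coefficient at distance t along the ray, and C(t) is the color scattered toward the eye at that point; for single scattering, C(t) is the light reflected there toward the viewer, often shaded using the local density gradient as a surface normal. Discretized, this becomes the familiar front-to-back "over" compositing of samples along the ray.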
The details of exactly how you do shading are up to you, but the techniques discussed by Drebin et al. (SIGGRAPH '88) are highly recommended.
The choice of rendering algorithm (e.g. ray tracing, painter's, splatting, etc.) is also up to you, but please do true volume rendering -- don't convert to a surface and do surface rendering. Ray tracing is probably the easiest way to go. Your pictures should have minimal visible artifacts, they should show the data in a useful way, and they should look nice.
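If you take the ray tracing route, the heart of the renderer can be as small as the discrete version of the integral above: step along each ray, front to back, compositing classified samples. The sketch below is one minimal way to write that loop; sample_opacity, sample_color, and shade_sample are hypothetical names standing in for your own interpolation, classification, and shading code.

    /* One possible core loop: front-to-back "over" compositing of one ray
     * through a classified volume.  The three extern functions are
     * hypothetical placeholders for your own code. */

    typedef struct { float r, g, b; } Color;

    extern float sample_opacity(const float p[3]);          /* hypothetical */
    extern Color sample_color(const float p[3]);            /* hypothetical */
    extern Color shade_sample(Color c, const float p[3]);   /* hypothetical */

    Color trace_ray(const float eye[3], const float dir[3],
                    float t0, float t1, float dt)
    {
        Color result = {0.0f, 0.0f, 0.0f};
        float transmittance = 1.0f;    /* fraction of light still unabsorbed */
        float t, p[3];

        for (t = t0; t < t1 && transmittance > 0.01f; t += dt) {
            float alpha;
            Color c;

            p[0] = eye[0] + t * dir[0];
            p[1] = eye[1] + t * dir[1];
            p[2] = eye[2] + t * dir[2];

            alpha = sample_opacity(p);   /* e.g. trilinear interpolation */
            if (alpha <= 0.0f) continue;

            c = shade_sample(sample_color(p), p);   /* single scattering here */

            /* front-to-back "over" compositing */
            result.r += transmittance * alpha * c.r;
            result.g += transmittance * alpha * c.g;
            result.b += transmittance * alpha * c.b;
            transmittance *= 1.0f - alpha;
        }
        return result;
    }

Early termination once the transmittance is nearly zero (the 0.01 test above) is optional but saves time on dense data.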
See /afs/cs/project/classes-ph/862.95/pub/p2/README for pointers to some volume data files and software to read them. Test your volume renderer on this data, your own volume data, or a procedural volumetric function of your own creation.
An easy way to create some interesting volume data is to take a colorful 24-bit picture and compute its RGB color histogram. This gives you 256^3 voxels; if you quantize the colors to 7 bits per channel before histogramming, you get 128^3 voxels, which uses less memory. The standard Mandrill image has a good histogram.
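A sketch of that histogram construction, assuming the image has already been read into an array of 8-bit RGB triples (reading the image file is omitted, and the function name rgb_histogram is just illustrative):

    #include <stdlib.h>

    /* Build a 128x128x128 histogram volume from a 24-bit RGB image.
     * pixels holds width*height RGB triples, 8 bits per channel.
     * Each channel is quantized to 7 bits, so a pixel's (r,g,b)
     * indexes one of 128^3 voxels, which is incremented. */
    unsigned short *rgb_histogram(const unsigned char *pixels, int width, int height)
    {
        const int n = 128;
        unsigned short *vol = calloc((size_t)n * n * n, sizeof(unsigned short));
        int i;

        if (vol == NULL) return NULL;

        for (i = 0; i < width * height; i++) {
            int r = pixels[3*i + 0] >> 1;   /* 8 bits -> 7 bits */
            int g = pixels[3*i + 1] >> 1;
            int b = pixels[3*i + 2] >> 1;
            int index = (r * n + g) * n + b;
            if (vol[index] < 65535) vol[index]++;   /* saturate, don't wrap */
        }
        return vol;
    }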
The minimal interface would be to read the volume data and some camera parameters and write out a picture file. Optionally, you could create an interactive front end that allows the user to spin the volume around (with crude rendering) at interactive rates, and then hit a button to run your volume renderer, which will hopefully take only a few seconds. For a simple front end using Xforms and OpenGL, see pub/p2/src/spin.
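For the non-interactive version, the camera parameters could simply be a handful of numbers in a small text file; the struct and file format below are only a suggestion, not a required interface:

    #include <stdio.h>

    /* A possible camera description: eye point, look-at point, up vector,
     * vertical field of view in degrees, and output image size. */
    typedef struct {
        float eye[3], lookat[3], up[3];
        float fov_degrees;
        int   width, height;
    } Camera;

    /* Read twelve whitespace-separated numbers from a text file.
     * Returns 1 on success, 0 on failure. */
    int read_camera(const char *filename, Camera *cam)
    {
        FILE *f = fopen(filename, "r");
        int ok;

        if (f == NULL) return 0;
        ok = fscanf(f, "%f %f %f %f %f %f %f %f %f %f %d %d",
                    &cam->eye[0],    &cam->eye[1],    &cam->eye[2],
                    &cam->lookat[0], &cam->lookat[1], &cam->lookat[2],
                    &cam->up[0],     &cam->up[1],     &cam->up[2],
                    &cam->fov_degrees, &cam->width, &cam->height) == 12;
        fclose(f);
        return ok;
    }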
You are free to borrow other people's code for viewing, user interfaces, and volume data input, but the rendering code should be your own.
Turn in, in students/YOURNAME/p2: source code, pictures, an executable, and a README file. Your source code should have comments labeling which parts you wrote and which parts were borrowed from others (if any), and comment your code so I can tell the purpose of each subroutine, structure, or class.

Generate at least one picture of each of two different volume models, one of them being pub/p2/data/vdic. It is OK to render downsampled volume data (e.g. you could shrink the vdic data by a factor of 2 in x and y), but don't downsample so much that the information content of the volume data is totally lost. Pictures can be in any file format that I can display (e.g. SGI's RGB, TIFF, JPEG, PPM). Make your pictures color by choosing some (semi-arbitrary, but hopefully artistic and meaningful) assignment of colors as a function of voxel value; one simple way to do this is sketched below. Pictures should be free of aliasing or other artifacts. Generate your pictures at 24 bits per pixel; on most of our SGIs, double buffering reduces the pictures to 12 bits dithered, so turn off double buffering when generating the pictures you turn in, to get 24 bpp. Executables need not be for an SGI (although I prefer those, for ease of testing).

The README file should describe what you've done, how your algorithm works, what papers or books you got your algorithm from (if any), what problems you encountered, the compute time to generate each picture, the command to type to run your program and reproduce your pictures, and a summary of what's in your directory.
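One simple way to assign colors (and opacities) as a function of voxel value is a 256-entry lookup table interpolated between a few hand-picked control colors. The sketch below is only an illustration, and the particular control colors are arbitrary:

    /* Build a 256-entry color/opacity table by linear interpolation between
     * a few hand-picked control points; voxel value v then maps to table[v].
     * The control colors below are arbitrary examples. */
    typedef struct { float r, g, b, a; } RGBA;

    static const RGBA control[4] = {
        { 0.0f, 0.0f, 0.0f, 0.00f },   /* value   0: transparent black   */
        { 0.8f, 0.3f, 0.1f, 0.05f },   /* value  85: faint red           */
        { 0.9f, 0.8f, 0.3f, 0.40f },   /* value 170: translucent yellow  */
        { 1.0f, 1.0f, 1.0f, 0.90f }    /* value 255: nearly opaque white */
    };

    void build_transfer_table(RGBA table[256])
    {
        int v;
        for (v = 0; v < 256; v++) {
            float x = v / 255.0f * 3.0f;   /* position among the 4 controls */
            int   i = (int)x;              /* segment index                 */
            float f;
            if (i > 2) i = 2;              /* clamp so v = 255 uses the last segment */
            f = x - i;                     /* fraction within the segment   */
            table[v].r = (1.0f - f) * control[i].r + f * control[i + 1].r;
            table[v].g = (1.0f - f) * control[i].g + f * control[i + 1].g;
            table[v].b = (1.0f - f) * control[i].b + f * control[i + 1].b;
            table[v].a = (1.0f - f) * control[i].a + f * control[i + 1].a;
        }
    }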
Paul Heckbert