STANDARD PROCEDURAL DATABASES, by Eric Haines, 3D/Eye, Inc.

[Created while under contract to Hewlett-Packard FSD and HP Laboratories]
Version 3.6, as of 4/3/95
    address: 3D/Eye, Inc., 1050 Craft Road, Ithaca, NY 14850
    email: erich@eye.com

This software package is not copyrighted and can be used freely.

History
-------
Versions 1.0 to 1.5 released February to June, 1987 for testing.
Version 2.0 released July, 1987.
Version 2.1 released August, 1987 - corrected info on speed of the HP 320,
    other minor changes to README.
Version 2.2 released November, 1987 - shortened file names to <=12 characters,
    procedure names to <= 32 characters, and ensured that all lines are <= 80
    characters (including return character).
Version 2.3 released March, 1988 - corrected gears.c to avoid interpenetration,
    corrected and added further instructions and global statistics for ray
    tracing to README.
Version 2.4 released May, 1988 - fixed hashing function for mountain.c.
Version 2.5 released October, 1988 - added NFF documentation.
Version 2.6 released August, 1989 - lib_output_cylcone fix (start_norm.w was
    not initialized).
Version 2.7 released July, 1990 - comment correction in lib.c, NFF file
    clarifications.
Version 3.0 released October, 1990 - added teapot.c database, mountain.c
    changed to mount.c, additions to def.h and lib.c, changes to README and
    NFF, added patchlevel.h file.
Version 3.1 released November, 1992 - minor typo fixes, updated FTP list,
    makefile update.
Version 3.1a released November, 1993 - readnff added, lib split into multiple
    smaller files, Mac code updated.
Version 3.2 released May, 1994 - added RIB output, readdxf added, more updates.
Version 3.3 released June, 1994 - added DXF output, more lib splitting, etc.
Version 3.3f4 released August, 1994 - fixes to Mac files (see README.MAC),
    error handling for Mac added to readnff program.
Version 3.4 released October, 1994 - Larry Gritz fixed RenderMan output.
Version 3.5 released November, 1994 - Alexander Enzmann fixed PLG output and
    added Wavefront OBJ output and RenderWare RWX output, and Wavefront OBJ
    reader "readobj".  Deleted Mac binary files from distribution (too much
    trouble, and a Mac sit version is distributed separately).
Version 3.6 released April, 1995 - vertex order fixes by Enzmann and Michael
    Jones, Enzmann added jacks and transformation support, other minor cleanup.
    drv_hp.c changed to reflect changes to view transform output.
Version 3.7 - typo fix in NEW_HASH area of mount.c


{These files use tab characters worth 8 spaces}


Introduction
------------

    This software is meant to act as a set of basic test images for ray
tracing algorithms.  The programs generate databases of objects which are
fairly familiar and "standard" to the graphics community, such as the teapot,
a fractal mountain, a tree, a recursively built tetrahedral structure, etc.  I
originally created them for my own testing of ray tracing efficiency schemes.
Since their first release, other researchers have used them to test new
algorithms.  In this way, research on algorithmic improvements can be compared
on a more standardized basis.  If one researcher ray-traces a car, another a
tree, the question arises, "How many cars to the tree?"  With these databases
we may be comparing oranges and apples ("how many hypercubes to a timeshared
VAX?"), but it's better than comparing oranges and orangutans.  In addition,
this document outlines some statistics that are more meaningful to researchers
than raw timing tests.  Using these statistics along with the same scenes
allows us to compare results in a more meaningful way.

    With the development and release of the Anderson benchmarks for graphics
hardware, the use of the SPD package for hardware testing is somewhat
redundant.  Therefore I have deleted references to testing hidden-surface
algorithms in this version.  However, another interesting use for the SPD has
been noted:  debugging.  By comparing the images and the statistics with the
output of your own ray tracer, you can detect program errors.  For example,
"mount" is useful for checking if refraction rays are generated correctly, and
"balls" can check for the correctness of eye and reflection rays.

    The images for these databases and other information about them can be
found in "A Proposal for Standard Graphics Environments," IEEE Computer
Graphics and Applications, vol. 7, no. 11, November 1987, pp.  3-5.  See
IEEE CG&A, vol. 8, no. 1, January 1988, p. 18 for the correct image of the
tree database (the only difference is that the sky is blue, not orange).  The
teapot database was added later, and consists of a shiny teapot on a shiny
checkerboard.

    The SPD package is available via anonymous FTP from:

	[ftp.]princeton.edu:/pub/Graphics/SPD
	avalon.chinalake.navy.mil:/pub/utils/SPD
	wuarchive.wustl.edu:/graphics/graphics/SPD (I think! Hard to get on!)

among others.  For those without FTP access, write to the netlib automatic
mailer:  research!netlib and netlib@ornl.gov are the sites.  Send a one-line
message "send index" for more information, or "send haines from graphics" for
the latest version of the SPD package.


File Structure
--------------

    A variety of procedural database generators are included.  The
original programs were:  balls, gears, mount, rings, tetra, and tree, and so
these are more commonly used for performance testing.  The teapot generator
was added by popular request (what with it being the sixth Platonic solid
and all).  These models were designed to span a fair range of primitives,
modeling structures, lighting and surface parameters, background conditions,
and other factors.  There are also shell and lattice generators just for
amusement, and a sombrero generator to show off height field output (height
fields are not a part of NFF, but are useful for other types of output).
A complexity factor is provided within each program to control the size of the
database generated.  See the MANIFEST for a description of each file.

    The compiled and linked programs will output a database in ASCII to stdout
containing viewing parameters, lighting conditions, material properties, and
geometric information.  The data format is called the 'neutral file format'
(or NFF) and is described in the `NFF' file.  This format is meant to be
minimal and easy to attach to a user-written filter program which will convert
the output into a file format of your choosing.
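
    As an illustration (this is not code from the package), a minimal sketch
of such a filter is shown below.  It assumes the sphere entity syntax
described in the NFF file ("s" followed by the center coordinates and radius)
and simply echoes spheres in a made-up format, skipping everything else; a
real converter would of course handle the remaining entity types as well.

    /* spherefilter.c - illustrative sketch of an NFF filter, not SPD code */
    #include <stdio.h>

    int main(void)
    {
        char line[256];
        double cx, cy, cz, r;

        while (fgets(line, sizeof(line), stdin) != NULL) {
            /* sphere entity: "s" followed by center and radius */
            if (line[0] == 's' && sscanf(line + 1, "%lf %lf %lf %lf",
                                         &cx, &cy, &cz, &r) == 4)
                printf("sphere (%g %g %g) radius %g\n", cx, cy, cz, r);
            /* a real converter would handle the other NFF entities
               (viewpoint, background, light, fill, cone/cylinder, polygon,
               polygonal patch) here as well */
        }
        return 0;
    }

For example, "balls | spherefilter" would list only the spheres in the
sphereflake database.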

    Either of two sets of primitives can be selected for output.  If
OUTPUT_FORMAT is defined as OUTPUT_CURVES, the primitives are spheres, cones,
cylinders, and polygons.  If OUTPUT_FORMAT is set to OUTPUT_PATCHES, the
primitives are output as polygonal patches and polygons (i.e.  all other
primitives are polygonalized).  In this case OUTPUT_RESOLUTION is used to set
the amount of polygonalization of non-polygonal primitives.  In general,
OUTPUT_CURVES is used for ray-trace timing tests, and OUTPUT_PATCHES for
polygon-based algorithm timings.  Note that sphere primitives are not
polygonalized using the simple longitude/latitude tessellation, but rather are
chopped along the six faces of a cube projected onto the sphere.  This
tessellation avoids generation of the cusp points at the sphere's poles and so
eliminates discontinuities due to them.
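
    As a rough illustration of the cube-projection idea (this is not the
library's own tessellation code; see the lib source files for the real
routine), grid points on one cube face can simply be normalized onto the
unit sphere, so no face produces a pole cusp:

    /* Illustrative only: project a grid of points from the +Z face of a
       cube onto the unit sphere.  The other five faces follow by symmetry.
       Normalizing removes the pole cusps that a longitude/latitude
       tessellation would create. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int res = 4;    /* stands in for OUTPUT_RESOLUTION */
        int i, j;

        for (i = 0; i <= res; i++)
            for (j = 0; j <= res; j++) {
                double u = -1.0 + 2.0 * i / res;
                double v = -1.0 + 2.0 * j / res;
                double len = sqrt(u * u + v * v + 1.0);
                /* grid point (u, v, 1) pushed out to radius 1 */
                printf("%g %g %g\n", u / len, v / len, 1.0 / len);
            }
        return 0;
    }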

    The size factor is used to control the overall size of the database.
Default values have been chosen so that each database contains the largest
number of primitives possible while staying under 10,000.  One purpose of the
size factor is to avoid
limiting the uses of these databases.  Depending on the research being done
and the computing facilities available, a larger or smaller number of
primitives may be desired.  The size factor can also be used to show how an
algorithm's time changes as the complexity increases.

    To generate the databases, simply type the name of the database and direct
the output as desired, e.g. "balls > balls.nff" creates the default sized
database and sends the output to balls.nff.  A new feature in 3.0 is that you
can enter the size factor on the command line, e.g.  "balls 2 > balls.nff"
gives a much smaller database of 91 spheres.  You can also decide to output in
true curved or polygonal mesh modes, and can choose from a variety of
output modes other than NFF.  Typing "balls -?" gives a summary.  See the
header of the database C file (e.g. "balls.c") for how the size factor
affects the output.

    Other parameters in the code itself (for example, branching angles for
"tree.c" and the fractal dimension in "mount.c") are included for your own
enjoyment, and so normally should not be changed if the database is used for
timing tests.  Because the hashing function in the original release of
"mount.c" is not very good (at low resolutions there is patterning), there is
a better hashing function provided which can be turned on by defining
NEW_HASH.  Use the old hashing function (i.e. don't change anything) for
consistency with previously published results; use the new one for better
results.

    Since the SPD package is designed to test efficiency, the actual shading
of the images is mostly irrelevant.  All that matters is that reflective
surfaces spawn reflection rays and transmitters spawn refraction rays.  For
this reason the effect of the other shading parameters is up to the
implementer.  Note that light intensities, ambient components, etc. are not
specified.  These may be set however you prefer.  Feel free to change any
colors you wish (especially the garish colors of `rings').  The thrust of
these databases is the testing of rendering speeds, and so the actual color
should not affect these calculations.  An ambient component should be used for
all objects, so that the time to compute it is included in the ray tracing
statistics.  A simple formula for a relative intensity (i.e. between 0 and
1) for each light and for the ambient component is the following:

	sqrt(# lights) / (# lights * 2).
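
    For example, a scene with three lights would use an intensity of
sqrt(3)/6, or about 0.289, for each light and for the ambient component.  A
trivial, purely illustrative routine for this formula:

    #include <math.h>
    #include <stdio.h>

    /* suggested relative intensity for each light (and the ambient term)
       in a scene with num_lights light sources */
    double light_intensity(int num_lights)
    {
        return sqrt((double)num_lights) / (2.0 * num_lights);
    }

    int main(void)
    {
        int n;
        for (n = 1; n <= 7; n++)    /* the SPD scenes use 1 to 7 lights */
            printf("%d light(s): %.3f\n", n, light_intensity(n));
        return 0;
    }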

    If you desire a model for a good scene description language, NFF is not
it.  Instead, you should look at Craig Kolb's ray-tracer language, which is
part of his excellent RayShade ray tracer, available via anonymous FTP from
princeton.edu:/pub/Graphics, or at POV-Ray, at ftp.povray.org.  Of the public
domain ray tracers, RayShade is the one to beat for speed, POV for popularity
(especially on the PC).

    The programs all attempt to minimize program size (for ease in
distributing) and memory usage, and none allocate any dynamic memory.  As
such, many of the programs are relatively inefficient, regenerating various
data again and again instead of saving intermediate results.  This behavior
was felt to be unimportant, since generating the data is a one-time affair
which normally takes much less time than actually rendering the scene.


Database Analysis
-----------------

    The databases were designed with the idea of diversity in mind.  The
variables considered important were the amount of background visible, the
number of lights, the distribution of sizes of objects, the amount of
reflection and refraction, and the depth complexity (how many objects a ray
from the eye passes through).

balls:  This database is sometimes called "sphereflake", as it is generated
    like a snowflake curve.  It consists mostly of spheres of various sizes.
    No eye rays hit the background, and the three light
    sources cause a large number of shadow rays to be generated.

gears:  This database consists of a set of meshed gears.  Some of the gears
    are transmitters, making this database lengthy to render.  Each gear face
    has 144 vertices, which tests the efficiency of polygon inside/outside
    testing.  Depth complexity is medium.

mount:  The fractal mountain generator is derived from Loren Carpenter's
    method, and the composition with the four glass spheres is inspired by
    Peter Watterberg's work.  Most objects are tiny (i.e. fractal facets),
    but rendering time is dominated by the rendering of the four large
    spheres.  Depth complexity is low, and there is much background area.

rings:  Six pentagonal rings of objects form a pyramid against a background
    polygon.  With a high amount of interreflection and shadowing, this scene
    is fairly lengthy to render.  Depth complexity is also high, with
    all of the objects partially or fully obscured.

teapot:  The famous teapot on a checkerboard.  There are a number of
    variations on the teapot database, e.g. whether the bottom is included
    (the IEEE CG&A article added a bottom), variations in the control points
    on the lid (which creates thin, almost degenerate triangles), etc.  For
    this database generator, the bottom is created, and the degenerate
    polygonal patches at the center of the lid and bottom are not output.  The
    bottom of the teapot is not flat, interestingly enough.  The resolution of
    the checkerboard shows the resolution of the teapot meshing (with each
    teapot quadrilateral cut into two triangles), e.g. an 8x8 checkerboard
    means that 8x8x2 triangles are generated per patch.  The definitions for
    the 32 Bezier patches are a part of the program.  All objects are
    reflective and there are two light sources.  Depth complexity is low.

tetra:  A recursive tetrahedral pyramid, first visualized by Benoit Mandelbrot
    and Alan Norton.  This scene is dominated by background (around 80%).
    Since the objects are not reflective and there is only one light source,
    this database is particularly quick to render, with various forms of
    coherency being very useful.  Depth complexity is medium, though some
    rays must pass by many triangles for some of the background pixels.

tree:  A tree formed using Aono and Kunii's tree generation method.  With
    seven light sources, the emphasis is on shadow testing.  Shadow caching
    yields little improvement due to the narrow primitives, and many shadow
    rays pass through complex areas without hitting any objects.  There is a
    fair amount of background area.  Depth complexity is low.


		balls      gears       mount       rings
		-----      -----       -----       -----
primitives        SP         P           PS         YSP
total prim.      7382       9345        8196        8401
poly/patches    1417K       9345        8960        874K

lights            3          5           1           3
background        0%         7%         34%          0%
reflector        yes        yes         yes         yes
transmitter       no        yes         yes          no

eye hit rays   263169     245086      173125      263169
reflect rays   175095     304643      354769      315236
refract rays        0     207564      354769           0
shadow rays    954368    2246955      412922     1085002


	       teapot       tetra        tree
	       ------       -----        ----
primitives       TP           P           OSP
total prim.      9264        4096        8191
poly/patches     9264        4096        852K

lights            2           1           7
background       39%         81%         35%
reflector        yes          no          no
transmitter       no          no          no

eye hit rays   161120       49788      169836
reflect rays   225248           0           0
refract rays        0           0           0
shadow rays    407656       46112     1097419


    "primitives" are S=sphere, P=polygon, T=polygonal patch, Y=cylinder,
O=cone, listed from the most numerous in the database to the least.

    "total prim." is the total number of ray-tracing primitives (polygons,
spheres, cylinders and cones) in the scene.  The number of polygons and vectors
generated is a function of the OUTPUT_RESOLUTION.  The default value for this
parameter is 4 for all databases.

    "poly/patches" is the total number of polygons and polygonal patches
generated when using OUTPUT_PATCHES.

    "lights" is simply the number of lights in a scene.  "background" is the
percentage of background color (empty space) seen directly by the eye for the
given view.  "reflector" tells if there are reflective objects in the scene,
and "transmitter" if there are transparent objects.

    "eye hit rays" is the number of rays from the eye which actually hit an
object (i.e. not the background).  513x513 eye rays are assumed to be shot
(i.e. one ray per pixel corner).  "reflect rays" is the total number of rays
generated by reflection off of reflecting and transmitting surfaces.  "refract
rays" is the number of rays generated by transmitting surfaces.  "shadow rays"
is the sum total of rays shot towards the lights.  Note that if a surface is
facing away from a light, or the background is hit, a light ray is not formed.
The numbers produced by a given ray tracer can vary noticeably from those
above, but should all be within about 10%.
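
    As a quick check of these counts (purely illustrative): sampling a
512 x 512 image at its pixel corners requires (512+1)^2 = 263169 eye rays,
which matches the "eye hit rays" entry for the scenes with 0% background.

    #include <stdio.h>

    int main(void)
    {
        long res = 512;
        printf("pixels:   %ld\n", res * res);              /* 262144 */
        printf("eye rays: %ld\n", (res + 1) * (res + 1));  /* 263169 */
        return 0;
    }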

    "K" means exactly 1000 (not 1024), with number rounded to the nearest K.
All of the above statistics should be approximately the same for all
classical ray tracers.


Testing Procedures
------------------

Below are listed the requirements for testing various algorithms.  These test
conditions should be realizable by most renderers, and are meant to represent
a common mode of operation for each algorithm.  Special features which the
software supports (or standard features which it lacks) should be noted in
your statistics.

    1)  The non-polygon (OUTPUT_CURVES) format should normally be used for
	ray-tracing tests.

    2)  All opaque (non-transmitting) primitives can be considered one-sided
        for rendering purposes.  Only the outside of each primitive is visible
        in the scenes.  The only exception is the "teapot" database, in
	which the teapot itself should normally be double sided (this is
	necessary because the lid of the teapot does not fit tightly, allowing
	the viewer to see back faces).

    3)  Polygonal patches (which are always triangles in the SPD) should have
	smooth shading, if available.

    4)  Specular highlighting should be performed for surfaces with a
	reflective component.  The simple Phong distribution model is
	sufficient.

    5)  Light sources are positional.  If positional lights are unavailable,
        give each substitute directional light the direction from the light
        position to the viewing "lookat" position.

    6)  Render at a resolution of 512 x 512 pixels, shooting rays at the
	corners (meaning that 513 x 513 eye rays will be created).  The four
	corner contributions are averaged to arrive at a pixel value.  If
	rendering is done differently, note this fact.  No pixel subdivision
	is performed.

    7)  The maximum tree depth is 5 (where the eye ray is of depth 1).  Beyond
	this depth rays do not have to be spawned.

    8)  All rays hitting only reflective and transmitting objects spawn
	reflection rays, unless the maximum ray depth was reached by the
	spawning ray.  No adaptive tree depth cutoff is allowed; that is, all
	rays must be spawned (adaptive tree depth is a proven time saver and
	is also dependent on the color model used - see Roy Hall's work for
	details).

    9)  All rays hitting transmitting objects spawn refraction rays, unless
        the maximum ray depth was reached or total internal reflection occurs.
        Transmitting rays should be refracted using Snell's Law (i.e. should
        not pass straight through an object); a sketch of this computation
        appears after this list.  If total internal reflection occurs, a
        reflection ray should still be generated at this node.  Note that all
        transmitters in the SPD are also reflective, but this is not a
        requirement of the file format itself.

    10) A shadow ray is not generated if the surface normal points away from
	the light.  This is true even on transmitting surfaces.  Note any
	changes from this condition.

    11) Assume no hierarchy is given with the database (for example, color
	change cannot be used to note a clustering).  The ray tracing program
	itself can create its own hierarchy, but this process should be
	automatic.  Note any exceptions to this (e.g. not including the
	background polygon in the efficiency structure, changing the
	background polygon into an infinite plane, etc).  Such changes can be
	critical for the efficiency of some schemes, so an explanation of why
	changes were made is important.

    12) Timing costs should be separated into at least two areas: preprocessing
	and ray-tracing.  Preprocessing includes all time spent initializing,
	reading the database, and creating data structures needed to ray-trace.
	Preprocessing should be all the constant cost operations--those that
	do not change with the resolution of the image.  Ray-tracing is the
	time actually spent tracing the rays (i.e. everything that is not
	preprocessing).

    13) Other timing costs which would be of interest are in a breakdown of
	times spent in preprocessing and during actual ray-tracing.  Examples
	include time spent reading in the data itself, creating a hierarchy,
        octree, or item buffer, and times spent intersecting the various
        primitives and calculating the shade.

    14) Time-independent statistics on the performance of the algorithm should
	be gathered.  Some examples are the number of ray/object intersection
	tests, the number of ray/object tests which actually hit objects,
	number of octree or grid nodes accessed, and the number of successful
	shadow cache hits.
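
    For item 9, the refraction computation can be sketched as follows (this
is an illustrative routine under the usual conventions, not code from the SPD
or any particular ray tracer):  given a unit incident direction I, a unit
surface normal N facing the incoming ray, and the relative index of
refraction eta (incident medium over transmitting medium), Snell's Law gives
the transmitted direction or reports total internal reflection.

    #include <math.h>

    /* Returns 1 and fills in T with the refracted direction, or returns 0
       on total internal reflection (in which case only a reflection ray is
       spawned, per item 9).  I and N are unit vectors, with N facing the
       incoming ray; eta is n_incident / n_transmitted. */
    int refract_dir(const double I[3], const double N[3], double eta,
                    double T[3])
    {
        double cos_i = -(I[0] * N[0] + I[1] * N[1] + I[2] * N[2]);
        double k = 1.0 - eta * eta * (1.0 - cos_i * cos_i);
        int i;

        if (k < 0.0)
            return 0;    /* total internal reflection */

        for (i = 0; i < 3; i++)
            T[i] = eta * I[i] + (eta * cos_i - sqrt(k)) * N[i];
        return 1;
    }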


Timings
-------

Rendering time of the ray-traced test set on an HP-835, optimized (512 x 512):

       Input   Setup  Ray-Tracing  |  Polygon   Sphere    Cyl/Cone    Bounding
	    (hr:min:sec)           |   Tests     Tests      Tests    Box Tests
-------------------------------------------------------------------------------
balls   0:14    0:19      24:56    |    822K     6197K        0        51726K
gears   0:56    0:18    1:02:31    |  13703K       0          0       107105K
mount   0:31    0:14      18:49    |   4076K     3978K        0        31106K
rings   0:41    0:33      53:41    |   1045K     5315K     16298K      91591K
teapot  1:02    0:18      25:38    |   7281K       0          0        57050K
tetra   0:15    0:06       3:48    |    965K       0          0         7637K
tree    0:40    0:31      11:36    |    479K      524K      1319K      22002K

Input:  time spent reading in the NFF file.

Setup:  time spent creating the database and any ray tracing efficiency
structures, and cleaning up after ray trace (does not include "Input" time).

Ray-Tracing:  time spent traversing and rendering the pixel grid.


A typical set of ray tracing intersection statistics for the tetra database is:

[these statistics should be the same for all users]
  image size: 512 x 512
  total number of pixels: 262144			[ 512 x 512 ]
  total number of trees generated: 263169		[ 513 x 513 ]
  total number of tree rays generated: 263169		[ no rays spawned ]
  number of eye rays which hit background: 213381	[ 81% - might vary ]
  average number of rays per tree: 1.000000
  average number of rays per pixel: 1.003910
  total number of shadow rays generated: 46111		[ might vary a bit ]

[these tests vary depending on the ray-tracing algorithm used]
Intersector performance
  Bounding box intersections:      7636497  -    26.71 usec/test
  Polygon intersections:            964567  -    36.77 usec/test

Ray generation
  Eye rays generated:              263169 ( 49788 hit - 18.92% )
  Reflection rays generated:            0
  Refraction rays generated:            0
  Shadow rays generated:            46111
      Coherency hits:                3407  -  7.39 % of total
      Fully tested:                 42704


    The ray-tracer which generated these statistics is based on hierarchical
bounding boxes generated using Goldsmith & Salmon's automatic hierarchy method
(see IEEE CG&A May 1987).  It no longer uses an item buffer, hence the higher
number of overall intersection tests compared to earlier SPD versions.

    One problem worth analyzing when using the SPD for efficiency tests is how
octree, SEADS, and other space dividing algorithms perform when the background
polygon dimensions are changed (thus changing the size of the outermost
enclosing box, which changes the encoding of the environment).  One analysis of
the effect of the background polygon on RayShade can be found in "Ray Tracing
News", volume 3, number 1.


Future Work
-----------

    These databases are not meant to be the ultimate in standards, but are
presented as an attempt at providing representative modeled environments.  A
number of extensions to the file format could be provided someday, along with
new database generators which use them.  The present databases do not contain
polygons with holes, spline patches, polygonal mesh, triangular strip, or
polyhedron data structures.  Modeling matrices are not output, and CSG
combined primitives are not included.  Light sources are particularly
simplistic.  For a richer, more user-friendly scene description language,
take a look at RayShade or RenderMan.

    As far as database geometry is concerned, most scenes have a preponderance
of small primitives.  If you find that these databases do not reflect the type
of environments you render, please write and explain why (or better yet, write
one or more programs that will generate your "typical" environments--maybe it
will get put in the next release).


Acknowledgements
----------------

    I originally heard of the idea of standard scenes from Don Greenberg back
in 1984.  Some time earlier he and Ed Catmull had talked over coming up with
some standard databases for testing algorithmic claims, and to them must go
the credit for the basic concept.  The idea of making small programs which
generate the data came to me after Tim Kay generously sent huge files of his
tree database via email - I felt there had to be a better way.  Adding the
teapot was inspired by the repeated demand on comp.graphics for this database,
certainly the most famous of all.

    Many thanks to the reviewers, listed alphabetically:  Kells Elmquist, Jeff
Goldsmith, Donald Greenberg, David Hook, Craig Kolb, Susan Spach, Rick Speer,
K.R. Subramanian, J. Eric Townsend, Mark VandeWettering, John Wallace, Greg
Ward, and Louise Watson.  Other people who have kindly offered their ideas and
opinions on this project include Brian Barsky, Andrew Glassner, Roy Hall, Chip
Hatfield, Tim Kay, John Recker, Paul Strauss, and Chan Verbeck.  These names
are mentioned mostly as a listing of people interested in this idea.  They do
not necessarily agree (and in some cases strongly disagree) with the validity
of the concept or the choice of databases.

    Your comments and suggestions on these databases are appreciated.  Please
do send me any timing results for software and hardware which you test, or any
publications which use the SPD package.