
Proceedings Paper

An Illumination-Based Model Of Stochastic Textures
Author(s): Michael T. DiBrino; Ian R. Greenshields

Paper Abstract

It is now widely accepted that a certain class of naturally occurring textures is best modeled using the stochastic fractal methodologies proposed by Mandelbrot. Currently, most scene-modeling techniques that appeal to the fractal paradigm construct their models by successively applying perturbations to a primitive, polygonally based representation of the scene. While we concede the efficacy of this technique in rendering the model, we argue that it is inherently non-scale-invariant, in that the perturbed model must be recomputed whenever the viewpoint of the scene is altered. Rather than adopt this approach, we argue that knowledge of the basic fractal statistics of the scene should suffice to construct a rendered model of the texture without the intermediate computation of a perturbed polygonal structure. Our approach applies fractal geometry to the illumination physics of the object; by this we mean that a basic (possibly polygonal) model of the object, together with its fractal statistics, suffices to construct a rendered version of the object with a fractal texture independent of the scene viewpoint position. To accomplish this, we rely on local perturbations of the illumination normals at the time each normal is evaluated from the basic model of the object. The extent and nature of this local perturbation is guided both by the fractal statistics of the object and by the position of the scene viewpoint. Thus we argue that the stochastic texture should be developed in situ, at the time the illumination is computed, rather than as a preliminary step in the modeling of the scene. In this way we substantially reduce the net computational complexity of modeling and subsequent rendering, since the size of the ray tree is determined by the complexity of the base object model rather than by the size of the perturbed and subdivided model.
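The idea of perturbing the shading normal at evaluation time, rather than subdividing and displacing the geometry itself, can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): it uses summed octaves of value noise as a stand-in for fractional Brownian motion, where the Hurst exponent controls the fractal statistics, and tilts a Lambertian shading normal by the noise gradient at the shaded point. All function names and parameters here are assumptions for illustration only.

```python
import math

def value_noise(x, y, seed=0):
    """Smoothly interpolated hash-based noise in roughly [-1, 1]."""
    def hash2(ix, iy):
        h = (ix * 374761393 + iy * 668265263 + seed * 982451653) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 32767.5 - 1.0
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep weights for bilinear interpolation between lattice values.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    n00, n10 = hash2(ix, iy), hash2(ix + 1, iy)
    n01, n11 = hash2(ix, iy + 1), hash2(ix + 1, iy + 1)
    top = n00 + sx * (n10 - n00)
    bot = n01 + sx * (n11 - n01)
    return top + sy * (bot - top)

def fbm(x, y, hurst=0.8, octaves=5):
    """fBm-like height field: octave amplitudes fall off as 2^(-H*k),
    so the Hurst exponent H sets the fractal statistics of the texture."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= 2.0 ** (-hurst)
        freq *= 2.0
    return total

def perturb_normal(n, uv, scale=0.3, eps=1e-3):
    """Tilt the base-model normal n by the fBm gradient at surface
    coordinates uv, evaluated on the fly (no subdivided geometry)."""
    u, v = uv
    du = (fbm(u + eps, v) - fbm(u - eps, v)) / (2 * eps)
    dv = (fbm(u, v + eps) - fbm(u, v - eps)) / (2 * eps)
    px, py, pz = n[0] - scale * du, n[1] - scale * dv, n[2]
    length = math.sqrt(px * px + py * py + pz * pz)
    return (px / length, py / length, pz / length)

def lambert(n, light):
    """Diffuse shading term from the (perturbed) normal."""
    return max(0.0, n[0] * light[0] + n[1] * light[1] + n[2] * light[2])

# A flat patch shaded this way shows stochastic texture even though the
# underlying geometry is a single polygon with normal (0, 0, 1).
n = perturb_normal((0.0, 0.0, 1.0), (0.35, 0.72))
shade = lambert(n, (0.0, 0.0, 1.0))
```

Because the perturbation is a deterministic function of the surface coordinates, re-rendering from a new viewpoint only requires re-evaluating the noise at the shaded points, which is the scale-invariance argument made in the abstract; a view-dependent `scale` or octave count could further band-limit the detail, as the abstract suggests.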

Paper Details

Date Published: 7 March 1989
PDF: 7 pages
Proc. SPIE 1005, Optics, Illumination, and Image Sensing for Machine Vision III, (7 March 1989); doi: 10.1117/12.949036
Author Affiliations:
Michael T. DiBrino, University of Connecticut (United States)
Ian R. Greenshields, University of Connecticut (United States)


Published in SPIE Proceedings Vol. 1005:
Optics, Illumination, and Image Sensing for Machine Vision III
Donald J. Svetkoff, Editor(s)

© SPIE.