Proceedings Volume 1083

Three-Dimensional Visualization and Display Technologies

Scott S. Fisher, Woodrow E. Robbins
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 11 September 1989
Contents: 1 Session, 34 Papers, 0 Presentations
Conference: OE/LASE '89 1989
Volume Number: 1083

Table of Contents

All Papers
Stereoscopic CAD and Environmental Sculpture: Enhancement of the Design Process in the Visual Arts
Robert N. Fisher, Pier Luigi Bandini
In this paper, co-authors Robert Fisher and Pier Luigi Bandini describe their personal observations concerning stereo enhancements of computer graphics images employed in their research. In Part One, Robert Fisher, a professional sculptor, Professor and Artist-in-Residence in the College of Engineering at Penn State, cites three recent environmental sculpture projects: "See-scape," "A Page from the Book of Skies," and an as yet untitled work. Wireframe images, interior views of architectural spaces, and complex imagery are rendered comprehensible by stereo 3-D. In Part Two, Pier L. Bandini, Associate Professor of Architecture and Director of the Architecture CAD Lab at Penn State, describes the virtues of the stereo-enhanced wireframe model--the benefits of the "see-through coupled with a complete awareness of the whole space." The final example, of a never-realized XVIII-century project, suggests a new and profound application of stereo 3-D to historical inquiry, namely, the experience of ancient spaces and structures that no longer exist or that were never constructed.
Stereo TV Improves Manipulator Performance
Robert E. Cole, Donna L. Parker
Six observers, experienced in telerobotic operations, were used across four replicated studies of remote performance of a simulated space station assembly task. An alignment/insertion task was performed with a remotely operated manipulator arm viewed either directly or through stereoscopic or monoscopic TV viewing systems. Target position, space lighting, and learning effects were also assessed by measures of task time and manipulator collisions. Performance with Stereo view was significantly superior to that with Mono in Experiments 1, 2, and 4. Its superiority fell below the required significance level in Experiment 3 because of the accumulation of practice effects across the first two studies. Experiment 4, in which the left-right positions of manipulator arm and task element were reversed, reestablished the strong superiority of Stereo view over Mono. These results clearly show the superiority of Stereo TV over Mono viewing systems. They suggest that learning can also improve performance under Mono view when accompanied by Direct view and Stereo view experience, but such learning is specific to the perceptual and motor conditions that were present in practice. Space lighting was not significant in the two studies in which it was assessed.
Experience with Stereoscopic Display Devices and Output Algorithms
James S. Lipscomb
Unobtrusiveness seems much more important than price or image quality to high-end workstation stereo users. Polarizing plate technology freed our users to concentrate better on their application. Unobtrusiveness seems to be important in the marketplace too, since the polarizing plate is selling well despite a price 2-3 times that of similar active-glasses installations. These observations come from watching chemists at the University of North Carolina at Chapel Hill as they used molecular computer graphics with many stereo display devices. The output algorithm of rotation to produce stereo is well known to be wrong for the perspective case, but correct for the non-perspective (orthographic) case. However, a shear is better for orthographic stereo, because it correctly handles clipping planes on some graphics hardware, and it is faster to compute than a rotation. A shear creates the illusion that the transformation order is reversed.
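To make the shear construction concrete, the following is a minimal Python sketch of producing left and right orthographic views with a depth-dependent shear in x rather than a rotation. The eye offset and convergence depth are illustrative assumptions, not values from the paper.

    import numpy as np

    def shear_stereo_matrix(eye_offset, convergence_z):
        """4x4 shear that offsets x in proportion to depth z.
        eye_offset: signed half of the eye separation (model units, assumed).
        convergence_z: depth of zero parallax (assumed)."""
        m = np.eye(4)
        m[0, 2] = eye_offset                    # x' = x + eye_offset * (z - convergence_z)
        m[0, 3] = -eye_offset * convergence_z
        return m

    # A point on the convergence plane gets the same x in both views (zero parallax).
    p = np.array([1.0, 2.0, 5.0, 1.0])
    print(shear_stereo_matrix(-0.03, 5.0) @ p, shear_stereo_matrix(+0.03, 5.0) @ p)

A shear leaves z unchanged, which is presumably why it interacts more cleanly with z clipping planes than a per-eye rotation does.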
Voice Controlled Stereographic Video Camera System
Georgianna D. Goode, Michael L. Philips
For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.
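The abstract does not give the ranging formula, but as a hypothetical illustration, a parallel-camera stereo rig can estimate the distance to the object under the cursor from its left/right image disparity; the baseline and focal length below are assumed values, not the ARD system's parameters.

    def stereo_range(baseline_m, focal_px, disparity_px):
        """Range from disparity for parallel cameras (illustrative, not the ARD system's method).
        baseline_m: camera separation in meters; focal_px: lens focal length in pixels;
        disparity_px: horizontal offset of the cursor's target between the two images."""
        if disparity_px <= 0:
            raise ValueError("target must have positive disparity")
        return baseline_m * focal_px / disparity_px

    # Example: a 0.2 m baseline, 800 px focal length, and 16 px disparity give a 10 m range.
    print(stereo_range(0.2, 800, 16))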
Exploring Virtual Worlds With Head-Mounted Displays
J. C. Chung, M. R. Harris, F. P. Brooks, et al.
For nearly a decade the University of North Carolina at Chapel Hill has been conducting research in the use of simple head-mounted displays in "real-world" applications. Such units provide the user with non-holographic true three-dimensional information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a "virtual world" which behaves like the real world in some respects. UNC's head-mounted display was built inexpensively from commercially available off-the-shelf components. Tracking of the user's head position and orientation is performed by a Polhemus Navigation Sciences' 3SPACE* tracker. The host computer uses the tracking information to generate updated images corresponding to the user's new left eye and right eye views. The images are broadcast to two liquid crystal television screens (220x320 pixels) mounted on a horizontal shelf at the user's forehead. The user views these color screens through half-silvered mirrors, enabling the computer-generated image to be superimposed upon the user's real physical environment. The head-mounted display has been incorporated into existing molecular modeling and architectural applications being developed at UNC. In molecular structure studies, chemists are presented with a room-sized molecule with which they can interact in a manner more intuitive than that provided by conventional two-dimensional displays and dial boxes. Walking around and through the large molecule may provide quicker understanding of its structure, and such problems as drug-enzyme docking may be approached with greater insight. In architecture, the head-mounted display enables clients to better appreciate three-dimensional designs, which may be misinterpreted in their conventional two-dimensional form by untrained eyes. The addition of a treadmill to the system provides additional kinesthetic input into the understanding of building size and scale.
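As a generic sketch of how tracker data drives the two views (UNC's actual code is not reproduced here), each frame the head pose reported by the tracker yields a left and a right eye position, and the scene is rendered once per eye; the interpupillary distance below is a typical assumed value.

    import numpy as np

    def eye_positions(head_pos, head_rot, ipd=0.064):
        """head_pos: (3,) tracker position; head_rot: 3x3 tracker rotation matrix;
        ipd: interpupillary distance in meters (assumed typical value)."""
        right_axis = head_rot @ np.array([1.0, 0.0, 0.0])   # head's local +x axis
        return head_pos - 0.5 * ipd * right_axis, head_pos + 0.5 * ipd * right_axis

    # Each position feeds a separate view matrix for one of the two LCD screens.
    left_eye, right_eye = eye_positions(np.zeros(3), np.eye(3))
    print(left_eye, right_eye)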
Low Cost Design Alternatives For Head Mounted Stereoscopic Displays
Stephen W. Martin, Richard C. Hutchinson
Described is a low cost design approach for stereoscopic head mounted displays (HMDs). The approach is to couple two miniature image sources to a helmet using an aviator's night vision system mount, and directly view the images using telescope eyepieces. Various configurations are realized by use of different image sources and different focal length eyepieces. Performance depends upon the capabilities of the individual components utilized. Two different types of displays constructed for teleoperation applications are described. These displays employ miniature cathode ray tubes with RS-170A compatible video driver electronics, 24.5mm focal length eyepieces, weigh approximately six pounds, provide apparent horizontal fields of view of 40 and 55 degrees, and achieve from 250 to 500+ TV lines per picture height of horizontal resolution. Limitations of and recommendations for these types of display are discussed.
Chromostereoscopic Microscopy
Richard A. Steenblik
Chromostereoscopy is a simple optical technique for transforming the colors in a single image into stereoscopic image planes. The process allows the user to control the amount of depth observed and to invert the depth at will. Preliminary work has begun to determine the usefulness of applying this technique to microscopy. Potential benefits of this technique include enhancing the discrimination of color and effecting the spatial isolation of selected colors. A number of standard microscope illumination techniques exist which can produce brilliant multicolored images, often without altering the specimen. Examination of six of these techniques indicates that under selected conditions the chromostereoscopic process can be used with binocular microscopes to create a stereoscopic depth effect.
Computer-Generated Barrier-Strip Autostereography
Daniel J. Sandin, Ellen Sandor, William T. Cunnally, et al.
This paper discusses (1) the computer graphics transformations necessary to produce source images for barrier-strip autostereograms, and (2) current research to replace photographic processes with computational processes to combine different views into a stereogram. By connecting a computer to a high-resolution output scanner, computer-based images and digitized camera images can automatically be combined and printed on transparency film. Automating this process improves the visual quality of the autostereograms and expands the medium's commercial and artistic potential.
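A minimal sketch of the computational combination step is shown below, assuming N equally sized views are interleaved one pixel column at a time; the paper's actual interleaving geometry and scanner resolution may differ.

    import numpy as np

    def interleave_views(views):
        """views: list of N images of identical shape (H, W).
        Returns an (H, N*W) image whose columns cycle through the views."""
        n = len(views)
        h, w = views[0].shape
        out = np.empty((h, n * w), dtype=views[0].dtype)
        for i, v in enumerate(views):
            out[:, i::n] = v            # view i supplies every n-th column
        return out

    views = [np.full((4, 3), k) for k in range(4)]   # four dummy 4x3 views
    print(interleave_views(views)[0])                # 0 1 2 3 0 1 2 3 0 1 2 3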
Everyman's real-time real 3-D
Homer B. Tilton
Hardware is described which converts your dual-trace oscilloscope into a parallactiscope (parallactic oscilloscope). The parallactiscope allows you to readily generate real 3-D images in real time. The images are real 3-D in the sense that they are holoform (hologram-like), meaning you can peer around them and see stereo without glasses. Multiple observers see different images. They are also holoform in that the basic images are directly viewed and are presented on a stationary surface (the CRT screen) as are holograms. The images are produced in real time electronically from an arbitrary trio of waveforms, just as ordinary oscilloscope images are produced in real time electronically from an arbitrary duo of waveforms. Sufficient detail is given to enable you to build the described hardware at a parts cost of less than $500, exclusive of the oscilloscope cost. Chances are the oscilloscope will not need modification. This is truly a breakthrough in 3-D. It is a product of the fuller understanding of how holoform images are constituted resulting from the perfection of the hologram.
Parallax Barrier 3DTV
Ian Sexton, David Crawford
This paper presents an overview of 3D display techniques and differentiates between the requirements of computer graphic displays and television displays. The virtues of a parallax barrier technique in the form of a scanning optical slit are expounded, and potential 3D-TV display techniques are examined.
Compatibility Of Stereoscopic Video Systems With Broadcast Television Standards
Lenny Lipton
A flickerless stereoscopic video system employing a time multiplexing technique producing 120 fields per second was developed by StereoGraphics Corporation. An important characteristic of the product is that it uses unmodified recording and transmission equipment operating at the nominal 60 fields per second standard, doubling the number of fields at playback while operating within the NTSC protocol without any increase in bandwidth. In order to improve image quality, an off-the-shelf scan converter is used to double the number of lines per field per eye. Display screen size liquid crystal modulators were developed so that passive glasses may be worn by the viewer. In addition, the Universal Camera Controller was developed to multiplex the signals from two unmodified cameras, so that they can fit within the existing NTSC bandwidth. The system, designed for industrial applications, comprises, from camera to display device, an integral approach to upward compatibility. The design of a downwardly compatible system would be most appropriate for a consumer product and needs to consider the question of compatibility with regard to all aspects of the broadcast service and video infrastructure, including existing consumer video tape recorders.
Computer Generated Lenticular Stereograms
Shaun Love, David F. McAllister
Computer generated true three dimensional hardcopy using lenticular sheets offers advantages over alternative hardcopy methods. Easier to record than holographic stereograms and offering full color, lenticular displays have image quality similar to barrier displays but without their brightness problems. Two different means of computer generation are possible. For both, a sequence of perspective views is calculated and then merged into a single composite image or stereogram. Interlacing these views is usually done optically, but a computational method is also considered. Efforts have been made at direct writing of holograms; direct writing can be achieved much more readily for lenticular stereograms, since a much lower spatial frequency will suffice. The resolution needed depends upon the pitch of the cylindrical lenses in the lenticular sheet and the number of perspective views to be interlaced. We consider the use of 300 dpi laser printers and show that while the resolution is sufficient, linewidth and spacing are problematic. We also consider production of full color stereograms and discuss the problems caused by dithering for color palette generation. Higher resolution output devices such as digital film recorders offer much greater potential for panoramic, full color display, and direct writing of lenticular stereograms is considered a viable, cost-effective technique for generating autostereoscopic 3D displays.
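The resolution requirement reduces to simple arithmetic: the printer must resolve one column per view per lenticule. The lens pitch and view count below are assumed for illustration and are not the paper's figures.

    views = 6                  # perspective views to interlace (assumed)
    lens_pitch_lpi = 50        # cylindrical lenses per inch on the sheet (assumed)
    printer_dpi = 300

    required_dpi = views * lens_pitch_lpi          # columns per inch the printer must resolve
    dots_per_view_column = printer_dpi // required_dpi
    print(required_dpi, dots_per_view_column)      # 300, 1: resolution suffices, but each view
                                                   # column is a single dot, so linewidth and
                                                   # dot spacing become the limiting factors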
Three-Dimensional Measurement, Display, and Interpretation of Fluid Flow Datasets
Minami Yoda, Lambertus Hesselink
The three-dimensional structure of the flow around a delta wing with tangential leading edge blowing is visualized. The flow is "sliced" by a scanning laser light sheet into a set of two-dimensional cross sections, resulting in a set of three-dimensional data. The light scattered off smoke particles in the flow during a 30 ns laser pulse is imaged onto a high speed camera. The measurement period of a few milliseconds is brief enough to image the flow "instantaneously" (i.e. to within the resolution of the apparatus). The resulting cross sections are digitized and filtered to reduce noise. The STANSURFS software package developed in the Fourier Optics and Optical Diagnostics lab at Stanford is used to threshold the pictures at a prespecified value, stack the thresholded cross sections, and reconstruct the three-dimensional flow field at that threshold using cubic B-splines. The resulting structures are displayed in stereo pairs and viewed in three dimensions. Various display techniques including clipping the surface to reveal interior details and rotating the viewer perspective around the structure are used to gain further insight into the three-dimensional nature of this and other flows.
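The threshold-and-stack step can be sketched generically as follows (the cubic B-spline surface reconstruction performed by STANSURFS is not reproduced): each filtered cross section is binarized at the prespecified value and the results are stacked into a volume.

    import numpy as np

    def stack_thresholded(slices, threshold):
        """slices: list of 2-D arrays, one per light-sheet position.
        Returns a 3-D boolean volume marking samples at or above the threshold."""
        return np.stack([s >= threshold for s in slices], axis=0)

    rng = np.random.default_rng(0)
    slices = [rng.random((64, 64)) for _ in range(16)]    # stand-in cross sections
    volume = stack_thresholded(slices, threshold=0.8)
    print(volume.shape)                                   # (16, 64, 64)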
Environment for Distributed Visualization
H. Stephen Anderson, John Andrew Berton Jr., Barbara Helfer Dean
This paper describes an environment for distributed visualization. Distributed visualization is a twofold concept. It encompasses spreading the computational load of graphical simulations over a local area network and distributing access to graphical information directly to the desktops of those needing it. At The Ohio Supercomputer Center, a typical user work area has network access not only to other computing resources, such as supercomputers, but also to a graphical network of visual devices. Scarce resources such as full-color frame buffers and digital video recorders are allocated and accessed using transparent network tools. This system will be described in detail, including philosophy of design and equipment utilization.
VERITAS: Visualization Environment Research In The Applied Sciences
A. Giacalone, J. Heller, A. Kaufman, et al.
VERITAS (Visualization Environment Research In The Applied Sciences) involves the design and development of tools for the creation of user environments for scientific computing systems. The initial goal has been to provide a tool-based environment in which a scientist can easily and rapidly construct, with no programming effort, a powerful customized graphical user interface to an existing scientific application. The ultimate goal is to support in a similar interactive, high-level way the development of the entire scientific application system by the scientist, to allow a tighter integration of the application model and the graphical interface. The project involves research and development in visualization and interactive data presentation techniques, knowledge representation and management systems, and high-level specification and programming languages.
Statistical Characteristics of Stereoscopic Images for Image Coding
Hiroyuki Yamaguchi, Yasushi Tatehira, Kenji Akiyama, et al.
Statistical characteristics of stereoscopic images and the possibility of stereoscopic image data compression utilizing the mutual correlation between right and left images are presented. First, the mutual (cross) correlation between right and left images and the autocorrelation of the left images are measured. Next, one image is divided into blocks of fixed size. Each block is shifted consecutively and then subtracted from the corresponding area of the other image to form a residual block at each displacement position. The block with the least residual among all translated blocks is determined. These least residual blocks are then assembled to form a least residual image. Finally, the statistical properties of residual images and block translation values are investigated. The results indicate that data compression is possible by shifting one image horizontally and subtracting it from the corresponding area of the other. This research enables efficient image coding for stereoscopic images by utilizing the correlation between the two images, and makes low-cost, more realistic 3D visual communications possible.
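The block-matching step can be sketched as follows, under two assumptions the abstract leaves open: the search is horizontal only, and the residual is measured by the sum of absolute differences.

    import numpy as np

    def best_shift(block, other, top, left, max_shift):
        """Horizontal shift of `block` (taken from one image) that minimizes the
        residual against the other image; returns (shift, residual_block)."""
        h, w = block.shape
        best_shift_px, best_res, best_cost = 0, None, np.inf
        for s in range(-max_shift, max_shift + 1):
            if left + s < 0 or left + s + w > other.shape[1]:
                continue
            res = block.astype(int) - other[top:top + h, left + s:left + s + w].astype(int)
            cost = np.abs(res).sum()
            if cost < best_cost:
                best_shift_px, best_res, best_cost = s, res, cost
        return best_shift_px, best_res

    left_img = np.tile(np.arange(64, dtype=np.uint8), (16, 1))
    right_img = np.roll(left_img, -3, axis=1)             # synthetic 3-pixel disparity
    shift, _ = best_shift(right_img[4:12, 20:28], left_img, 4, 20, 8)
    print(shift)                                          # recovers the 3-pixel shift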
Automated analysis of fluid flow topology
James Helman, Lambertus Hesselink
Much research has been devoted to image understanding and the automated analysis of images of two-dimensional scalar data, but comparatively little work has been devoted to understanding higher dimensional data. Experiments and numerical simulations of fluid flows yield extremely large multivariate data sets which are often too complicated for manual inspection, manipulation, comparison and display. As with scene analysis, an abstract representation of the data set would greatly facilitate these tasks. In the analysis of vectorial data such a representation can be based on the topology of the vector field. A new method is presented for the representation and visualization of vector fields, in particular those derived from fluid flows. It is based on critical point analysis. The method can be applied to general vector fields, but as our main interest is in fluid flows, we have included in the analysis walls (no-slip boundaries) on which the vector field vanishes. Singular points on the walls and critical points in the external flow serve as a basis for building a representation of the global topology as determined by the tangent curves of the vector field. The resulting representations, which consist of critical points, dividing stream surfaces, surfaces of separation, vortical cores, etc., may then be displayed and compared to study the development of flow topology with time or as a function of a parameter such as angle of attack. The representation may also be used as a "road map" for further investigation of the original data set in its entirety. Results are presented from the application of these methods to two-dimensional and two-dimensional parameter-dependent fluid flow data sets.
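The core classification step of critical point analysis can be illustrated in two dimensions with the sketch below (a generic example, not the authors' implementation): at a point where the velocity vanishes, the eigenvalues of the Jacobian determine whether the point is a node, saddle, spiral, or center.

    import numpy as np

    def classify_critical_point(jacobian):
        """jacobian: 2x2 matrix of velocity derivatives at a zero of the field."""
        ev = np.linalg.eigvals(jacobian)
        if np.any(np.abs(ev.imag) > 1e-12):                 # complex conjugate pair
            return "center" if np.allclose(ev.real, 0.0) else "spiral (focus)"
        if ev.real[0] * ev.real[1] < 0:
            return "saddle"
        return "node (source)" if ev.real[0] > 0 else "node (sink)"

    print(classify_critical_point(np.array([[0.0, -1.0], [1.0, 0.0]])))   # center
    print(classify_critical_point(np.array([[1.0,  0.0], [0.0, -1.0]])))  # saddle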
Visualization Of Complex Data
Edward J. Farrell, Zaphiris D. Christidis
Large-scale simulations and scientific measurements often produce complex multi-dimensional data sets. To fully utilize the results of costly computations and measurements, effective visualization methods are required to interpret the data [1, 2, 3]. Prior methods based on contour plots, mesh surfaces, or tiled structures are not adequate. Structure modeling does not have the capacity needed for display and interpretation; data interpretation is more complex than everyday vision. The user may wish to visualize a nebulous 3D region, data without distinct surfaces or structures, or a region buried inside a larger structure, or to interrelate 3D regions in different data sets.
Tools For 3D Scientific Visualization In Computational Aerodynamics At NASA Ames Research Center
Gordon Bancroft, Todd Plessel, Fergus Merritt, et al.
The purpose of this paper is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. This is by far the most commonly used method for visualization of computational aerodynamics. The next two methods are much more desirable, yet much less common given the current state of supercomputer and workstation evolution and performance. Both of these are more sophisticated methods because they involve analysis of the flow codes as they evolve. Tracking refers to a flow code producing displays that give a scientist some indication of how his experiment is progressing so he could, perhaps, change some parameters and then restart it. Steering refers to actually interacting with the flow codes during execution by changing flow code parameters. (Steering methods have been employed for grid generation pre-processing as well, to substantially reduce the time it takes to construct a grid for input to a flow solver.) When the results of the simulation are processed for viewing by distributing the process between the workstation and the supercomputer, it is called distributed processing. This paper describes the software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording. A new software environment, FAST, is introduced that is currently being developed at NASA Ames for implementation on workstations that will be procured in the latter half of 1989. This modular software environment will take advantage of the multiple-processor and large-memory configurations and other features specified in the NASA RFP for these workstations, and is a natural evolution of the techniques described in this paper.
A Unified Approach To The Design Of Visualization Software For The Analysis Of Field Problems
Robert R. Dickinson
The term 'field' is used herein to refer to a process which associates a physical quantity with each point in a region of space. Fields can be scalar, vector, or tensor valued. Until recently, the commonly used methods for visualizing results of field analyses have tended to over-emphasize the typically discrete nature of analysis output. In contrast, emphasis in this paper is placed on the continuous nature of fields. Test results of an experimental system described herein suggest that interactive feature extraction may become more important than traditional 'batch process' oriented approaches to post-analysis visualization. Accordingly, users will need to be given a lot more freedom and flexibility to seek out those features of a given field that they consider important and useful.
Three-dimensional Optical Tomographic Measurements of Mixing Fluids
Ray Snyder, Lambertus Hesselink
Instantaneous three-dimensional measurements of species concentration in a co-flowing jet are obtained with optical tomography. The processing of 36 interferometric views to obtain measurements with 1.3mm resolution and 5% accuracy is described. Data from one experiment is presented, and merits of two visualizations of the data are compared.
The Cube System As A 3D Medical Workstation
A. Kaufman, R. Bakalash
The Cube system is a 3D graphics system centered around a large cubic frame-buffer of voxels with several processors that input, manipulate, view, and render both medical and synthetic 3D images. A software prototype has been integrated on a Sun workstation with a 6D Polhemus input device. The physician interacts directly with the medical images, the synthetic objects, and their transformations, employing inherent 3D interaction tools. The system supports the reconstruction, manipulation, analysis, and display of 3D volumetric medical images. The Cube medical system is applicable to diagnostic, planning, therapeutic, surgical, instructional, and research purposes. A case study of a CT reconstruction and display of the cervical region is presented.
Using Electronic Stereoscopic Color Displays: Limits Of Fusion And Depth Discrimination
Yei-Yu Yeh, Louis D. Silverstein
The effective use of stereoscopic display systems is dependent, in part, upon reliable data describing binocular fusion limits and the accuracy of depth discrimination for such visual display devices. In two experiments, these issues were addressed as were the effects of interocular crosstalk. Results showed that the limits of fusion were approximately 27.11 minutes of arc for crossed disparity and 24.21 minutes of arc for uncrossed disparity. Crosstalk had no effect on fusion limits for the contrast ratio and stimulus configuration used. Crosstalk also did not affect accuracy in discriminating disparities within the fusion limits. Subjects were extremely accurate in distinguishing relative distances among four groups of stimuli and were able to identify a pair of stimuli located at the same depth plane within each group. However, crosstalk affected subjects' vergence responses as well as subjective ratings of image quality and the conspicuity of ghost images.
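To relate these limits to a concrete display, the following worked example (with illustrative numbers, not the paper's apparatus) converts an on-screen horizontal offset between the left and right half-images into an approximate disparity in minutes of arc.

    import math

    def disparity_arcmin(offset_mm, viewing_distance_mm):
        """Approximate angular disparity of a screen offset viewed from a given distance."""
        return math.degrees(math.atan(offset_mm / viewing_distance_mm)) * 60.0

    # A 5 mm offset viewed from 700 mm is about 24.6 arcmin, near the uncrossed fusion limit.
    print(round(disparity_arcmin(5.0, 700.0), 1))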
Perceptual Issues In Scientific Visualization
Mary K. Kaiser, Dennis R. Proffitt
In order to develop effective tools for scientific visualization, consideration must be given to the perceptual competencies, limitations, and biases of the human operator. Perceptual psychology has amassed a rich body of research on these issues, and can lend insight to the development of visualization techniques. Within a perceptual psychological framework, the computer display screen can best be thought of as a special kind of impoverished visual environment. Guidelines can be gleaned from the psychological literature to help visualization tool designers avoid ambiguities and/or illusions in the resulting data displays.
Optimal Display Factors in Stereoscopic TV Images for Human Stereoscopic Vision
Yasushi Tatehira, Hiroyuki Yamaguchi, Kenji Akiyama, et al.
Experiments to compare the capability of a stereoscopic TV image to generate disparity (binocular parallax), the main depth cue for human stereoscopic vision, with the characteristics of human stereoscopic vision are presented. In this paper we show that, because of the inadequate horizontal resolution of the conventional TV format (NTSC), the minimum amplitude of disparity that a stereoscopic TV image can generate is stereoscopically perceptible, thus causing image quality deterioration in the form of false depth contouring. We also show that the perceptible high frequency limit of disparity is about 4 cpd (cycles/degree) for horizontal gratings, and about 3 cpd for vertical gratings. These values are below the spatial frequency bandwidth of disparity that a stereoscopic TV image can generate; therefore, efficient bandwidth utilization is possible. The results of these experiments provide guidelines for designing high quality displays for three-dimensional images.
Visions of Visualization Aids: Design Philosophy and Observations
Stephen R. Ellis
Aids for the visualization of high dimensional scientific or other data must be designed. Simply casting multidimensional data into a 2 or 3D spatial metaphor does not guarantee that the presentation will provide insight or a parsimonious description of phenomena implicit in the data. Useful visualization, in contrast to glitzy, high-tech, computer-graphics imagery, is generally based on pre-existing theoretical beliefs concerning the underlying phenomena. These beliefs guide selection and formatting of the plotted variables. Visualization tools are useful for understanding naturally 3D databases such as those used by pilots or astronauts. Two examples of such aids for spatial maneuvering illustrate that informative geometric distortion may be introduced to assist visualization and that visualization of complex dynamics alone may not be adequate to provide the necessary insight into the underlying processes.
UIMX: A User Interface Management System For Scientific Computing With X Windows
Michael Foody
Applications with iconic user interfaces (for example, interfaces with pulldown menus, radio buttons, and scroll bars), such as those found on Apple's Macintosh computer and the IBM PC under Microsoft's Presentation Manager, have become very popular, and for good reason. They are much easier to use than applications with traditional keyboard-oriented interfaces, so training costs are much lower and just about anyone can use them. They are standardized between applications, so once you learn one application you are well along the way to learning another. Using one application reinforces the interface elements common to all of them, and, as a result, you remember how to use them longer. Finally, for the developer, support costs can be much lower because of their ease of use.
Scientific Work Environments In The Next Decade
Julian E. Gomez
The application of contemporary computer graphics to scientific visualization is described, with emphasis on the non-intuitive problems. We then describe a radically different approach which centers on the idea of the scientist being in the simulation display space rather than observing it on a screen. Interaction is performed with nonstandard input devices to preserve the feeling of being immersed in the three dimensional display space. Construction of such a system could begin now with currently available technology.
Alternative Representations of Visual Space
Aries Arditi
Although each retinal image is two-dimensional, binocular geometry requires complete representations of the field of view to be three-dimensional: the set of visual directions from which light can impinge on either retina can be fully represented in no less than three dimensions. An easily interpretable means of representing environmental space as viewed by a human operator would have wide application in many areas of human factors engineering. This paper discusses a method for delineating and testing hypotheses about the relationship between the retinal images and the three-dimensional visual space they serve, under the conditions of (a) changing eye position, (b) occlusion by structures that are part of or are mounted on the observer, such as the bony facial structures, spectacles or headgear, (c) occlusion by environmental objects, (d) defects of the visual field such as the normal blind spot and areas of temporarily reduced visibility due to local adaptation and photopigment bleaching effects, and (e) variables that alter the focus of environmental imagery on the retinas.
Digital Perspective Generation And Stereo Display Of Composite Ocean Bottom And Coastal Terrain Images
Kirk G. Smedley, Barry K. Haines, David Van Vactor, et al.
Depth sounding data was adapted and used in the generation of stereoscopic perspective imagery. After resampling the soundings into a uniform grid, a lighting function was applied in order to create a shaded gray scale image. The uniform depth grid and the gray scale image were then used in a perspective reprojection program that generates any user-defined perspective view of the data. Simply by generating a pair of such images with a slight difference in azimuth angle, a stereo pair of images may be created. A newly adapted version of this perspective reprojection program has been developed that enables a user to interactively generate stereo perspective "movies" in near real-time, further enhancing three-dimensional perception. Finally, three different types of data -- National Oceanic and Atmospheric Administration (NOAA) depth soundings, U. S. Geological Survey (USGS) aerial photography, and USGS map elevations -- have been combined to create a striking composite three dimensional view of on- and off-shore elevations.
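The shading step can be sketched with a generic Lambertian hillshade (the paper's exact lighting function is not specified here): surface normals estimated from the uniform depth grid are dotted with a chosen light direction to produce the gray-scale image, which is then reprojected twice with a small azimuth difference to form the stereo pair.

    import numpy as np

    def hillshade(depth_grid, light_dir=(1.0, 1.0, 2.0)):
        """depth_grid: 2-D array of depths on a uniform grid; light_dir is an assumed direction."""
        dz_dy, dz_dx = np.gradient(depth_grid.astype(float))
        normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(dz_dx)))
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        light = np.asarray(light_dir, dtype=float)
        light /= np.linalg.norm(light)
        return np.clip(normals @ light, 0.0, 1.0)          # gray levels in [0, 1]

    grid = np.add.outer(np.linspace(0, 50, 128), np.linspace(0, 20, 128))  # dummy sloping bottom
    print(hillshade(grid).shape)                           # (128, 128) shaded image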
The Visualization Management System Approach To Visualization In Scientific Computing
D. M. Butler, M. H. Pendley
We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.
GeoRGE-3D: A Minicomputer-Based System For Interactive 3-D Rendering Of Digital Environmental Data Sets
Thomas H. Vonder Haar, Donald L. Reinke, H. Shaz Naqvi
The visualization of environmental data in four dimensions has come to the forefront of research activities in the computer imaging component of the meteorological community. Because of the perishable nature of many of the atmospheric data sets, real-time 3-D displays have not been practical in the operational arena. We have been limited to the generation of 2-dimensional image displays or time consuming 3-dimensional renderings, used primarily for post-analysis and research.
Visualization Tools For Industrial Design Problems Applied to Electron Optics
J. Alexander, D. Bechis, N. Winarsky, et al.
At the David Sarnoff Research Center there exists a long-standing research effort in support of the design of electron optic devices. The design process for such devices - the color high definition picture tube being the most recent and most outstanding example - involves the generation of very large volumes of data, from both simulation and experimentation. The goal of the design process is to simultaneously optimize performance parameters such as resolution, brightness, contrast and color purity, subject to manufacturing constraints, while varying the geometry or operating conditions of the electron gun or magnetic deflection system. To make effective use of this multi-dimensional information requires visualization tools to aid designers in exploring the design space. In this paper, we describe in detail the nature of the interactive graphics tools we have developed. After a brief overview of the optical design process, we sketch the details of the engineering database, which contains the simulation and experimentation results. We then present "general use" visualization tools which can be applied to many engineering fields to manipulate arrays of data. We also describe paradigms for using these tools to perform constrained optimization. Finally, we present a prototype visual searcher for the design space, a tool that allows designers to visually search through the design database and ask "what-if" type questions.
Alternative Views of a Hurricane
Robert E. Marshall, Peter G. Carswell
This paper describes various 3D visualization methods applied to a simulation of a hurricane. The data consists of a 3D grid of six variables over a number of time steps. The previous method used to analyze the data was line printer graphics using asterisks and numeric output, which was of limited value due to the large amount of data produced. The visualization at the Ohio Supercomputer Graphics Project (OSGP) brought together the scientist, animation specialists, and computer scientists to produce 3D animations of the data. The initial step is the conversion of the simulation data into a useful format. The data is then mapped using various methods for graphical display. One method is a traditional polygon representation. The polygons are generated from the relative humidity data and displayed as clouds using a scanline polygon renderer. The second method uses orthogonal ray tracing to volumetrically render multiple variables, such as temperature change vs. relative humidity. The final method uses wind velocity data displayed as particles. The resulting animations have proven very useful in analyzing the data. The polygonal animation revealed errors in the simulation data that were previously overlooked. The volumetric rendering showed relationships between different variables. The particle animation clearly indicated the swirling patterns and the anti-cyclonic outflow generated by the hurricane, a key indicator that the simulation is producing accurate data.
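The particle method can be sketched generically as follows (not the OSGP code): particle positions are stepped through the wind field with forward Euler integration, the simplest scheme that yields the swirling traces described above; the rotational field used here is a stand-in for the simulated hurricane winds.

    import numpy as np

    def advect(particles, velocity_at, dt, steps):
        """particles: (N, 3) positions; velocity_at: callable returning (N, 3) wind
        velocities at those positions (e.g., interpolated from the simulation grid)."""
        path = [particles.copy()]
        for _ in range(steps):
            particles = particles + dt * velocity_at(particles)
            path.append(particles.copy())
        return np.stack(path)               # (steps + 1, N, 3) trajectories for animation

    # Dummy solid-body rotation about the z axis stands in for the hurricane winds.
    swirl = lambda p: np.column_stack((-p[:, 1], p[:, 0], np.zeros(len(p))))
    print(advect(np.array([[1.0, 0.0, 0.0]]), swirl, dt=0.1, steps=5)[-1])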