Proceedings Volume 2409

Stereoscopic Displays and Virtual Reality Systems II


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 30 March 1995
Contents: 8 Sessions, 37 Papers, 0 Presentations
Conference: IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology 1995
Volume Number: 2409

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • New Developments in Stereoscopic Displays Technologies I
  • New Developments in Stereoscopic Displays Technologies II
  • New Developments in Stereoscopic Displays Technologies III
  • Software Issues in Stereoscopic Displays
  • Enabling Technologies I
  • Enabling Technologies II
  • Building Applications I
  • Building Applications II
  • Software Issues in Stereoscopic Displays
  • Building Applications I
  • Enabling Technologies II
New Developments in Stereoscopic Displays Technologies I
Three-dimensional (3D) imaging systems for video communication applications
Michael R. Jewell, Giles R. Chamberlin, Dennis E. Sheat, et al.
In this paper we describe the background to, and our interest in, 3D imaging systems for improved video-telecommunications services. There have been two main thrusts: video-telephony, and improving response in tele-operations or remote-working situations where depth information is vital. We have built prototype demonstrations that offer enhanced presence and are comfortable to view, and these demonstrations have been well received. Such systems have become feasible because of the recent rapid improvements in display technologies. Further improvements in 3D image quality can be anticipated as 2D display technologies continue to develop and as the benefit of these applications translates into user demand. We anticipate a growing requirement to provide the bandwidth for a range of 3D imaging applications operating over telecommunications networks.
Viewpoint-dependent stereoscopic display using interpolation of multiviewpoint images
Akihiro Katayama, Koichiro Tanaka, Takahiro Oshino, et al.
This paper presents a novel approach to autostereoscopic display that shows viewpoint-dependent images according to the viewer's movement. The display system requires no specialized display device; it consists of a computer, a head-tracking device, and an ordinary binocular stereoscopic display. The key point of our approach is the idea that interpolation and reconstruction of multi-viewpoint images can provide a viewer with an unlimited number of images matching his or her smooth movement. After describing the basic concepts, algorithms, and problems of the proposed method, we show very successful experimental results achieved for a real scene.
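The interpolation machinery itself is beyond the scope of an abstract, but the head-tracked selection step it implies is easy to sketch. A minimal sketch, assuming (this is not stated in the abstract) that the reference views were captured along a horizontal track at known x-positions; it picks the pair of views bracketing the tracked head position and a blend weight between them:

```python
import bisect

def select_views(camera_xs, head_x):
    """Pick the two reference views bracketing the tracked head.

    camera_xs : sorted x-coordinates of the captured viewpoints
    head_x    : tracked head x-coordinate in the same units
    Returns (index_a, index_b, t) with t in [0, 1]; the synthesized
    view is reconstructed between views index_a and index_b.
    """
    i = bisect.bisect_right(camera_xs, head_x)
    if i == 0:
        return 0, 0, 0.0                    # left of the capture range
    if i == len(camera_xs):
        return len(camera_xs) - 1, len(camera_xs) - 1, 0.0
    x0, x1 = camera_xs[i - 1], camera_xs[i]
    t = (head_x - x0) / (x1 - x0)           # 0 at view i-1, 1 at view i
    return i - 1, i, t
```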
Head-tracked stereoscopic display using image warping
Leonard McMillan, Gary Bishop
In traditional stereoscopic displays, the virtual 3D object does not appear to be fixed in space as the viewer's head moves. This apparent motion results from the fact that a correct stereo image can only be formed for a particular viewpoint and interpupillary distance. At other viewpoints, our brain interprets the stereo image as a slightly skewed and rotated version of the original. When moving the head, this skewing of the image is perceived as apparent motion of the object. This apparent motion anomaly can be overcome with head tracking. Unfortunately, a head-tracked stereo-display system requires the generation of images from arbitrary viewpoints. This has previously limited the practical use of head-tracked stereo to synthetic imagery. We describe a stereoscopic display system which requires the broadcast of only a stereo pair and sparse correspondence information, yet allows for the generation of the arbitrary views required for head-tracked stereo. Our proposed method begins with a pair of hyper-stereo reference images. From these, a sparse set of corresponding points is extracted. Next, we use image warping and compositing techniques to synthesize new views based on the user's current head position. We show that under a reasonable set of constraints, this method can be used to generate stereo images from a wide range of viewpoints. This technique has several advantages over previous methods. It does not require an explicit geometric description, and thus, avoids the difficult depth-from-stereo problem. We also describe a unique visibility solution which allows the synthesized images to maintain their proper depth relationships without appealing to an underlying geometric description.
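For intuition, here is a minimal sketch (not the paper's implementation) of the visibility idea in the special case of a purely horizontal viewpoint shift: traversing the reference image columns in the right order makes nearer pixels land later, so plain overwriting resolves occlusion without a z-buffer or any geometric description:

```python
import numpy as np

def warp_horizontal(src, disparity, dx):
    """Warp src for a horizontal eye shift; dx > 0 means the eye moved right.

    src       : (H, W) reference image
    disparity : (H, W) per-pixel disparity (larger = nearer)
    dx        : viewpoint shift as a fraction of the reference baseline
    """
    h, w = src.shape
    out = np.zeros_like(src)          # disoccluded holes stay black here
    # Occlusion-compatible ordering: for dx > 0, colliding pixels from
    # farther-right source columns are nearer, so write left-to-right
    # and let later (nearer) pixels overwrite earlier (farther) ones.
    cols = range(w) if dx > 0 else range(w - 1, -1, -1)
    for x in cols:
        xt = np.clip(np.round(x - dx * disparity[:, x]).astype(int), 0, w - 1)
        out[np.arange(h), xt] = src[:, x]
    return out
```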
New autostereoscopic display system
David Ezra, Graham J. Woodgate, Basil Arthur Omar, et al.
This paper presents a new autostereoscopic display system based on conventional Thin Film Transistor Liquid Crystal Display technology giving bright, high quality, full color and high resolution 3D images over a wide viewing range without special glasses. In addition, 3D image look-around and multiple viewers are possible. Methods of obtaining improved image quality are described as well as interfacing with conventional video and computer image generation sources. The system is suitable for a number of professional and domestic 3D applications.
On-the-wall stereoscopic liquid crystal display
Tomohiko Hattori
On-the-wall stereoscopic liquid crystal displays are described that permit a stereo pair to be observed by several persons simultaneously without special glasses. One of the on-the-wall systems is composed of a high-refresh-rate transmissive color liquid crystal plate with a special backlight unit and an infrared illumination and image-taking system. The backlight unit consists of a monochrome 2D display and a convex lens array, and it distributes the light to each viewer's correct eye. This is a time-multiplexed stereoscopic system. Another system consists of a transmissive color liquid crystal plate, whose polarizer must be a micropolarizer array, with a special backlight unit and an infrared illumination and image-tracking system. Here the backlight unit consists of a monochrome 2D liquid crystal display with its polarizing analyzer removed and a convex lens array; again the unit distributes the light to each viewer's correct eye. This is a spatially multiplexed stereoscopic system. These systems make it possible to enlarge the image size and to reduce the thickness of the display.
Autostereoscopic-projection displays
DTI has demonstrated new optical configurations designed to project autostereoscopic images to very large sizes using only one display and projector. They will allow the creation of large (50 cm and greater) immersive high resolution autostereoscopic displays with advanced features like head tracking and/or look around imaging. Previous autostereoscopic projection devices have used multiple displays in combination with multiple projectors, with attendant complexity and expense. The basic technique uses a small LCD to create viewing zones within a single large projection lens, of the type normally used for projection television applications. The lens images the LCD onto a large Fresnel lens. The Fresnel lens in turn re-images the viewing zones into the space in front of it. In this arrangement, the viewing area is limited to roughly twice the size of the lens that is used for projection. Methods used to expand the viewing area will be described.
New Developments in Stereoscopic Displays Technologies II
Three-dimensional projection systems with vertical enhancement
Lowell Noble
The perception of three dimensions (3D) depends upon many cues to the brain. Parallax, shading, focus, differential movement, and stereo (separate images to each eye) all contribute to the perception. To be effective, three-dimensional display systems require separate images for each eye; parallax and the separate views are the primary aids in leading the mind to perceive 3D. Different methods are used to project or produce different images for each eye. The primary methods use shutters and/or polarization to produce the images for each eye. Active glasses using LCD shutters are the typical example of this method. Other current methods use an LCD light rotator over the projector CRT or screen and passive glasses over the eyes to provide different left-eye and right-eye images. The illusion of reality can be created with high-quality 3D images: when the clarity or resolution of the images approaches that of real life, 3D fusing becomes easier to maintain and the illusion of reality becomes more pronounced. We will discuss an enhancement system that sharpens the edges of video images and demonstrate the improved 3D fusing that results.
Stereoscopic imaging via rotation and translation
In some applications, such as industrial inspection, it is often convenient or necessary to generate a stereo pair of images with a single photographic, X-ray, or video camera. The left and right views are created by moving the objects or the camera. This paper discusses this method of stereo image acquisition, illustrates some pitfalls, and shows how to overcome them. Example images are presented from applying the methods to the radiographic inspection of armament and tires.
Three-dimensional (3D) textures using stereo kaleidoscopes
David F. McAllister, Dafan Pang
Kaleidoscopes are normally constructed of three mirrors in a triangular pattern set in a tube. A changing 2D image is set at one end of the tube and observed by a monoscopic viewer at the opposite end. The orientation of the mirrors produces an infinite wallpaper pattern with symmetries described by the algebraic structure known as the dihedral group D3. We show that the kaleidoscope can be used to generate 3D textures in a natural way. We generalize the kaleidoscope to allow binocular viewing, any number of mirrors, warped mirrors, and objects that can move in and out of the tube at various depths. The images are produced using the rendering technique of ray tracing.
Stereoscopic triangulation control of a robot using wide-angle imaging
Walter R. Walsh, Peter Hansen, H. Lee Martin, et al.
Stereoscopic triangulation control of a manipulator can be achieved by using a unique viewing system with two cameras. Knowing the vectors pointing to a target, triangulation can determine the location of the selected target, and this location can then be used as a basis for manipulator control. Camera information is first collected and then manipulated to give valid data. The approach uses a viewing system that allows the user to select objects in the cameras' wide-angle images displayed on dual touch-screen monitors. Knowing the characteristics of the cameras, a triangulation routine can calculate the location of the object in 3D space. The calculated point can then be used by the manipulator control system to move the manipulator to that point. Selecting three points can define a cylindrical volume in space, which can be used as an exclusion zone in which the manipulator is not allowed to maneuver. TRI has integrated the TeleMate and Omniview™ systems to offer triangulation control of a manipulator.
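The abstract does not spell out the triangulation routine; a standard formulation, assuming each camera contributes a ray (a position plus a pointing vector) in a common frame, takes the midpoint of the shortest segment between the two rays, since measured rays rarely intersect exactly:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + s*d1 and p2 + t*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 when the rays are nearly parallel
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel; no unique target point")
    s = (b * e - c * d) / denom      # parameter of closest point on ray 1
    t = (a * e - b * d) / denom      # parameter of closest point on ray 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```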
Three-dimensional optical display with movable intervening medium
Vladimir I. Girnyk, Vitalij N. Kurashov, Yaroslav I. Mihyeyev
This paper considers a 3D optical display with a movable intervening medium (MIM) for representing dynamic 3D stages and presents experimental results on the creation of its basic elements. The MIM implements two important functions in the proposed display architecture: it forms the secondary light source (SLS), the element of 3D stage resolution, and it performs scanning in depth (forming the stage's planes). The dependence of the SLS parameters on the primary source, the MIM structure, and the shape of the MIM surface has been investigated theoretically and experimentally, as well as their connection with the 3D stage parameters and the display architecture as a whole. To optimize the display parameters, investigations were carried out for surface- and volume-scattering diffusers with different statistical parameters. It is shown that using volumetric diffusers permits expanding the directivity diagram to 160 degrees at the cost of decreased SLS brightness, makes it independent of the angle of incidence of the primary beam, and increases the resolution and improves the conditions of stage perception by reducing speckle. Calculations have been made of the non-linear distortions of 3D stages formed by the MIM, and the optimal shape of the MIM surface has been chosen.
New Developments in Stereoscopic Displays Technologies III
Simplification of infrared illumination of stereoscopic liquid crystal TV
Yoko Nishida, Tomohiko Hattori, Shigeru Omori, et al.
Stereoscopic, multi-parallax, electro-holographic, and multi-planar methods are glasses-free approaches to real-time 3D imaging. All of these methods except the stereoscopic one need several parallax images or several plane images as their 3D image components, and capturing and transmitting such 3D images is known to present many problems. A stereoscopic method using a lenticular sheet, on the other hand, limits the positions of the viewers and/or makes it impossible for several persons to observe the 3D image simultaneously. Our conventional method, the stereoscopic liquid crystal TV, has none of the above drawbacks, but it is difficult to enlarge its 3D output screen, and the infrared illumination system for the observers and the image-taking system of the TV still need improvement because of the characteristics of the system components. Two infrared TV cameras and two infrared lamps are necessary for the conventional method. Here we present new methods that require only one infrared TV camera and one infrared lamp.
Prototype flat panel hologram-like display that produces multiple perspective views at full resolution
Over the past two years, DTI has developed technology under an SBIR program designed to create advanced autostereoscopic hologram-like displays yielding multiple full-resolution perspective images that can be viewed passively by multiple observers across a wide area. The first prototype display of this type was completed in 1994. It demonstrated three key technologies necessary for the practical embodiment of an advanced commercial flat panel autostereoscopic display: (1) a fast surface-mode LCD capable of displaying 180 images per second with many gray levels; (2) an interlaced light-line illumination system that makes different images visible from different regions of space in front of the display and allows flicker-free imaging at nearly 30 fps; and (3) a controller designed to accept perspective images in standard formats, interlace them, and display them on the LCD. These technologies are demonstrated on an 800 × 400 LCD with 32 true gray shades, yielding up to six perspective views every 33 ms; at half resolution it would be possible to generate twelve views. The system is driven by an entry-level workstation and could also accept input from multiple cameras, given the right interface. Results of the project and plans for the future will be discussed.
Novel low-cost 2D/3D switchable autostereoscopic system for notebook computers and other portable devices
Mounting a lenticular lens in front of a flat panel display is a well-known, inexpensive, and easy way to create an autostereoscopic system. Such a lens produces half-resolution 3D images because half the pixels on the LCD are seen by the left eye and half by the right eye. This may be acceptable for graphics, but it makes full-resolution text, as displayed by common software, nearly unreadable. Very fine alignment tolerances normally preclude the possibility of removing and replacing the lens in order to switch between 2D and 3D applications. Lenticular lens based displays are therefore limited to use as dedicated 3D devices. DTI has devised a technique which removes this limitation, allowing switching between full-resolution 2D and half-resolution 3D imaging modes. A second element, in the form of a concave lenticular lens array whose shape is exactly the negative of the first lens, is mounted on a hinge so that it can be swung down over the first lens array. When so positioned, the two lenses cancel optically, allowing the user to see full-resolution 2D for text or numerical applications. The two lenses, having complementary shapes, naturally tend to nestle together and snap into perfect alignment when pressed together, thus obviating any need for user-operated alignment mechanisms. This system represents an ideal solution for laptop and notebook computer applications. It was devised to meet the stringent requirements of a laptop computer manufacturer, including very compact size, very low cost, little impact on existing manufacturing or assembly procedures, and compatibility with existing full-resolution 2D text-oriented software as well as 3D graphics. Similar requirements apply to high-end electronic calculators, several models of which now use LCDs for the display of graphics.
Electronic capture and display of full-parallax 3D images
Michael Brewin, Matthew C. Forman, Neil A. Davies
Integral imaging is a method of recording full-parallax 3D images which may offer an alternative to stereoscopic techniques, such as multi-view systems, for domestic 3D-TV. Previous understanding of integral images indicates that the required recording and display resolution is in excess of that available in current electronic imaging systems. This, in conjunction with the difficulty of manufacturing high-quality micro-optical arrays, has precluded widespread research into integral imaging. An advanced form of integral image generation which has been previously described is re-examined with respect to new experimental evidence, which suggests that the resolution requirements are far less stringent than currently believed. An outline of a possible integral-3DTV system is given, and the novel application of existing compression schemes is described, showing that the bandwidth requirements for integral 3DTV are the same as for HDTV.
New three-dimensional visualization system based on angular image differentiation
Juan D. Montes, Pascual Campoy
This paper presents a new auto-stereoscopic system capable of reproducing static or moving 3D images by projection, with horizontal parallax or with both horizontal and vertical parallax. The working principle is based on the angular differentiation of the images, which are projected onto the back side of the new patented screen. The most important features of this new system are: (1) images can be seen with the naked eye, without glasses or any other aid; (2) the 3D view angle is not restricted by the angle of the optics making up the screen; (3) fine tuning is not necessary, independently of the parallax and of the size of the 3D view angle; (4) coherent light is not necessary either for capturing the image or for reproducing it, only standard cameras and projectors; (5) since the images are projected, the size and depth of the reproduced scene are unrestricted; (6) manufacturing cost is not excessive, owing to the use of optics of large focal length, the lack of fine tuning, and the use of the same screen for several reproduction systems; (7) the technology can be used with any projection system: slides, movies, video projectors, and so on. A first prototype for static images has been developed and tested, with a 3D view angle of 90 degrees and photographic resolution over a planar screen 900 mm in diagonal. Present developments have achieved a dramatic reduction in the size and cost of the projection system, and work is under way in parallel on a prototype for 3D moving images.
Calibration system for a new 3D autostereoscopic device based on angular differentiation
Miguel A Lazaro, Juan D. Montes, Pascual Campoy, et al.
This paper presents the calibration of a new autostereoscopic system, called the realvisor, in which the image seen at each point of the screen depends on the angle from which that point is viewed. In the developed prototype, 40 images of the same 3D scene, projected through 8 objectives and 5 separate flat mirrors, must match one another on the back of the screen. To obtain the 3D effect, the corresponding points of each image must be projected onto the same place on the screen. This cannot be achieved without calibrating the whole system and then correcting the images for the distortions they suffer in two stages: (1) non-linear deformations produced by the film register and (2) a lack of optical and mechanical accuracy in the manufactured system. The methodology used to correct this non-uniform deformation of the pixels within each image is based on computing one table per image that records the correspondence between the pixels of the ideal image and those of the distorted one on the screen. These tables calibrate the system and are used to correct all the mentioned distortions. The technique has given excellent results with the first prototype, achieving a matching accuracy of better than 1 mm on the screen, enough to obtain visual concordance and the consequent 3D effect.
Software Issues in Stereoscopic Displays
Geometry of binocular imaging II: the augmented eye
Victor S. Grinberg, Gregg W. Podnar, Mel Siegel
We address the issue of creating imagery on a screen that, when viewed by naked human eyes, will be indistinguishable from the original scene as viewed through a visual accessory. Visual accessories of interest include, for example, binoculars, stereomicroscopes, and binocular periscopes. It is the nature of these magnifying optical devices that the transverse (normal) magnification and longitudinal (depth-wise) magnification are different. That is why an object viewed through magnifying optical devices looks different from the same object viewed with the naked eye from a closer distance--the object looks `squashed' (foreshortened) through telescopic instruments and the opposite through microscopic instruments. We rigorously describe the quantitative relationships that must exist when presenting a scene on a screen that stereoscopically simulates viewing through these visual accessories.
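As a back-of-envelope illustration (not the paper's rigorous treatment), the `squashing' can be seen from the standard stereoscopic depth relation: an afocal instrument of angular magnification M multiplies both angular subtense and binocular disparity by M, so for interocular separation I and true distance D,

```latex
% Back-of-envelope, not the paper's derivation: stereoscopic depth
% perceived from a disparity \delta at distance D with interocular I:
\Delta z \approx \frac{D^{2}\,\delta}{I}.
% The instrument scales subtense and disparity by M and the perceived
% (vergence) distance by 1/M, so
\Delta z' \approx \frac{(D/M)^{2}\,(M\delta)}{I} = \frac{\Delta z}{M},
% while apparent transverse size (D/M)\cdot(M\theta) = D\theta is
% unchanged -- hence the M-fold depth foreshortening described above.
```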
Algorithm for dynamic disparity adjustment
Colin Ware, Cyril Gobrecht, Mark Paton
This paper presents an algorithm for enhancing stereo depth cues in moving computer-generated 3D images. The algorithm incorporates the results of an experiment in which observers set their preferred eye separation for a set of moving scenes. The data derived from this experiment were used to design an algorithm for the dynamic adjustment of eye separation (or disparity) depending on the scene characteristics. The algorithm has the following steps: (1) determine the near and far points in the computer graphics scene to be displayed, by sampling the Z buffer; (2) scale the scene about a point corresponding to the midpoint between the observer's two eyes, with the scaling factor calculated so that the nearest part of the scene comes to be located just behind the monitor; (3) adjust an eye-separation parameter to create stereo depth according to the empirical function derived from the initial study. This has the effect of doubling the stereo depth in flat scenes while limiting the stereo depth in deep scenes. Steps 2 and 3 both reduce the discrepancy between focus and vergence for most scenes. The algorithm is applied dynamically in real time, with a damping factor so that the disparities never change too abruptly.
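A minimal sketch of one frame of this adjustment, with the Z-buffer sampling (step 1) done by the caller; the empirical separation function f and the damping constant are stand-ins, not the authors' measured values:

```python
def adjust_disparity(z_near, z_far, monitor_dist, prev_sep, f, damping=0.1):
    """One frame of dynamic disparity adjustment.

    z_near, z_far : nearest/farthest depths sampled from the Z buffer
    monitor_dist  : viewer-to-screen distance in scene units
    prev_sep      : eye separation used on the previous frame
    f             : empirical map from scene depth range to eye separation
    """
    # Step 2: scale the scene about the midpoint between the eyes so
    # the nearest geometry sits just behind the monitor plane.
    scale = monitor_dist / z_near
    depth_range = (z_far - z_near) * scale
    # Step 3: look up the preferred eye separation for this depth range.
    target_sep = f(depth_range)
    # Damping: never let the disparities change too abruptly.
    new_sep = prev_sep + damping * (target_sep - prev_sep)
    return scale, new_sep
```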
Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment
Jean-Philippe Gay
`reality present: Peter Gabriel and Cirque du Soleil' is a 12-minute original work directed and produced by Doug Brown, Jean-Philippe Gay, and A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of two major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program had its world premiere before a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993, and it was presented to the artists in Los Angeles, Montreal, and Washington, D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.
Three-dimensional (3D) stereoscopic X windows
Scott A. Safier, Mel Siegel
All known technologies for displaying 3D-stereoscopic images are more or less incompatible with the X Window System. Applications that seek to be portable must support the 3D-display paradigms of multiple hardware implementations of 3D-stereoscopy. We have succeeded in modifying the functionality of X to construct generic tools for displaying 3D-stereoscopic imagery. Our approach allows for experimentation with visualization techniques and techniques for interacting with these synthetic worlds. Our methodology inherits the extensibility and portability of X. We have demonstrated its applicability in two display hardware paradigms that are specifically discussed.
Double-buffering technique for binocular imaging in a window
Jeffrey S. McVeigh, Victor S. Grinberg, Mel Siegel
Binocular digital imaging is a rapidly developing branch of digital imaging. Any such system must have some means of allowing each eye to see only the image intended for it. We describe a time-division multiplexing technique that we have developed for Silicon Graphics Inc. (SGI™) workstations. We utilize the `double buffering' hardware feature of the SGI™ graphics system for binocular image rendering. Our technique allows multiple, re-sizable, full-resolution stereoscopic and monoscopic windows to be displayed simultaneously. We describe corresponding software developed to exploit this hardware, which contains user-controllable options for specifying the most comfortable zero-disparity plane and effective interocular separation. Several perceptual experiments indicate that most viewers perceive 3D comfortably with this system. We also discuss the speed and architecture requirements the graphics and processor hardware must meet to provide flickerless stereoscopic animation and video with our technique.
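A minimal sketch of the time-division multiplexing loop; the three callables are hypothetical stand-ins for the platform's draw, buffer-swap, and vsync entry points, not SGI's actual API:

```python
def stereo_frame(scene, draw_view, swap_buffers, wait_for_vsync,
                 eye_sep, zero_plane):
    """One field-sequential stereo frame using double buffering.

    draw_view, swap_buffers, wait_for_vsync : platform stand-ins
    eye_sep    : effective interocular separation
    zero_plane : distance of the user's chosen zero-disparity plane
    """
    for eye in ("left", "right"):
        # Offset the camera by half the interocular separation; the
        # projection is sheared so zero_plane shows zero on-screen disparity.
        offset = -eye_sep / 2 if eye == "left" else eye_sep / 2
        draw_view(scene, camera_offset=offset, convergence=zero_plane)
        swap_buffers()       # rendered back buffer becomes visible...
        wait_for_vsync()     # ...in lockstep with the shutter eyewear
```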
Photogrammetric determination of the location and orientation of a group of cameras for a perspective transformation on a new autostereoscopic display
Francisco Penafiel, Jesus Gomez, Juan D. Montes, et al.
This paper presents a methodology for determining the spatial location and orientation of a multi-viewpoint image set (MVIS) in relation to an absolute coordinate system, for projection on a new autostereoscopic display based on the angular image differentiation system patented by Montes. The different points of view from which a real scene is acquired are usually completely unknown a priori. To determine the camera locations and orientations, two algorithms based on photogrammetric techniques are applied. The first, named `numerical plotting of a photo pair', consists of calculating the relative orientation of two different photographs of the scene. The second, named `resection in space', takes into account the projection of the 3D points onto the rest of the photographs to determine their absolute location and orientation. Once the absolute location and orientation of each image of the MVIS is known, a perspective correction is needed before projection, because perspective deformations of the images can introduce visual distortions noticeable to an observer. For this purpose, a backward warping transformation is applied to each image depending on the positions of both the acquisition coordinate system, calculated through the algorithms above, and the reproduction coordinate system.
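A backward warp evaluates, for every destination pixel, where that pixel came from in the source image and samples there, avoiding the holes a forward mapping would leave. A minimal sketch with nearest-neighbor sampling; the mapping function is a stand-in for the perspective correction derived from the two photogrammetric algorithms:

```python
import numpy as np

def backward_warp(src, mapping):
    """mapping(ys, xs) -> (sy, sx): source coords for each destination pixel."""
    h, w = src.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy, sx = mapping(ys, xs)                 # where each pixel came from
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)  # nearest-neighbor;
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)  # bilinear in practice
    return src[sy, sx]
```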
Enabling Technologies I
Video support platform for virtual environment applications
Jim Humphries, Sam Eriskin, Joe D. Deardon, et al.
In earlier work at the NASA/Ames Research Center there was a need to develop a standard hardware platform for supporting multiple Virtual Environment display systems. Besides providing the electrical interface between different display types and various graphics systems, the platform was also required to support common auxiliary functions that were not otherwise easily implemented. Examples of these auxiliary functions include interfacing with video camera systems, recording and playback of stereo video on conventional equipment (including portable recorders), gray-level calibration signal generation for display setup, generating alignment patterns for interpupillary distance adjustment, and image reversing for `mirrored' image correction. The platform concept evolved through several iterations into a bus-oriented, modular system employing a standard P1 VME backplane and a suite of plug-in function modules. Six systems were constructed between 1987 and 1988, and most are still in use. The platform design details (including schematics and fabrication drawings) have been made publicly available through the NASA/Ames Office of Technology Utilization. In surveying display system offerings from the proliferating list of current manufacturers, there seem to be no commercially available equivalents to this `standard' platform. In the belief that such a platform would be of use within the Virtual Reality community, this paper describes the platform functions, their rationale, and the general electronics associated with implementing them.
Diffractive optics for head-mounted displays
Diffractive optics have the potential to play a key role in several areas of head mounted displays. They can reduce size and weight while providing some unique optical functions that would be difficult to implement with conventional refractives. There are four areas in which diffractive optics may contribute: Magnifier optics, combiner optics, head and hand tracking, and optical data interface. This paper is primarily concerned with the introduction of a new image combiner element based on Babinet's principle.
Alternative display and interaction devices
Mark T. Bolas, Ian E. McDowall, R. X. Mead, et al.
While virtual environment systems are typically thought to consist of a head-mounted display and a flex-sensing glove, alternative peripheral devices are beginning to be developed in response to application requirements. Three such alternatives are discussed: fingertip-sensing gloves, fixed stereoscopic viewers, and counterbalanced head-mounted displays. A subset of commercial examples that highlight each alternative is presented, as well as a brief discussion of interesting engineering and implementation issues.
Role of computer vision in augmented virtual reality
Rajeev Sharma, Jose Molineros
An important issue in augmented virtual reality is making the virtual world sensitive to the current state of the surrounding real world as the user interacts with it--changing gaze, manipulating an object, etc. For providing the right virtual stimulus at the right position and time, the system needs some sensor to interpret the surrounding scene. Computer vision holds great potential in providing the necessary interpretation of the scene. We present the preliminary design of a computer vision-based augmented reality system for helping a human in assembling an industrial part from its components. The context of assembly helps in keeping the computer vision task simple by exploiting the geometric model of the assembly components for recognition and pose estimation. The augmentation stimuli include labeling of objects in the scene, helping with sequencing using an assembly planner, visualization of assembly at different stages, handling errors by the human operator, etc. Such a system would have potential applications in assembling complex parts, maintenance, and education. We will present an overview of the design of the system and discuss some of the issues involved in computer vision-based augmented reality.
Enabling Technologies II
Bridge between developers and virtual environments: a robust virtual environment system architecture
Rudolph P. Darken, Cynthia Tonnesen, Kimberly Passarella-Jones
A new approach is presented to the problems associated with designing and implementing virtual environment applications, one which supports robust methods of describing and building virtual worlds and interaction techniques. Despite the functional similarities between virtual-world and graphical-user-interface design, virtual world designers are hindered by a lack of the low-level support that user interface designers typically get from a toolkit or application framework. Furthermore, the well-defined set of standard interaction techniques and devices established for the desktop metaphor does not exist in the virtual environment arena. The importance of shielding the programmer from the low-level details of rendering, network communication, and device software is widely recognized. However, most off-the-shelf virtual reality systems provide only a device-polling mechanism through which the entire interface must be built; polling forces the programmer to delve into the low-level details of each device for every interaction technique to be used. We are currently developing a system called the Bridge that is designed to meet the needs and requirements of highly interactive virtual environment applications. The distributed architecture of the Bridge clearly separates the interaction from the application and supports a more efficient, event-driven model. High-level descriptions of user-computer dialogues can be easily constructed, allowing modification of interaction techniques and styles.
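A minimal sketch (not the Bridge itself) of the basic move from polling to events: a wrapper thread polls a device, and registered interaction callbacks fire only when the state changes, so application code never touches device details:

```python
import threading
import time

class DeviceEvents:
    """Wrap a poll-only device in an event-driven interface."""

    def __init__(self, poll_fn, rate_hz=60):
        self.poll_fn = poll_fn            # device-specific polling call
        self.period = 1 / rate_hz
        self.callbacks = []
        self.last = None
        threading.Thread(target=self._loop, daemon=True).start()

    def subscribe(self, fn):
        """Register an interaction callback; fn receives the new state."""
        self.callbacks.append(fn)

    def _loop(self):
        while True:
            state = self.poll_fn()
            if state != self.last:        # edge-triggered: changes only
                for fn in self.callbacks:
                    fn(state)
                self.last = state
            time.sleep(self.period)
```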
Increased productivity through Modeltime behaviors
Paul Mlyniec, Daniel Mapes
Recent attention has been focused on the assignment of behaviors to graphical entities. Most of these behaviors are targeted for the runtime environment. By defining modeltime™ behaviors--behaviors whose sole purpose is to help with the task of modeling and whose scope is limited to the modeling environment itself--great gains in productivity can be realized in rapid assembly tasks.
Time-realistic 3D computer graphics (CG) simulator sight
Hiroshi Kamada, Katsuhiko Hirota, Kaori Suzuki, et al.
We propose a new `time-realistic' computer graphics (CG) simulator using real-time CG in four dimensions, space plus time, to solve two fundamental problems of conventional real-time CG systems. First, we implemented a realistic time function to solve the problem of jerky displays. The function is based on a dynamic time-control method which maps the CG generation time to actual real-world time. Each CG frame is displayed using a prediction technique: we can predict the next CG display time from the current CG display time, since successive frames are coherent. Second, we implemented a realistic view-manipulation function to allow the user to manipulate the view intuitively. The function is based on a dynamic view-control method in which the view is controlled according to the identity of CG objects, such as a wall or a floor; the identity can be defined interactively using new `walk-through attributes'. We developed `Sight', a 3D CG simulator based on the above two functions, and evaluated time-realistic CG in an experimental CG world. We show that realistic time has been achieved, since we reduced the time difference to 1/33 that of conventional real-time CG. Realistic view manipulation has also been achieved, since any user can easily walk through the CG environment without passing through walls and floors.
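A minimal sketch of the dynamic time-control idea as described, with render standing in for the CG pipeline: each frame is drawn at its predicted display time rather than the current time, and the prediction is updated using frame-to-frame coherency:

```python
import time

def run(world, render, smoothing=0.8):
    """Display loop that ties animation time to real-world time."""
    predicted = 1 / 30                   # initial frame-time guess, seconds
    while True:
        start = time.monotonic()
        # Draw the world at the moment this frame will actually appear,
        # not at the moment rendering starts.
        render(world, at_time=start + predicted)
        elapsed = time.monotonic() - start
        # Frame-to-frame coherency: blend the measured generation time
        # into the prediction for the next frame.
        predicted = smoothing * predicted + (1 - smoothing) * elapsed
```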
Building Applications I
Gloveless interface for interaction in scientific visualization virtual environments
Mark Ferneau, Jim Humphries
The possibility of natural interaction within a Virtual Environment is one of the most powerful aspects of VR in the ongoing evolution of the man/machine interface. In simulation applications, where some aspect of the real world is artificially recreated, more or less natural dexterous interaction is available by means of instrumented gloves (direct manipulation of virtual objects replaces the abstraction of interaction via keyboards and text commands). In scientific visualization applications, however, we face a conundrum in trying to provide a more natural interface to highly abstract data. Although scientists and engineers develop perspective and intuition through their real-world experiences, few have had any visceral or physical experience that maps naturally to the mathematical manipulation of unseen forces; for scientists and engineers, instrumentation is often the natural interface. To provide an effective interface for these applications, we have developed a Virtual Instrument (VI) that can replace the glove. This VI is loosely based on the Virtual Tricorder concept developed by Henry Sowizral of Boeing. A prototype Virtual Instrument was successfully demonstrated by Sterling at NASA Goddard Space Flight Center.
Three-dimensional (3D) object manipulation techniques: immersive versus nonimmersive interfaces
Daniel Mapes, Paul Mlyniec
Identifying applications which are appropriate to the higher-performance but higher-cost virtual environment (VE) interface is a non-trivial problem. A VE application should not only demonstrate better cost/performance than its non-immersive windows-and-mouse (WM) based alternatives; it must also address the time and effort required of the end user in becoming immersed. Identifying promising problem domains requires clearly understanding the theoretical advantages of the VE interface as well as the hardware specifications necessary to best implement those advantages. This paper identifies a series of common tasks requiring varying degrees of viewpoint movement, object selection, and manipulation, and subjectively compares theoretical implementations between a VE interface having two depth cursors and a WM interface. Each task is intended to highlight fundamental operations where WM performance begins to degrade, where increased command sets lead to loss of generality, or where there is a justifiable requirement for presence which WM cannot provide. These performance issues, balanced against the current realities of VE technology, are used to suggest the point at which solutions in certain problem domains should move to a VE implementation and when they should remain WM-based.
Building Applications II
Recent developments in virtual experience design and production
Today, the media of VR and Telepresence are in their infancy and the emphasis is still on technology and engineering. But, it is not the hardware people might use that will determine whether VR becomes a powerful medium--instead, it will be the experiences that they are able to have that will drive its acceptance and impact. A critical challenge in the elaboration of these telepresence capabilities will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location or experience. This paper will describe the recent development of several Virtual Experiences including: `Menagerie', an immersive Virtual Environment inhabited by virtual characters designed to respond to and interact with its users; and `The Virtual Brewery', an immersive public VR installation that provides multiple levels of interaction in an artistic interpretation of the brewing process.
Bar code hotel: diverse interactions of semi-autonomous entities under the partial control of multiple operators
In this paper I describe an interactive installation that was produced in 1994 as one of eight Art and Virtual Environments projects sponsored by the Banff Center for the Arts. The installation, Bar Code Hotel, makes use of a number of strategies to create a casual, social, multi-person interface. Among the goals was to investigate methods that would minimize any significant learning curve, allowing visitors to immediately interact with a virtual world in a meaningful way. By populating this virtual world with semi-independent entities that could be directed by participants even as these entities were interacting with each other, a rich and heterogeneous experience was produced in which a variety of relationships between human participants and virtual objects could be examined. The paper will describe some of the challenges of simultaneously processing multiple input sources affecting a virtual environment in which each object already has its own ongoing behavior.
Software Issues in Stereoscopic Displays
Stereoscopic computer graphics for ultrasonic medical data
Isabelle R. Dautraix, Isabelle E. Magnin
Stereo-echography is a promising visualization technique allowing restitution of the relief of 3D ultrasonic medical data. In this paper, the feasibility of this technique is investigated. Stereo-echograms are computer-generated images; an original algorithm, based on the collinearity equations of photogrammetry and on projection of the volume of voxels, has been developed. A great advantage of stereo-echography is the possibility of optimizing the stereoscopic parameters for both acquisition and display. Finally, stereo-echography has been tested on actual 3D ultrasonic data, and the visual restitution of the relief gives satisfying results.
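A minimal sketch, assuming the collinearity equations are folded into two 3x4 camera matrices (one per eye) chosen for the acquisition and display geometry; each voxel is projected and additively splatted into the corresponding image:

```python
import numpy as np

def splat(points, intensities, P, shape):
    """Project 3D voxel centers through camera matrix P and accumulate.

    points      : (N, 3) voxel center coordinates
    intensities : (N,) echographic intensity of each voxel
    P           : (3, 4) camera (collinearity) matrix for one eye
    shape       : (H, W) of the output image
    """
    img = np.zeros(shape)
    pts = np.hstack([points, np.ones((len(points), 1))])
    uvw = pts @ P.T                              # homogeneous image coords
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (0 <= u) & (u < shape[1]) & (0 <= v) & (v < shape[0])
    np.add.at(img, (v[ok], u[ok]), intensities[ok])   # additive splat
    return img

# stereo pair: left = splat(pts, vals, P_left, hw), right likewise
```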
Building Applications I
Embedding the 2D interaction metaphor in a real 3D virtual environment
Ian G. Angus, Henry A. Sowizral
Recent advances in both rendering algorithms and hardware have brought virtual reality to the threshold of being able to realistically model complex environments, e.g., the mockup of a large structure. As impressive as these advances have been, there is still little that a user can do within a VR system other than look--and react. If VR is to be usable in a design setting, users will need to be able to interact: to control and modify their virtual environments using virtual tools inside those environments. In this paper we describe a realistic virtual computer/personal digital assistant that we have built. To the user this virtual computer appears as a hand-held flat panel display; input can be provided to it by using a virtual finger or stylus to `touch' the screen. We migrate applications developed for a flat-screen environment into the virtual environment without modifying the application code. A major strength of this approach is that we meld the naturally 3D interaction metaphor of a hand-held virtual tool with the software support provided by some 2D user interface toolkits. Our approach has given us great flexibility in both designing and implementing user interface components for VR environments, and it has enabled us to represent the familiar flat-screen human-computer interaction metaphor within the VR context. We will describe some applications which made use of this capability.
Enabling Technologies II
Virtual Sensors
Henry A. Sowizral
Virtual sensors are a software abstraction that permits application programs to have a simple and consistent view of input devices. Using virtual sensors we can substitute devices at any time before or during an application's execution. Additionally, virtual sensors allow an application to treat computationally constructed data as if it came from a real device. Examples of such constructed device data include filtered data, data fused from two or more detectors, and phantom device data produced by combining data from real device inputs under a set of constraints.
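A minimal sketch of the abstraction, with illustrative class names: the application reads through one interface, behind which can sit a real device, a filter, or a fusion of detectors, so substituting one for another requires no application change:

```python
class VirtualSensor:
    """Uniform view of an input source for application programs."""
    def read(self):
        raise NotImplementedError

class DeviceSensor(VirtualSensor):
    """Adapter over a real device driver."""
    def __init__(self, device):
        self.device = device
    def read(self):
        return self.device.poll()        # whatever the driver returns

class FilteredSensor(VirtualSensor):
    """Computationally constructed data: low-pass filter over a sensor."""
    def __init__(self, source, alpha=0.2):
        self.source, self.alpha, self.state = source, alpha, None
    def read(self):
        x = self.source.read()
        self.state = x if self.state is None else (
            self.alpha * x + (1 - self.alpha) * self.state)
        return self.state

class FusedSensor(VirtualSensor):
    """Fuse two detectors, here by simply averaging their estimates."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def read(self):
        return 0.5 * (self.a.read() + self.b.read())
```

An application holding a VirtualSensor reference cannot tell which variant it is reading, which is what lets devices be swapped before or during execution.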