Proceedings Volume 3012

Stereoscopic Displays and Virtual Reality Systems IV


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 15 May 1997
Contents: 12 Sessions, 58 Papers, 0 Presentations
Conference: Electronic Imaging '97, 1997
Volume Number: 3012

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Human Factors and Evaluation of Stereoscopic Displays
  • Stereoscopic Camera Systems
  • Stereoscopic Image Generation
  • Autostereoscopic Displays
  • Stereoscopic Image Formats and Compression Methods
  • New Developments in Stereoscopic Displays
  • Applications of Stereoscopic Displays
  • Poster Session
  • Creation and Evaluation of Virtual Environments
  • Immersive Displays
  • Augmented Reality/Medical Applications
  • Viewpoints on VR
Human Factors and Evaluation of Stereoscopic Displays
How hyperstereopsis can improve the accuracy of spatial perception: an experimental approach
D. E. Sipes, V. Grayson CuQlock-Knopp, Warren Torgerson, et al.
It has been shown that people consistently underestimate distances between objects in the depth direction as compared to the lateral direction. This study examined the use of artificially enhanced stereopsis (hyperstereopsis) in judging relative distances. The data showed that doubling interocular distance by means of a telestereoscope reduced the illusory compression of depth: subjects who viewed the scene without the telestereoscope averaged a depth compression of 0.28. Subjects who used the telestereoscope yielded an average compression of 0.40. Individual verbal self-reports of depth compression effects were unreliable, pointing out the value of quantitative experimental methods.
Comparison of a new glasses-free three-dimensional screen, a passive-glasses three-dimensional screen, and a two-dimensional imaging system for use in laparoscopic surgery
Pedram Salimpour, Cadence A. Kim M.D., Wayne LaMorte, et al.
New video imaging technologies have significantly advanced the development of minimally invasive surgical and laparoscopic procedures. The next step in this evolution, the advent of more complex procedures performed under minimally invasive conditions, creates a greater need for accurate depth perception; further improvements in imaging technology as well as instrumentation are needed for the surgeon to perform difficult manipulative tasks with the same skill, accuracy, and speed as in open surgery. Two different techniques are currently available to produce a 3-dimensional image: the 'with glasses' technique and the 'glasses-free' technique. The purpose of this experiment was twofold: first, to objectively compare the 3-D images created by the 'glasses-free' monitor, the passive-glasses 3-D system, and a 2-D monitor; second, to subjectively assess the quality of each screen as perceived by the operator.
Autostereoscopic display for radiotherapy planning
Roger J. Hubbold, David J. Hancock, Christopher J. Moore
The work described here forms part of a program of research into the application of direct volume rendering methods to the visualization of radiation therapy plans. The full program covers a number of related topics, including the investigation of fast parallel algorithms for interactive rendering, the design of special protocols for communication between low-cost workstations and a parallel computer used to store data and generate images, and research into direct manipulation techniques for interaction with volume data. In this paper, however, we focus on the difficulties of visualizing 3D volume data, and we report the results of preliminary experiments designed to evaluate the utility of stereoscopic displays for this purpose. The remainder of the paper is organized as follows. In the next section we outline the specific problem we are trying to solve: the interactive visualization and manipulation of radiation therapy plans. We briefly introduce direct volume rendering methods, outlining the approach we have taken and the reasons for it. Next, we address some of the problems which arise, and how stereoscopic displays may help to alleviate them. We then describe our experiments and their results, concluding with a discussion of their significance.
Stereoscopic display using multimedia and depth sense test
Jiaxin Wang, Zaixing Zhang, Peifa Jia, et al.
A vision telepresence system was built in our lab, including stereoscopic display on a CRT and on a spherical screen. Two images from two cameras are sent to two multimedia video cards (grand-video cards), Ml and Mr, which convert the video signals from the two cameras into RGB signals in their memories. The field synchronization signal of a VGA card is used as a switch that takes the two RGB signals from Ml and Mr alternately and sends them to a CRT display or to a projector for the spherical screen. Because the camera images are written into the multimedia cards at a lower field frequency but read out and shown on the CRT at a higher one, the display can run at a higher field frequency and flicker is reduced. To test the depth sense of the stereoscopic display, an experiment was conducted in our lab to check how well subjects could judge the depth of objects using the stereoscopic display. The test results support several conclusions: (1) the stereo-display mode (SDM) with a 124 mm camera separation is worse (has larger depth error) than the naked-eye mode (NEM), but the SDM with a 226 mm camera separation is better than the NEM; (2) the 20 m object distance produces larger depth errors than the 10 m object distance, and the 30 m object distance produces very large depth errors.
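The reported dependence of depth error on camera separation and object distance is consistent with standard stereo geometry, where depth uncertainty grows quadratically with distance and shrinks with baseline. A hedged sketch using the paper's baselines and distances but an assumed focal length and disparity error (these are illustrative values, not the system's parameters):

```python
def depth_uncertainty(z_m, baseline_m, focal_px, disparity_err_px=1.0):
    """Approximate stereo depth uncertainty: dz ~ z^2 * dd / (b * f).

    z_m: object distance (m); baseline_m: camera separation (m);
    focal_px: focal length in pixels; disparity_err_px: disparity error (px).
    """
    return (z_m ** 2) * disparity_err_px / (baseline_m * focal_px)

for baseline_mm in (124, 226):      # camera separations from the paper
    for z_m in (10, 20, 30):        # object distances from the paper
        dz = depth_uncertainty(z_m, baseline_mm / 1000.0, focal_px=800.0)
        print(f"b={baseline_mm} mm, z={z_m} m -> depth error ~ {dz:.1f} m")
# Larger baseline -> smaller error; larger distance -> larger error,
# matching the qualitative trend of the reported conclusions.
```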
Stereoscopic layout of a perspective flight guidance display
Matthias Hammer, Stephan K. M. Muecke, Udo Mayer
Analyses of aviation accidents ascribe about 75% of all incidents to human (pilot) behavior. A strong effort is being made to improve ergonomic cockpit layout, because of dramatic changes in the airspace structure, the increase in air traffic, and larger aircraft. One part of an interdisciplinary research project investigates the potential of stereoscopic flight-guidance displays in order to improve pilots' situation awareness. This experimental approach, which aims to research and apply ergonomic design recommendations for stereoscopic flight displays, is based upon a new type of perspective flight-guidance display. The examination of existing research regarding stereoscopic flight displays reveals a lack of basic knowledge, as well as a need for further systematic research into cockpit application. Thus the project contains experiments on different levels of abstraction, ranging from classic parameter experiments to flight simulator tests. Both current knowledge and recent discoveries are applied to superimposed 2-D flight parameters and to real and synthetic 3-D elements, such as a perspective landscape, other airplanes or flight prediction. The stereoscopic layout takes into consideration specific informational needs within different flight phases and is evaluated by means of pilot performance and pilot strain. Selected symbols of the flight guidance display and actual results are presented as examples of the research approach.
Evaluation of a 3D autostereoscopic display for telerobotic operations
Ben C. Lee, Mark E. Katafiaz
Current subsea and space-based telerobotic operations rely upon multiple 2D camera views or specially designed targets to guide the teleoperator in positioning a robot. In many remote environments, it is not feasible or is too costly to set up the required camera views or to supply and position the necessary robot targets. Three-dimensional displays may provide the solution to these problems. In March of 1996, an autostereoscopic display designed by Dimension Technologies Inc. was evaluated in Oceaneering Space Systems' Robotic Testing and Integration Laboratory (RTAIL). The display was integrated into a robot workstation, and test operators evaluated the display by using it to perform basic telerobotic tasks similar to tasks planned for the NASA Space Station. Results of the testing showed that the use of the autostereoscopic display improved telerobotic task performance by reducing perceived task complexity and improving task times. Using this display should reduce or may even eliminate the need for auxiliary camera views and targets. In addition, teleoperators will be able to perform tasks that would normally be considered too difficult due to the lack of adequate camera views. However, issues related to image ghosting and screen resolution need to be addressed before the full benefits of this system can be realized. This paper details the methodology and results of our evaluation of this autostereoscopic display for telerobotic operations.
Printed circuit board visual inspection performance: a comparative analysis of mono- and stereovision macroscopic views
Steven F. Wiker, Ken Stewart, Tommey Meyers, et al.
The objective of this study was to compare visual inspection performance on printed circuit boards (PCBs) and flexible circuit boards (flex) when using the monovision and autostereovision modes of Dimension Technologies' Virtual Window™. We measured completion times for visual inspections of PCBs and flex cards, and detection of manufacturing or post-manufacturing defects in each of the products. In the case of printed circuit boards, the stereovision display mode produced faster inspection (approximately 17% improvement in inspection rate) when compared against the monovision mode. No change in inspection performance, or in the number of flaws detected, was observed with flex cards following the introduction of autostereovision. Recommendations are made for improving the effectiveness of the Virtual Window for visual inspection of printed circuit boards and similar products.
Stereoscopic Camera Systems
Time-multiplexed autostereoscopic camera system
Neil A. Dodgson, John R. Moore, Stewart R. Lang
A camera system has been developed to provide live 3D video input for a time-multiplexed autostereoscopic display. The system is capable of taking video input from up to sixteen sources and multiplexing these into a single video output stream with a pixel rate an order of magnitude faster than the individual input streams. Both monochrome and color versions of the system have been built. Testing of the system with eight cameras and a Cambridge autostereo display has produced excellent live autostereoscopic video. The basic operation of the camera system is to digitize multiple input video streams, one for each view direction, and to multiplex these into a single autostereoscopic video stream. A single circuit board (the camera board) digitizes, processes, and buffers the video input from one video source. Several of these are connected via a backplane to another circuit board (the multiplexer board), which contains all the circuitry necessary for generating the output video and synchronization signals and for controlling the rest of the system. Alignment and synchronization were two major challenges in the design of the system. Pixel-level control of camera alignment is provided by an image processing chip on each camera board, while synchronization is provided by a number of carefully designed control mechanisms.
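The multiplexing stage can be pictured as a simple view-sequential interleave. A minimal sketch (the eight-camera figure comes from the abstract; frame sizes and everything else are illustrative, and the real hardware does this in custom logic at video rates):

```python
import numpy as np

def multiplex_views(frames):
    """Interleave N per-view frames into one view-sequential stream.

    frames: list of N (H, W) arrays, one per view direction. The display
    presents the result at N times the per-view field rate, so the
    output pixel rate is ~N times the input rate.
    """
    return np.stack(frames, axis=0)

# Eight synthetic camera fields -> one multiplexed stream.
views = [np.full((480, 640), v, dtype=np.uint8) for v in range(8)]
stream = multiplex_views(views)
print(stream.shape)  # (8, 480, 640)
```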
Development of a compact underwater stereoscopic video camera
Andrew J. Woods, John D. Penrose, Dan Clark
This paper describes the development of a compact underwater stereoscopic video camera. The camera was specifically developed for use on Underwater Remotely Operated Vehicles (ROVs) which are operated extensively in the offshore oil and gas industry. The camera has been used at the oil and gas fields operated by Woodside Offshore Petroleum off the northwest coast of Western Australia. The camera is 11 cm in diameter, 24 cm long and weighs just under four kilograms. The camera housing contains a pair of miniature video cameras and an internal 3D multiplexer which generates the single video output signal (in the field-sequential 3D video format). Since the camera outputs a single video signal (although in 3D), it can be easily interfaced with an underwater ROV's existing video system. The 3D video signal is transmitted to the surface by a single video channel where it is viewed on a stereoscopic display installed in the ship-based control room. The camera provides several improvements over the first underwater stereoscopic video camera developed by the Centre in 1991. One of the notable improvements is the camera's reduced size, which offers a number of operational benefits. The optics of the camera have also been improved by using parallel camera axes to eliminate keystone distortion and depth plane curvature.
Sliding-aperture multiview 3D camera-projector system and its application for 3D image transmission and IR to visible conversion
Serguei A. Shestak, Jung-Young Son, Hyung-Wook Jeon, et al.
A new architecture for a 3-D multiview camera and projector is presented. The camera optical system consists of a single wide-aperture objective, a secondary (small) objective, a field lens, and a scanner. The projector additionally includes a rear-projection pupil-forming screen. The system is intended for sequential acquisition and projection of 2-D perspective images while the small working aperture slides across the opening of the large objective lens or spherical mirror. Both horizontal-only and full-parallax imaging are possible. The system can transmit 3-D images in real time through fiber bundles, free space, and video transmission lines, and can also convert infrared 3-D images to visible ones in real time. With this system, clear multiview stereoscopic images of real scenes can be displayed with a 30-degree viewing-zone angle.
Stereoscopic Image Generation
Conversion system of monocular image sequence to stereo using motion parallax
Yukinori Matsumoto, Hajime Terasaki, Kazuhide Sugimoto, et al.
A three-dimensional reconstruction system -- the 3DR system -- has been developed. The main feature of the 3DR system is that it converts monocular image sequences to stereoscopic ones. This provides the following advantages: (1) stereoscopic images can be reproduced even from films taken in the past; (2) a compact 3D-scene capturing system using a monocular camera is realized. The key 3DR technology is depth sensing based on motion parallax. A novel technique for motion analysis is proposed in which motion vectors are classified by their stability and collinearity, and an iterative operation is performed to obtain an accurate solution. Preliminary evaluations have shown that not only was the motion parallax analyzed very accurately, but stereoscopic images of high quality were also generated.
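The geometric relation underlying depth from motion parallax is that, for a laterally translating camera, depth is inversely proportional to the induced image motion. A minimal sketch assuming pure lateral translation and invented numbers (this illustrates the geometry only, not the 3DR motion-analysis algorithm):

```python
import numpy as np

def depth_from_parallax(flow_px, cam_shift_m, focal_px):
    """Depth from lateral motion parallax: z = f * camera_shift / image_shift.

    flow_px: per-point image motion (pixels/frame);
    cam_shift_m: camera translation between frames (m).
    """
    return focal_px * cam_shift_m / np.abs(flow_px)

flow = np.array([8.0, 4.0, 2.0])  # hypothetical per-point motion vectors (px)
print(depth_from_parallax(flow, cam_shift_m=1.0 / 30, focal_px=600.0))
# [ 2.5  5.  10.] -- faster-moving image points are nearer to the camera
```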
Shape initialization of 3D objects in videoconference scenes
Thomas B. Riegel, Andre Kaup
An ultimate goal for future telecommunication applications is giving the viewer the feeling of being present in the scene, or, for short, 3-D telepresence. One way to achieve a natural 3-D impression is to encode image sequences using 3-D model objects and animate them again by computer graphic means according to the observer's eye positions. A crucial task within such a system is the estimation of 3-D shape parameters based on range information derived from disparity estimation between two or more cameras. This contribution describes a new approach to creating a triangulated irregular net over a given region in a depth map. It attaches more importance to rendering aspects, such as texture mapping, than to spatial accuracy. The shape estimation starts with the polygonalization of the region boundary, which describes a curve in 3-D space. After projecting the polygon into the image plane, a constrained, quality-conforming Delaunay triangulation is executed in the image plane. This forces the triangulation to insert all segments of the projected boundary polygon into the mesh and not to create any triangle exceeding a certain area. Finally, the triangles of the net are subdivided recursively until an approximation criterion depending on the local curvature of the depth map is reached.
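The recursive subdivision stops when each triangle approximates the depth map well enough. A hedged sketch of one plausible such test — comparing the depth map against a planar fit at the triangle centroid; the test and all names are our illustration, not the paper's exact curvature criterion:

```python
import numpy as np

def needs_split(depth, tri, tol):
    """Split a triangle if the depth map deviates from planar interpolation
    at the centroid by more than tol (illustrative stand-in criterion)."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    z0, z1, z2 = depth[y0, x0], depth[y1, x1], depth[y2, x2]
    cx, cy = (x0 + x1 + x2) // 3, (y0 + y1 + y2) // 3
    planar_center = (z0 + z1 + z2) / 3.0   # planar fit at the centroid
    return abs(depth[cy, cx] - planar_center) > tol

# A curved synthetic depth map: the triangle fails the flatness test.
depth = np.fromfunction(lambda y, x: 0.001 * (x - 32) ** 2, (64, 64))
tri = [(2, 2), (60, 4), (30, 60)]
print(needs_split(depth, tri, tol=0.05))   # True -> subdivide further
```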
Parallax engine: a display generation architecture for motion parallax and stereoscopic display effects
Ray M. Broemmelsiek
Motion parallax and binocular disparity are two compelling visual cues which enhance depth perception as well as perceived display real estate. Very low lag between the viewer's movements and display generation is vital. The lack of practical, low-cost, viewer-position-dependent display generation capability has precluded the application of these cues in routine computer graphical user interfaces (GUIs). While modern desktop computer GUIs and their applications demand increasingly more display real estate for effective user multitasking or for simultaneous display of data, conventional display device sizes and resolutions have increased only gradually, leaving manual manipulation of the desktop interface as the only alternative. The continued rapid decline in memory prices and rise in memory performance has produced a new balance: the cost per pixel of the display device significantly outweighs the cost per pixel of the memory employed in conventional display generation subsystems. This balance suggests another alternative. This paper presents the Parallax Engine, a display generation architecture that facilitates these two cues for monoscopic and stereoscopic displays.
Stereoscopic 3D graphics generation
Zhi Li, Jianping Liu, Y. Zan
Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment, and virtual reality, and stereoscopic 3D graphics generation is an important part of any stereoscopic 3D display system. In this paper we first describe the principle of stereoscopic display and summarize several methods of generating stereoscopic 3D graphics. Second, to overcome the problems of user-defined models (inconvenience, long modification cycles, and so on), we put forward a method in which models are defined by vector graphics files. This allows us to design more directly, to modify models simply and easily, to generate graphics more conveniently, and to make full use of graphics accelerator cards. Finally, we discuss how to speed up the generation.
Autostereoscopic Displays
Stereoscopic projection display using curved directional reflection screen
Tetsuya Ohshima, Osamu Komoda, Yoshiyuki Kaneko, et al.
A stereoscopic projection display using a curved directional reflection screen (CDR) is proposed. The CDR is composed of a corner reflection mirror sheet for horizontal focusing and a lenticular lens sheet for vertical diffusion. The vertically curved shape has been introduced to expand the observable range of the screen. The prototype CDR 3D display provides clear stereoscopic images and shows its potential to achieve immersivity.
Retroreflective screens and their application to autostereoscopic displays
Philip V. Harman
An autostereoscopic display is described that enables a viewer, or viewers, to view 3D TV or computer graphics images without the need to wear special glasses or other head-wear. The display is based upon a retroreflective screen which provides the display with a number of unique features. One of these features is the capability to produce a large-screen (greater than 80 in. diagonal) multiviewer autostereoscopic display in which each viewer can see the same or a completely different image. Each image can occupy the entire screen without crosstalk occurring between multiple images. Retroreflective material has the characteristic that light incident on it is returned along exactly the same path as the incident light. This is in contrast to reflections from other surfaces, where in general the angle of incidence is equal to the angle of reflection. Incident light returned from a retroreflective screen can be structured such that scatter and other disturbances are minimized. This results in a very narrow viewing angle over which the incident image can be observed. Since the majority of the light incident on the screen is returned to the originating source, the screen can be considered to have a 'gain' when compared with a similar configuration in which the light is incident on a conventional projection screen.
Hologramlike video images by 45-view stereoscopic display
Yoshihiro Kajiki, Hiroshi Yoshikawa, Toshio Honda
Hologram-like video images have been demonstrated using a stereoscopic display developed by a member of the 3-D project at TAO. Images are made of 45-view stereoscopic images at a resolution of 400 by 400 pixels -- each pixel has 45 horizontal parallaxes -- refreshed at 30 Hz. Conventional stereoscopic video displays cause a physiological problem called accommodation-vergence conflict, even when they provide multi-view images. Holography is the usual approach to avoiding this conflict, but providing holographic video images is very difficult. This paper describes a new stereoscopic approach, called the super multi-view (SMV) region. The SMV condition holds when more than two parallax images pass through the pupil of each of the viewer's eyes. In the SMV region, if the viewer accommodates to the spatial position of the 3-D image perceived by binocular disparity, the parallax images passing through the pupil are focused to the same position on the retina. Accommodation and vergence can therefore agree, owing to the combined effect of monocular and binocular parallax. To satisfy the SMV condition, the display must provide numerous view images at a narrow parallax interval. This is difficult with conventional display techniques, but the FLA concept (proposed at Practical Holography X, EI '96) realizes 45-view images at a 0.5-degree interval. The design of this display is also described in this paper. The SMV technique is a quasi-holography, because the reproduced images provide natural stimuli for monocular and binocular vision.
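Whether the SMV condition is met is simple geometry: adjacent views, separated by the display's angular interval, must land less than a pupil diameter apart at the eye. A worked check using the paper's 0.5-degree interval with an assumed viewing distance and pupil size (both assumptions, not figures from the paper):

```python
import math

def views_per_pupil(view_interval_deg, viewing_distance_m, pupil_mm=5.0):
    """Number of adjacent parallax images falling within one pupil.
    The SMV condition requires this to exceed two."""
    spacing_mm = 1000 * viewing_distance_m * math.tan(math.radians(view_interval_deg))
    return pupil_mm / spacing_mm

# 0.5-degree interval (from the paper) at an assumed 0.3 m distance:
print(f"{views_per_pupil(0.5, 0.3):.2f} views per pupil")
# ~1.91: at this distance roughly two adjacent views enter a 5 mm pupil,
# the borderline of the SMV condition; closer viewing raises the count.
```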
Developments in autostereoscopic displays using holographic optical elements
David J. Trayner, Edwina Orr
The use of holographic optical elements (HOEs) in conjunction with LCDs to make high-performance, economical autostereoscopic displays is a new and promising development. The very first demonstrators showed that the technique can produce very good quality 3D and 2D images. Our current work is directed at improving performance and exploring other opportunities opened up by the basic concept. These include improved optical design of the HOE; the possibility of using LEDs as light sources; and the possibility of using the HOE to generate 3D while at the same time providing color separation, allowing the display of full-color images on a monochrome LCD without conventional color filters. We are also investigating the use of this technology to display two or more images on one LCD such that two viewers each see a different image on the same display.
3D image technique with a grating plate on high-resolution CRT
Tetsuya Shiroishi, Takafumi Nakagawa, Shuhei Nakata, et al.
We propose a 3D image technique using a CRT with a grating plate. The main components of the system are a CRT, a lens, and a diffraction plate. We form a composite image on the CRT, compounded from several images taken from different viewpoints. This image is focused on the diffraction plate through the lens. The diffraction plate carries many grating pixels that correspond to the pixels of the composite image, and each constituent image is diffracted to a different viewing area by the gratings. Because one image can be seen from each viewing area independently, a 3D image can be observed. We built a test system. The composite image on the CRT screen is 8 inches in size and is compounded from four images; the diffraction grating plate is 3 inches. Each viewing area is 6 cm wide by 6 cm high, and the total viewing area is 24 cm by 6 cm. The system divides the composite image into four images, and we can observe images that have perspective.
Characterization and optimization of 3D-LCD module design
Cees van Berkel, John A. Clarke
Autostereoscopic displays with flat panel liquid crystal display and lenticular sheets are receiving much attention. Multiview 3D-LCD is truly autostereoscopic because no head tracking is necessary and the technology is well poised to become a mass market consumer 3D display medium as the price of liquid crystal displays continues to drop. Making the viewing experience as natural as possible is of prime importance. The main challenges are to reduce the picket fence effect of the black mask and to try to get away with as few perspective views as possible. Our solution is to 'blur' the boundaries between the views. This hides the black mask image by spreading it out and softens the transition between one view and the next, encouraging the user to perceive 'solid objects' instead of a succession of flipping views. One way to achieve this is by introducing a new pixel design in which the pixels are slanted with respect to the column direction. Another way is to place the lenticular at a small (9.46 degree) angle with respect to the LCD columns. The effect of either method is that, as the observer moves sideways in front of the display, he always 'sees' a constant amount of black mask. This renders the black mask, in effect, invisible and eliminates the picket fence effect.
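The 9.46-degree figure quoted above matches a slant of arctan(1/6), i.e. the lenticular crossing one sub-pixel column for every six pixel rows, which spreads the black mask evenly across viewing angles. Reading the angle this way is our inference from the quoted value:

```python
import math

# arctan(1/6): one sub-pixel column of horizontal offset per six rows.
slant = math.degrees(math.atan(1 / 6))
print(f"arctan(1/6) = {slant:.2f} degrees")  # 9.46
```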
Observer-tracking autostereoscopic 3D display systems
Graham J. Woodgate, David Ezra, Jonathan Harrold, et al.
This paper presents an examination of the requirements for observer tracking autostereoscopic 3D display systems. The optical requirements for the imaging of autostereoscopic viewing windows in order to maintain high image quality over a large range of observer positions are described. A number of novel displays based on LCD (liquid crystal display) technology have been developed and demonstrated at Sharp Laboratories of Europe Ltd (SLE). This includes an electronically switchable illuminator for the macro-optic twin-LCD display; and a compact micro-optic twin-LCD display which maintains image quality while extending display size and viewing freedom. Work has also been in progress with flat panel displays to improve window quality using a new arrangement of LCD pixels. This has led to a new means to track such a display with no moving parts.
Research of 3D display using anamorphic optics
Kenji Matsumoto, Toshio Honda
This paper describes an autostereoscopic display that reconstructs a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and providing horizontal motion parallax. Because the resolution of the display device is limited, it is difficult to increase the number of parallaxes enough to give the 3-D image motion parallax without reducing resolution. With anamorphic optics placed between the display device and the 3-D image, the magnification and the image-formation position can be selected independently in the horizontal and vertical directions. Anamorphic optics have different magnifications in the horizontal and vertical directions and consist of a combination of cylindrical lenses with different focal lengths. Using these optics, even a dynamic display such as a liquid crystal display (LCD) can present a realistic 3-D image having motion parallax. Motion parallax is obtained by making the width of a single parallax at the viewing position about the same size as the viewer's pupil diameter. In addition, because the focal depth of the 3-D image is large in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.
Autostereoscopic video display with motion parallax
Stephen P. Hines
Described is the HinesLab '3DTV,' a 3-dimensional video display which provides true stereo 3-D images without glasses. Multiple viewers can move in front of the display, seeing true stereo images with motion parallax. Applications include 3-D video arcade games, avionics, engineering workstations, scientific visualization, video phones, and 3-D television. The display is built around a single liquid crystal panel, from which multiple images are projected to a screen where they form the 3-D image. The relationships of objects in three-dimensional space are confirmed as the viewer moves through the viewing positions. The HinesLab autostereoscopic technology is transparent to the user, and the 3DTV display can be produced economically because it uses a single display panel and conventional optics. The primary advantage of this technique is its simplicity: CGI images are supplied to the monitor with a single video board; three-dimensional television can be broadcast by a single unmodified television station (NTSC, PAL, SECAM, HDTV, etc.) and recorded and replayed in 3-D with a VCR. From 4 to 21 eye positions can be created, with a range of resolutions and viewing angles, limited only by currently available liquid-crystal display technology.
Stereoscopic Image Formats and Compression Methods
Compression of full-parallax integral 3D-TV image data
Matthew C. Forman, Amar Aggoun
An integral imaging system is employed as part of a three dimensional imaging system, allowing display of full color images with continuous parallax within a wide viewing zone. A novel approach to the problem of compressing the significant quantity of data required to represent integral 3D video is presented and it is shown that the reduction in bit cost achieved makes possible transmission via conventional broadcast channels.
Compression and interpolation of 3D stereoscopic and multiview video
Mel Siegel, Sriram Sethuraman, Jeffrey S. McVeigh, et al.
Compression and interpolation each require, given part of an image, or part of a collection or stream of images, being able to predict other parts. Compression is achieved by transmitting part of the imagery along with instructions for predicting the rest of it; of course, the instructions are usually much shorter than the unsent data. Interpolation is just a matter of predicting part of the way between two extreme images; however, whereas in compression the original image is known at the encoder, and thus the residual can be calculated, compressed, and transmitted, in interpolation the actual intermediate image is not known, so it is not possible to improve the final image quality by adding back the residual image. Practical 3D-video compression methods typically use a system with four modules: (1) coding one of the streams (the main stream) using a conventional method (e.g., MPEG), (2) calculating the disparity map(s) between corresponding points in the main stream and the auxiliary stream(s), (3) coding the disparity maps, and (4) coding the residuals. It is natural and usually advantageous to integrate motion compensation with the disparity calculation and coding. The efficient coding and transmission of the residuals is usually the only practical way to handle occlusions, and the ultimate performance of beginning-to-end systems is usually dominated by the cost of this coding. In this paper we summarize the background principles, explain the innovative features of our implementation steps, and provide quantitative measures of component and system performance.
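The four-module structure described above can be summarized as a coder skeleton. A minimal sketch with placeholder function bodies and toy one-dimensional "frames" (none of this is the authors' implementation; it only mirrors the module decomposition):

```python
def code_main_stream(frames):
    """(1) Code the main stream with a conventional method (e.g. MPEG)."""
    return frames                                   # placeholder

def estimate_disparity(main, aux):
    """(2) Disparity between corresponding points of the two views."""
    return [a - m for m, a in zip(main, aux)]       # toy 1-D 'disparity'

def code_disparity(maps):
    """(3) Compress the disparity maps."""
    return maps                                     # placeholder

def code_residuals(aux, predicted):
    """(4) Residuals: the practical way to handle occlusions, and often
    the dominant cost of the whole system."""
    return [a - p for a, p in zip(aux, predicted)]

main = [1.0, 2.0, 3.0]
aux = [1.1, 2.2, 3.1]
disp = estimate_disparity(main, aux)
predicted = [m + d for m, d in zip(main, disp)]
bitstream = (code_main_stream(main), code_disparity(disp),
             code_residuals(aux, predicted))
print(bitstream)  # residuals are zero for this toy pair (up to float noise)
```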
Stereo-vision formats for video and computer graphics
There are several formats for time-shared stereoplexed electronic displays. A stereo-vision format is the technique used for assigning pixels (or lines, or fields) to the left and right images, enabling them to be presented at the display screen as an image with true binocular stereopsis. These days most graphics workstations intrinsically output a high field rate and do not require the above-and-below solution once used for workstations and now more commonly used on PCs. Another approach uses spatial multiplexing of rows or columns, either for individual selection devices or for autostereoscopic displays. A new format, the white-line-code (WLC) system, was developed for PCs and offers low cost with high resolution. This format does not care whether the left and right fields are in interlace or progressive-scan modes, and it does not care about field rate.
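Two of the formats mentioned — above-and-below packing and row-wise spatial multiplexing — reduce to simple pixel-assignment rules. A hedged sketch with invented frame sizes (the WLC flagging scheme itself is not reproduced here):

```python
import numpy as np

def above_and_below(left, right):
    """Pack a stereo pair into one tall frame, left above right; sync
    doubling at the display then presents the halves field-sequentially."""
    return np.vstack([left, right])

def row_interleave(left, right):
    """Spatial multiplexing of rows: even rows left, odd rows right,
    as used by some line-based selection devices."""
    out = np.empty((left.shape[0] * 2, left.shape[1]), dtype=left.dtype)
    out[0::2] = left
    out[1::2] = right
    return out

L = np.zeros((240, 320), np.uint8)
R = np.full((240, 320), 255, np.uint8)
print(above_and_below(L, R).shape, row_interleave(L, R).shape)
# (480, 320) (480, 320)
```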
New Developments in Stereoscopic Displays
Full-color 3D prints and transparencies
Julius J. Scarpetti, Philip M. DuBois, Richard M. Friedhoff, et al.
We describe the preparation of full-color stereoscopic hardcopy from digital 3-D records. Digital image records may be produced directly in digital cameras, CAD systems and by various instrumental outputs, or they may be acquired by scanning and digitizing photographic images. To produce the stereoscopic image we render the component images in terms of degree of polarization, orienting the polarization axes of the left- and right-eye images at 90 degrees to one another. We print the two images on opposite surfaces of specially prepared substrates, using dichroic inks in otherwise standard desktop inkjet printers. Software provides accurate stereoscopic registration of the paired images. The process produces 3-D images as reflection prints or as transparencies. Observers view the superimposed image pair through standard 3-D polarizing glasses. Alternatively, autostereoscopic display apparatus permits viewing without glasses. The color gamut of a typical image is shown. Image resolution depends primarily on the printer used. Applications include molecular modeling, microscopy, data visualization, entertainment, and pictorial photography.
New color anaglyph method
Tomohiko Hattori, Eiji Arita, Toshihisa Nakamura, et al.
Anaglyph methods generally use two principal color filters and therefore cannot deliver a full-color stereo pair to the viewer. A new anaglyph method using three principal color filters (RGB) is presented in this paper. The method enables a complete full-color stereoscopic image capture and output technique. We produced a prototype system composed of an ordinary TV camera with RGB color filters positioned at the pupil or iris, serving as a single-lens stereoscopic image capture device, together with a special electrical circuit for the stereoscopic image output device. A time-parallel full-color stereo pair was delivered to several viewers by the prototype system using our ordinary stereoscopic liquid crystal display (STEREVIQ), which permits the observation of a stereo pair by several persons simultaneously without the use of special glasses. The system's cost performance is excellent, apart from the STEREVIQ itself.
Focus-distance-controlled 3D TV
Nobuaki Yanagisawa, Kyung-tae Kim, Jung-Young Son, et al.
When a scene is viewed through a convex lens, the image appears at a depth proportional to the focal distance. An adjustable-focus lens which can control the focal distance of the convex lens is contrived and applied to 3D TV, so that 3D TV can be watched without eyeglasses. The 3D TV image meets the NTSC standard, and parallax data and focus data about the image can be accommodated at the same time. A continuous-image method realizes much wider views and avoids the anti-3D-image effect. At present, analysis of a prototype lens and experiments are being carried out; as a result, the phantom effect and the viewing area can be improved, and it is possible to watch the 3D TV at any distance. Distance data are triangulated by two cameras. A plan for an AVI prototype using ten thousand lenses is discussed, and the method is compared with four major conventional methods. The comparison reveals that this method makes efficient use of integral photography and the varifocal method: integral photography allows miniaturization of the system but has difficulty achieving actual focus, while the varifocal method has no problem with focusing but cannot be miniaturized. The approach investigated in this paper makes it possible to solve both problems.
Emitting diagram control method for solid-object 3D display
Jung-Young Son, Serguei A. Shestak, Yong-Jin Choi, et al.
We present a new method for solid-object 3-D imaging which can control the emitting diagram of each image point and can be applied in volumetric displays. The method is based on the fact that the visibility of each point of a 3-D object or scene can be described in terms of the point's emitting diagram. It was found that the amount of information in the emitting diagram generally has the same order of magnitude as that of a sampled hologram. Emitting-diagram control can be applied to many different types of volumetric display using means such as spatial filters. Unlike electroholography, the method handles the 3-D positioning of image points and the forming of the emitting diagram separately, so in special cases it allows a significant reduction in the amount of information. Experimentally, we obtained a color stereoscopic image with a number of resolvable perspective views distributed within 30 angular degrees, using only two 2-D images and a moving shield as the diagram former.
Applications of Stereoscopic Displays
Lightweight, compact 2D/3D autostereoscopic LCD backlight for games, monitor, and notebook applications
DTI has demonstrated a backlight for LCDs that creates autostereoscopic 3D with no sacrifice in the quality of conventional 2D images. The new backlight is comparable in size and cost to standard 2D backlights and can be manufactured using similar processes. Prototypes have been used in amusement games, desktop LCD monitors, and notebook PCs. An acrylic light guide accepts light from a miniature fluorescent lamp. Linear structures on one side of the guide reflect light traveling through it toward the LCD. A lenticular lens images the light from the structures into a much larger number of thin light lines within the LCD glass. These lines are spaced in a precise relationship with the pixels of the LCD so that an observer sitting in front of the display sees all the light lines through the odd columns of pixels with the left eye and through the even columns of pixels with the right eye. An electronic shutter is placed between the light guide and the lenticular lens. In the off state the shutter diffuses the light from the light lines for 2D-mode viewing; in the clear state (power on), the light lines are visible for 3D-mode viewing.
Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications
Eugene Dolgoff
Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.
Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems
David S. Mark, Corby Waste
The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote spaceborne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the sun, so accurately as to reveal and identify in great detail the heightened turbulence of the sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large-scale objects as stars and planets within solar systems, a set of geometrical parameters must be observed, and these are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent, though, of new earth-based and spaceborne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems. Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin-perspective, 3-dimensional stereo views. Stereo imaging of this nature, therefore, occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.
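The astrometric stereo-base idea is easy to quantify: the stereo angle is simply baseline over distance. A worked example with assumed numbers (a 2 AU semi-yearly baseline and a star at roughly the distance of the nearest stellar neighbors, ~4.2 light-years):

```python
import math

AU_KM = 1.495979e8     # astronomical unit in km
LY_KM = 9.4607e12      # light-year in km

def stereo_angle_deg(baseline_au, distance_ly):
    """Small-angle stereo convergence for two viewpoints baseline_au apart
    observing a target distance_ly away."""
    return math.degrees(baseline_au * AU_KM / (distance_ly * LY_KM))

angle = stereo_angle_deg(2.0, 4.2)   # Earth's orbit as a semi-yearly base
print(f"{angle:.2e} degrees (~{angle * 3600:.2f} arcsec)")
# ~1.55 arcsec: why unaided orbital baselines give only marginal stereo.
```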
3D moviemap and a 3D panorama
Michael Naimark
Two immersive virtual environments produced as art installations investigate 'sense of place' in different but complementary ways. One is a stereoscopic moviemap, the other a stereoscopic panorama. Moviemaps are interactive systems which allow 'travel' along pre-recorded routes with some control over speed and direction. Panoramas are 360-degree visual representations dating back to the late 18th century which have recently experienced renewed interest due to 'virtual reality' systems. Moviemaps allow 'moving around' while panoramas allow 'looking around,' but to date there has been little or no attempt to produce either in stereo from camera-based material. 'See Banff!' is a stereoscopic moviemap about landscape, tourism, and growth in the Canadian Rocky Mountains. It was filmed with twin 16 mm cameras and displayed as a single-user experience housed in a cabinet resembling a century-old kinetoscope, with a crank on the side for 'moving through' the material. 'Be Now Here (Welcome to the Neighborhood)' (1995-6) is a stereoscopic panorama filmed in public gathering places around the world, chosen from the UNESCO World Heritage 'In Danger' list. It was filmed with twin 35 mm motion picture cameras on a rotating tripod and displayed using a synchronized rotating floor.
Poster Session
Task-dependent use of binocular disparity and motion parallax information within telepresence and quasi-natural environments
Andrew D. Parton, Mark F. Bradshaw, John R. G. Pretlove, et al.
The effect of different depth cues presented through a head-mounted display (HMD) in a dark (no pictorial cue) environment was investigated. In four experiments the relative effects of binocular disparity, motion parallax, and a combination of the two were assessed for four tasks at two viewing distances. These tasks (which varied in the minimum amount of information they required) were a nulling task (based on the Howard-Dolman stereo test), setting a triangle to be equilateral, matching two triangles at different depths, and estimating absolute distance. Performance varied considerably across the tasks, with the nulling task performed best. Performance in the other tasks differed across viewing conditions, which may be due to a failure in the assessment of absolute viewing distance, although results from the final task indicate that observers can use this information under certain circumstances. It is argued that these results are task-specific and may reflect limitations in the viewing equipment. Although there was some variation between different cue types, they appear to be largely interchangeable within the tasks. This questions whether there is always a need to present both disparity and motion cues in telepresence systems.
Effects of image resolution on depth perception in stereo and nonstereo images
Kai-Mikael Jaeae-Aro, Lars Kjelldahl
Head-mounted displays (HMDs) in use today have fairly limited resolution, but the extent to which this low resolution may be detrimental to various tasks is not well known. We have studied the effect of low resolution on distance perception by letting experimental subjects estimate distances to objects in computer-generated 3D scenes. The images were presented at varying resolutions, both binocularly and biocularly on a workstation screen viewed through a Cyberscope stereoscopic device, and biocularly in a flight-helmet HMD. Our results indicate: (1) For good distance judgments, anti-aliasing is more important than stereo. In fact, stereo may even be detrimental in low-resolution images. (2) Anti-aliasing works well as a resolution enhancer. (3) The fuzziness of the LCD HMD screens gives an anti-aliasing effect, partially offsetting the low resolution. (4) Subjects utilize different estimation strategies depending on the resolution of the images to minimize their estimation errors. (5) Estimation errors vary depending on the target object shape -- there is a tendency toward better estimates for objects whose sides line up with the coordinate axes. (6) The differences between subjects are considerably larger than those within subjects.
3D stereo 360-deg panoptic
A new combination of processes produces panoramic photographic images covering 360 degrees horizontally and 110 degrees vertically, on the same film, in one single take, with a special prototype camera; a special existing viewer then allows the complete stereoscopic relief to be reproduced over the full 360 degrees.
Ray space representation for 3D image processing
Toshiaki Fujii, Tadahiko Kimoto, Masayuki Tanimoto
This paper presents a novel 3-D image coding scheme based on the 'Ray Space' representation of 3-D spatial information. First, we give the definition of Ray Space and show that the Ray Space representation can serve as a common data format for integrated 3-D visual communication. Then, we introduce a vector field in the Ray Space and propose a novel compression scheme in which the Ray Space data are compressed into vector data at the divergent points. Finally, experimental results on the reconstruction of Ray Space data are presented.
Creation and Evaluation of Virtual Environments
MARTI: man-machine animation real-time interface
Christian Martyn Jones, Satnam Singh Dlay
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, including speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special-effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multimedia systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special-effect animation which has been previously unseen.
Realistic image generation using model-driven processing in an interactive system
Toshifumi Miyagi, Atsushi Hori, Hideo Sugama, et al.
For efficient generation of realistic images, four kinds of generic models (data-, object-, role- and process-models) are introduced in a system based on the extensible WELL (window-based elaboration language), which has been reported previously. These models are constructed so that they have hierarchical interfaces from data to processes. In order to satisfy multiple intentions interacting with each other, the concept of roles is introduced. Each role is recognized as a set of object-networks, and the respective user's intentions are expressed as a set of executions of many roles. Object-networks, which consist of noun-objects and verb-objects, express transitions of states. In each occurrence of a transition, the user's intention is issued in an event-driven manner, and it gives rise to concurrent processes. Multiple roles are made interactive with each other by using a common platform which consists of windows. Each role is specified as a structure of the object-network, which is defined by a graph structure. Every object has templates which define its data structure. Data of objects are specified by constraints relating attributes of objects or by reference to the user's data-driven actions. Through the concepts of constraints, models, and roles, realistic images are obtained with the least data. Some examples for human movements are demonstrated.
ROSE: the road simulation environment
Panos Liatsis, Panagiotis Mitronikas
Evaluation of advanced sensing systems for autonomous vehicle navigation (AVN) is currently carried out off-line with prerecorded image sequences taken by physically attaching the sensors to the ego-vehicle. The data collection process is cumbersome and costly, as well as highly restricted to specific road environments and weather conditions. This work proposes the use of scientific animation for modeling and representing real-world traffic scenes, and aims to produce an efficient, reliable and cost-effective concept-evaluation suite for AVN sensing algorithms. ROSE is organized in a modular fashion, consisting of the route generator, the journey generator, the sequence description generator and the renderer. The application was developed in MATLAB, and POV-Ray was selected as the rendering module. User-friendly graphical user interfaces have been designed to allow easy selection of animation parameters and monitoring of the generation process. The system, in its current form, allows the generation of various traffic scenarios, providing for an adequate number of static/dynamic objects, road types and environmental conditions. Initial tests on the robustness of various image processing algorithms to varying lighting and weather conditions have already been carried out.
Body sway induced by 3D images
Miho Hoshino, Minoru Takahashi, Kenji Oyamada, et al.
We used body sway to evaluate a viewer's sense of presence with three kinds of 3D displays: a head-mounted display (HMD), a 70-inch 3D display, and a consumer 3D television. The experiment used images with a fixed foreground and a rolling background as the visual stimuli to induce body sway. The images were taken from a boat rolling at five different frequencies (approx. 0.125, 0.20, 0.25, 0.33, 0.50 Hz). We examined eight healthy adults viewing each of the five images for three minutes on each display. We evaluated body sway using a motion-analysis system to measure the displacement of a marker placed on the head of each subject. It was found that at all rolling frequencies of the image background, the HMD induced the greatest amount of body sway, followed by the large 3D display and then the consumer 3D television. The amount of body sway was greatest when the rolling frequency was 0.33 Hz. The results showed that the amount of body sway depended on the type of display and the rolling frequency.
Evaluating an immersive virtual environment prototyping and simulation system
Kenneth Nemire
An immersive virtual environment (IVE) modeling and simulation tool is being developed for designing advanced weapon and training systems. One unique feature of the tool is that the design itself, and not just visualization of the design, is accomplished with the IVE tool. Acceptance of IVE tools requires comparisons with current commercial applications. In this pilot study, expert users of a popular desktop 3D graphics application performed identical modeling and simulation tasks using both the desktop and IVE applications. The IVE tool consisted of a head-mounted display, 3D spatialized sound, spatial trackers on head and hands, instrumented gloves, and a simulated speech recognition system. The results are preliminary because performance from only four users has been examined. When using the IVE system, users completed the tasks to criteria in less time than when using the desktop application. Subjective ratings of the visual displays in each system were similar. Ratings for the desktop controls were higher than for the IVE controls. Ratings of immersion and user enjoyment were higher for the IVE than for the desktop application. These results are particularly remarkable because participants had used the desktop application regularly for three to five years and the prototype IVE tool for only three to six hours.
Immersive Displays
Compact and wide-field-of-view head-mounted display
Shoichi Uchiyama, Hiroshi Kamakura, Joji Karasawa, et al.
A compact, wide-field-of-view HMD having 1.32-in. full-color VGA poly-Si TFT LCDs and simple eyepieces much like LEEP optics has been developed. The total field of view is 80 deg with a 40 deg overlap in its central area. Each optical unit, which includes an LCD and eyepiece, is 46 mm in diameter and 42 mm in length. The total number of pixels is equivalent to (864 × 3) × 480. This HMD realizes its wide field of view and compact size by having a narrower binocular (overlap) area than that of commercial HMDs. For this reason, monocular vision is expected to occur more frequently than with commercial HMDs or natural human vision. We therefore measured the convergence state of the eyes while observers viewed the monocular areas of this HMD, using an EOG, and considered the suitability of this HMD to human vision. It was found that the convergence state during monocular vision was nearly equal to that during binocular vision; that is, this HMD has the possibility of being well suited to human vision in terms of convergence.
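The partial-overlap geometry implies a simple relation between total field, overlap, and per-eye field. Illustrative arithmetic (the per-eye figure is derived from the quoted numbers, not stated in the abstract):

```python
# total = 2 * per_eye - overlap, assuming symmetric per-eye fields.
total_fov = 80.0   # degrees, from the abstract
overlap = 40.0     # degrees, from the abstract
per_eye = (total_fov + overlap) / 2        # 60 degrees per eye
monocular_each_side = total_fov - per_eye  # 20 degrees seen by one eye only
print(per_eye, monocular_each_side)        # 60.0 20.0
```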
Virtual model displays
Mark T. Bolas, Steve T. Bryson, Ian E. McDowall
As the field of immersive display systems matures, the tools that are being created become more specialized and specific to the application at hand. This process is leading to a rich set of diverse approaches that appear to be grouped into three categories: head mounted displays, spatially immersive displays, and virtual model displays. This paper briefly introduces and describes these classifications and then highlights virtual model displays with recent observations from a variety of users and applications.
New generation of 3D desktop computer interfaces
Robert Skerjanc, Siegmund Pastoor
Today's computer interfaces use 2-D displays showing windows, icons, and menus, and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology offer the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension in which to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs, and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual, object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. To enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since the 3-D equipment typically used in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and associated video-based interaction techniques that allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).
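The abstract does not detail how gaze picking is implemented; one standard way to realize it is ray casting from the tracked eye along the gaze direction. The sketch below is our illustrative stand-in (names and the bounding-sphere simplification are ours, not details from the paper):

```python
# Illustrative gaze-based picking: cast a ray from the eye position along the
# gaze direction and select the nearest object whose bounding sphere it hits.
import numpy as np

def pick(eye: np.ndarray, gaze_dir: np.ndarray,
         centers: np.ndarray, radii: np.ndarray):
    """Return the index of the nearest hit object, or None."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    rel = centers - eye                      # vectors from eye to each center
    along = rel @ d                          # distance along the ray
    perp2 = (rel ** 2).sum(1) - along ** 2   # squared distance off the ray
    hit = (perp2 <= radii ** 2) & (along > 0)
    if not hit.any():
        return None
    return int(np.where(hit)[0][np.argmin(along[hit])])

eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 1.0])
centers = np.array([[0.1, 0.0, 2.0], [0.0, 0.0, 5.0]])
radii = np.array([0.2, 0.5])
print(pick(eye, gaze, centers, radii))   # 0: the nearer sphere on the ray
```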
Controlling graphic objects naturally: use your head
Roger A. Browse, James C. Rodger, Ian Sewell, et al.
During normal viewing of an object, a human observer will typically make small movements of the head, resulting in small parallax-related image changes. The significance of these changes is apparent when viewing a static stereographic display. Since the observer expects modifications in viewing direction to accompany side-to-side head movements, the lack of such changes when viewing stereographic displays creates the striking illusion that the static display is rotating in a compensatory direction. Using head tracking, we generate the appropriate pairs of images on a stereographic display device in order to maintain a stable virtual stereo object for the viewer. Unnatural but learnable mappings from input devices such as a mouse or a joystick are typically used to change the viewing direction and viewing distance in graphic displays. As an alternative to these techniques, we have extended the use of the monitored head position, resulting in a display system that permits control of graphic objects with subtle head movements. The device permits a zone of small head movements within which there is no rotation or scaling of the virtual object, only parallax-related image changes as projected to each eye. A slightly exaggerated head movement initiates rotation and/or scaling of the scene that terminates when the head returns to a central viewing position. We are carrying out experiments to test the performance of human subjects in tasks that require head movements to control the rotation of graphic objects. A preliminary study examining rotation around a single axis suggests that this may be a very effective and natural technique.
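A minimal sketch of the control law this abstract describes: head offsets inside a neutral zone produce only natural parallax (no commanded rotation), while an exaggerated offset beyond the zone rotates the object until the head returns to center. The thresholds and gains here are illustrative guesses, not values from the paper.

```python
DEAD_ZONE_MM = 30.0     # head offsets below this only update the parallax
GAIN_DEG_PER_MM = 0.5   # rotation rate per mm of excess offset

def rotation_rate(head_offset_mm: float) -> float:
    """Commanded rotation rate (deg/s) for a lateral head offset."""
    excess = abs(head_offset_mm) - DEAD_ZONE_MM
    if excess <= 0.0:
        return 0.0                       # inside the zone: parallax only
    sign = 1.0 if head_offset_mm > 0 else -1.0
    return sign * GAIN_DEG_PER_MM * excess

for offset in (10.0, 30.0, 50.0, -80.0):
    print(offset, rotation_rate(offset))
```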
Let's move: on the integration of motion rendering in VR
Udo Jakob, Efi Douloumi
Motion rendering gives the user haptic feedback of the accelerations produced by changing position in the virtual world. Motion simulation systems, for decades used mainly for flight and driving simulation, are an expanding tool for supporting this task. Due to their increasing use in theme parks, the systems have become steadily less expensive, smaller, quieter, and easier to maintain. Nowadays the systems have features that make it possible to use them in typical virtual reality (VR) environments. This paper shows how VR can be completed by motion rendering. To that end, the paper gives an overview of the basics of motion rendering, the different types of motion simulation systems, and various ways to integrate a motion simulation system into a VR environment. Furthermore, a prototype motion-base-supported flight simulator, used for the therapy of patients with fear of flying, is described.
Augmented Reality/Medical Applications
Video engraving for virtual environments
Geb Thomas, Theodore T. Blackmon, Michael Sims, et al.
Some applications require a user to consider both geometric and image information. Consider, for example, an interface that presents both a three-dimensional model of an object, built from a CAD model or laser-range data, and an image of the same object, gathered from a surveillance camera or a carefully calibrated photograph. The easiest way to provide these information sets to a user is in separate, side-by-side displays. A more effective alternative combines both types of information in a single, integrated display by projecting the image onto the model. A perspective transformation that assigns image coordinates to model vertices can visually engrave the image onto the corresponding surfaces of the model. Combining the image and geometric information in this manner provides several advantages: it allows an operator to visually confirm the accuracy of the modeling geometry, and it provides realistic textures for the geometric model. We describe several of our procedural methods for implementing the integrated displays and discuss the benefits gained from applying these techniques to projects including robotic hazardous-waste remediation, the virtual exploration of Mars, and remote mobile robot control.
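The core step named above, a perspective transformation that assigns image coordinates to model vertices, amounts to projective texture mapping. The sketch below is our own minimal illustration: a 3x4 camera matrix maps each vertex to image (u, v) coordinates, which then serve as texture coordinates. The matrix values are stand-ins; a real system would use the calibrated camera.

```python
import numpy as np

def engrave_uvs(P: np.ndarray, vertices: np.ndarray) -> np.ndarray:
    """P: 3x4 projection matrix; vertices: Nx3 points. Returns Nx2 (u, v)."""
    homo = np.hstack([vertices, np.ones((vertices.shape[0], 1))])  # Nx4
    uvw = homo @ P.T                      # Nx3 homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

# Toy pinhole camera at the origin looking along +z (illustrative values):
f, cx, cy = 500.0, 320.0, 240.0
P = np.array([[f, 0, cx, 0],
              [0, f, cy, 0],
              [0, 0,  1, 0]])
verts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
print(engrave_uvs(P, verts))   # the on-axis vertex lands at the principal point
```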
Augmented reality using range images
Christian L. Schutz, Heinz Huegli
This paper proposes range imaging as a means to improve object registration in an augmented reality environment. The problem addressed deals with virtual world construction from complex scenes using object models. During reconstruction, the scene view is augmented by superimposing virtual object representations from a model database. The main difficulty lies in the precise registration of a virtual object with its counterpart in the real scene. The presented approach solves this problem by matching geometric shapes obtained from range imaging. This geometric matching snaps the roughly placed object model onto its real-world counterpart and permits the user to update the virtual world with the recognized model. We present a virtual world construction system, currently under development, that allows the registration of objects present in a scene through a combination of user interaction and automatic geometric matching based on range images. Potential applications are teleoperation of complex assembly tasks and world construction for mobile robotics.
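The abstract does not name its matching algorithm; an ICP-style iteration is one standard way such a "snap" is computed, shown below as a hedged sketch rather than a description of the authors' implementation. Each step pairs model points with their nearest range points and solves for the best-fit rigid transform (Kabsch/SVD).

```python
import numpy as np

def snap_step(model: np.ndarray, scene: np.ndarray):
    """One ICP-style alignment step. model: Nx3, scene: Mx3 point sets.
    Returns (R, t) such that model @ R.T + t moves toward the scene."""
    # Nearest-neighbor correspondences by brute force (fine for a sketch).
    d2 = ((model[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
    matched = scene[d2.argmin(axis=1)]
    # Best-fit rigid transform between the paired sets (Kabsch).
    mc, sc = model.mean(0), matched.mean(0)
    H = (model - mc).T @ (matched - sc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # D guards against a reflection solution
    t = sc - R @ mc
    return R, t
```

Iterating this step until the correspondences stop changing yields the snap from the roughly placed model onto the range data.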
Haptic display for the VR arthroscopy training simulator
Rolf Ziegler, Christoph Brandt, Christian Kunstmann, et al.
A specific desire for new training methods arose from the new field called 'minimally invasive surgery.' With technical advances, modern video arthroscopy became the standard procedure in operating rooms. The surgeon holds the optical system with the video camera in one hand and watches the operating field on the monitor, leaving the other hand free to guide, e.g., a probe. As arthroscopy became a more common procedure, it became obvious that some sort of special training was necessary to guarantee a certain level of qualification among surgeons. Therefore, a hospital in Frankfurt, Germany approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on VR techniques. The main drawback of the simulator developed so far is the lack of haptic perception, especially force feedback. In cooperation with the Department of Electro-Mechanical Construction at Darmstadt Technical University, we have designed and built a haptic display for the VR arthroscopy training simulator. In parallel, we have developed a concept for integrating the haptic display in a configurable way.
VERS: a virtual environment for reconstructive surgery planning
Kevin N. Montgomery
The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery, whether due to developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove, and Ascension Technologies tracker, is in development and has already been used to visualize defects preoperatively. In the near future it will be used to plan surgeries more fully and to compute the projected result on soft-tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, networked virtual environment.
Viewpoints on VR
Failings and future of VR
William R. Cockayne, Rudolph P. Darken
The field of VR has come to another crossroads in its development. The field has spent hundreds of man-years and untold millions of dollars developing technology without an explicit guiding vision. The search for the next big thing will not end with a focus on technology. The field of VR must begin to focus on the user to define and refine its technology.
Circulating images of virtual systems: trodes, gloves, and goggles in the eighties and nineties
Mizuko Ito, Scott S. Fisher
Since the late 80s, the popular imagination surrounding virtual systems has been lively and contested, an intriguing brew of cyberpunk fiction, government and corporate research, and product development, with a dash of countercultural excess. Virtual systems, in their myriad forms, have captured the interest not only of scientists and engineers, but also of a broad spectrum of social actors, including the popular and alternative press, fiction and comic writers, visual artists, film and television producers, as well as large sectors of a curious public, all of whom have produced diverse and creative images of these systems for a range of different audiences. The circulation of images of virtual systems points to some of the ways in which the production of technology can be located not only in engineering labs but also in various realms of mass media and public culture. Focusing on images of gloves and goggles, this paper describes some of the pathways through which images of virtual systems have traveled.
Poster Session
Spatial-light-modulator-based three-dimensional multiplanar display
Mark A. A. Neil, Edward G. S. Paige, Leon O. D. Sucharov
A three-dimensional multiplanar display system is described and demonstrated that utilizes a silicon-backplane ferroelectric liquid crystal spatial light modulator (FLCSLM) as an active optical element. The FLCSLM is used to generate a sequence of binary, phase-only Fresnel zone plates of various focal lengths, which image a synchronized object screen into a series of depth planes forming a volume display. Binary zone plates used as lenses have some serious disadvantages, notably unwanted foci; these problems are examined and solutions are put forward. A similar FLCSLM acting as an amplitude modulator is used for the object transparency. Gray levels can be achieved through temporal dithering, and color by time-multiplexing sources of different wavelengths. The high refresh rate of silicon-backplane FLCSLMs allows a large number of depth planes to be addressed at video frame rates. The performance of the system makes it particularly suitable for use in a head-mounted display. Head-mounted displays commonly used in virtual reality produce depth-cue conflicts for the user. These conflicts, particularly the lack of correct accommodation (focusing) cues, have given rise to serious health concerns. The lack of focus cues can be addressed through the use of the display system demonstrated here. The independent control that this system gives over the various depth cues gives it the potential to be a new and useful tool for examining the user's response to different virtual environments.
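For a binary phase-only Fresnel zone plate of focal length f at wavelength λ, the zone boundaries fall at radii r_n = sqrt(n λ f), with successive zones alternating between 0 and π phase. The sketch below generates such a pattern for an SLM; the pixel count, pitch, and wavelength are illustrative values of ours, not parameters from the paper.

```python
import numpy as np

def binary_zone_plate(n_pixels: int, pitch_m: float,
                      wavelength_m: float, focal_m: float) -> np.ndarray:
    """Return an n_pixels x n_pixels array of 0/1 levels (0 or pi phase)."""
    half = n_pixels / 2
    ys, xs = np.mgrid[0:n_pixels, 0:n_pixels]
    r2 = ((xs - half) ** 2 + (ys - half) ** 2) * pitch_m ** 2
    zone_index = np.floor(r2 / (wavelength_m * focal_m)).astype(int)
    return zone_index % 2        # alternate zones get a pi phase shift

# A 512x512, 15 um pitch SLM displaying a 0.5 m lens at 633 nm:
pattern = binary_zone_plate(512, 15e-6, 633e-9, 0.5)
print(pattern.shape, pattern.mean())
```

Stepping focal_m through a sequence of values, frame by frame, is what moves the imaged object screen through the successive depth planes.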
Usefulness of observer-controlled camera angle in telepresence systems depends on the nature of the task: passive perceptual judgments compared to perceptual motor performance
Joerg W. Huber, Ian R. L. Davies
Many telepresence systems have the capacity to 'slave' camera orientation to the observer's head movements. This mimics the changes in the visual field that would be produced by the observer's head movements. In theory this should enhance depth perception, but in practice the effect often appears to be weak. We report a series of experiments that explore the benefit of providing observer-generated motion information (OGMI) across a range of perceptual and perceptual-motor tasks. Experiment 1 found that when OGMI was the sole source of depth information, observers were able to exploit it in a simple depth judgement task. However, Experiment 2 found that it was unimportant that the motion information was generated by the observer; relative motion within the image was sufficient. Experiment 3 used a simple depth adjustment task and found that if subjects first did the task without OGMI, they did not benefit from its subsequent availability, suggesting that use of OGMI is not automatic. Experiments 4 and 5 used tracking tasks and found no gain from making OGMI available relative to static viewing of the video image. Overall, the results confirm that OGMI confers only weak gains in the accuracy of spatial tasks and that the magnitude of the gains is task-dependent.
Stereoscopic Camera Systems
Stereoscopic camera system for live-action and sports productions
Craig Adkins
By all accounts I am a cowboy cameraman. I began my career as a ski bum in the Swiss Alps at the age of 18. Once I had picked up a Super 8 camera and filmed my first skiing sequence, the mold was pretty much set. I have concentrated on shooting live action in extreme environments ever since. I shun the use of tripods, and my experience with artificial lighting is minimal. My technical knowledge of the equipment and medium I use is limited to practical use dictated by weather and terrain conditions. When confronted with a frozen film gate in a raging blizzard, perched atop a 200-foot drop, there are only so many options. This brief personal history prefaces the underlying theme of the camera system I designed and have been using in the field for the past year. I do not claim to be a deep well of information on stereoscopic theory and technology, nor a maverick inventor. Rather, I am someone who saw stereo video and realized it was a significant improvement on existing visual technology. The experience I have gained from building a stereo video camera system is purely empirical, helped by others' shared insights and my own trial and error.
Poster Session
Directional display
Hakan Lennerstad
The directional display contains and shows several images; which particular image is visible depends on the viewing direction. This is achieved by packing information at high density on a surface, by a certain back-illumination technique, and by explicit mathematical formulas that eliminate projection deformations and make it possible to automate the production of directional displays. The display is illuminated but involves no electronic components. A patent is pending for the directional display. Directional dependency of an image can be used in several ways. One is to achieve three-dimensional effects; in contrast to holograms, large size and full color pose no problems. Another application of the technique is to show moving sequences. Yet another is to make a display more directionally independent than conventional displays. It is also possible, and useful in several contexts, to show different text in different directions with the same display. These features can be combined.
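The abstract keeps the actual optics and deformation-correcting formulas proprietary, so the sketch below is only a generic stand-in for the core idea: several images share the surface, and the horizontal viewing angle selects which one is visible. Here the angular range is simply quantized into equal bins, one per stored image; all parameter values are illustrative.

```python
def visible_image(view_angle_deg: float, n_images: int,
                  total_range_deg: float = 60.0) -> int:
    """Index of the image seen from view_angle_deg (0 = head-on), with the
    angular range split evenly among the stored images."""
    half = total_range_deg / 2
    clamped = max(-half, min(half, view_angle_deg))
    bin_width = total_range_deg / n_images
    return min(int((clamped + half) / bin_width), n_images - 1)

# Walking past a six-image display sweeps through the stored images:
for angle in (-25, -5, 0, 10, 28):
    print(angle, visible_image(angle, 6))
```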