Proceedings Volume 4660

Stereoscopic Displays and Virtual Reality Systems IX


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 May 2002
Contents: 15 Sessions, 58 Papers, 0 Presentations
Conference: Electronic Imaging 2002
Volume Number: 4660

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Human Factors in Stereoscopic Imaging
  • Stereoscopic Video
  • Digital Stereoscopic Imaging
  • Integral 3D Imaging
  • Volumetric 3D Imaging
  • Autostereoscopic Displays I
  • Autostereoscopic Displays II
  • Autostereoscopic Displays III
  • Stereoscopic Display Applications
  • Stereoscopic Camera Systems
  • Poster Pop Session
  • Metrics
  • Senses
  • Performance
  • Software and Systems
Human Factors in Stereoscopic Imaging
Effect of overlap rate between a stereoscopic image pair on work performance in VR environment
Kazunori Shidoji, Katsuya Matsunaga, Kazuaki Goshi, et al.
We conducted a tracking experiment in virtual space to investigate the effect of the overlap rate between the right and left images of a stereoscopic pair on work performance. Twelve subjects tracked a target moving through a three-dimensional virtual space with a cursor controlled by a 3-D mouse. In the high-overlap condition the image pair overlapped by 100%; in the low-overlap condition, by 50%. The convergence point of the virtual video cameras was fixed during the experiment. We measured the positional difference between the target and the cursor along the horizontal, vertical, and depth axes. The results showed that: (1) the target-cursor difference was larger in the low-overlap condition than in the high-overlap condition; (2) the difference in depth was larger than those along the horizontal and vertical axes; and (3) the difference in depth arose when the target moved far away from the subjects.
Proposal of a new method for depth accuracy in a virtual world
Shintaro Kawahara, Toshiaki Sugihara, Tsutomu Miyasato, et al.
Stereoscopic displays such as HMDs and lenticular screens suffer from a functional problem: the conflict between convergence and accommodation, which affects depth accuracy. In this paper, we propose a new practical method for evaluating depth accuracy in a virtual world presented on a stereoscopic display, and discuss the results of the evaluation. The method focuses on the convergence-accommodation conflict. To evaluate it, we used an HMD with a function we developed, called accommodative compensation, which can regulate the depth position of the displayed image. The evaluation used a subjective assessment method with moving stimulus images: using the HMD's accommodative compensation function, we showed subjects stimulus images in which the relationship between convergence and accommodation was varied, and quantitatively evaluated the effects of the conflict. The main index of the evaluation was the timing difference of ball-manipulating actions in a ball game; we assume that such timing differences indicate the tolerance of depth perception. The results suggest that the conflict may affect depth accuracy.
Viewing stereoscopic images comfortably: the effects of whole-field vertical disparity
Filippo Speranza, Laurie M. Wilcox
Stereoscopic images, while providing enhanced depth and image quality, can cause moderate discomfort. In this paper, we present the results of two experiments aimed at investigating one possible source of discomfort: whole-field vertical disparities. In both experiments, we asked viewers to rate their comfort level while viewing a 3D feature film in which the left and right images were vertically misaligned. The film was presented on a large theater-type screen. In Experiment 1, the vertical offset was changed randomly on a scene-by-scene basis, resulting in an average vertical disparity of 31 minutes of arc at the closest viewing distance. The results showed that whole-field vertical disparities produced a marginal increase in discomfort that became only slightly more pronounced with time. In Experiment 2, we alternated periods of low, medium, and high whole-field vertical disparity. At the closest distance, the mean vertical disparity was 15, 30, or 62 minutes of arc for the low, medium, and high conditions, respectively. In this experiment, discomfort increased with vertical disparity, but again only marginally, even after prolonged exposure. We conclude that whole-field vertical disparities cannot be a major contributor to the discomfort experienced by observers viewing stereoscopic images.
Stereoscopic Video
Stereoscopic camera design
David J. Montgomery, Christopher K. Jones, James N. Stewart, et al.
It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase, and this new technology will require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films, etc. However, the consumer would also like to see real-world stereoscopic images: pictures of family, holiday snaps, and so on. Such scenery would have wide ranges of depth to accommodate and would also need to cope with moving objects, such as cars and, in particular, other people. Thus, consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper covers an analysis of existing stereoscopic camera designs and shows that they can be categorized into four different types, each with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper goes on to discuss this recommendation and describe its advantages and how it can be realized in practice.
Parallax distribution for ease of viewing in stereoscopic HDTV
Shinji Ide, Hirokazu Yamanoue, Makoto Okui, et al.
In order to identify the conditions that make stereoscopic images easier to view, we analyzed the psychological effects using a stereoscopic HDTV system and examined the relationship between this analysis and the parallax distribution patterns. First, we evaluated the impression given by 3-D pictures of a standard 3-D test chart and past 3-D video programs using several evaluation terms. Two factors were thus extracted, the first related to the sense of presence and the second to ease of viewing. Secondly, we applied principal component analysis to the parallax distribution of the stereoscopic images used in the subjective evaluation tests in order to extract the features of the parallax distribution, and then examined the relationship between the factors and those features. The results indicated that the features of the parallax distribution are strongly related to ease of viewing: for ease of viewing 3-D images, the upper part of the screen should be located further from the viewer with less parallax irregularity, and the entire image should be positioned toward the back.
Stereoscopic DVD creation
Daniel Dupont, John A. Rupkalvis
Stereoscopic DVDs can now be created in many different formats, including alternate-image formats for use with alternate-image viewing devices such as alternate-field and alternate-frame LCD glasses. DVD welcomes all forms of stereoscopic content and provides a dynamic method for presentation as well as distribution. Because of its universal compatibility, there are specific standards and specifications that must be adhered to in preparing content for DVD. Alternate-field presentations are especially vulnerable to the format's compression schemes, but encoders can be manipulated to maintain content integrity. The navigational capabilities of the DVD specification leave a tremendous amount of creative liberty. This freedom led to the development of the zDVD(TM), a DVD disc that allows the viewer to seamlessly switch between watching the program in standard 2D or stereoscopic 3D.
Development of software for editing of stereoscopic 3-D movies
This paper describes the development of software for non-linear editing of stereoscopic 3-D movies. The purpose of the software is to simplify the creation of stereoscopic 3-D movies as well as reduce production costs. The software has the following functions: 1) Separate a field-sequential movie file into right and left movie files. 2) Display right and left movie files on a time base. 3) Adjust horizontal and vertical disparities. 4) Adjust image size and rotation. 5) Correct inverted fields. 6) Measure the theoretical distance of a presented image. 7) Adjust movies created using the parallel recording method. 8) Combine right and left movie files into a field-sequential 3-D movie file. This paper reports the results of the development of the software and discusses its usefulness for editing stereoscopic 3-D movies.
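The first and last functions in this list revolve around field-sequential storage, in which the two eye views occupy alternating scanlines of each frame. A minimal sketch of that split/merge step (the function names and the even/odd convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def split_fields(frame):
    """Separate one field-sequential frame into its two eye views:
    even scanlines go to one eye, odd scanlines to the other.
    Which parity is 'left' depends on the recording convention."""
    return frame[0::2], frame[1::2]

def merge_fields(view_a, view_b):
    """Inverse operation: re-interleave two eye views into one
    field-sequential frame."""
    h, w = view_a.shape
    frame = np.empty((2 * h, w), dtype=view_a.dtype)
    frame[0::2] = view_a
    frame[1::2] = view_b
    return frame
```

Strided slicing makes both directions lossless, which is what lets an editor round-trip between the combined file and the per-eye files.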
Characterizing sources of ghosting in time-sequential stereoscopic video displays
Andrew J. Woods, Stanley S. L. Tan
A common artefact of time-sequential stereoscopic video displays is the presence of some image ghosting or crosstalk between the two eye views. In general this happens because of imperfect shuttering of the Liquid Crystal Shutter (LCS) glasses used, and the afterglow of one image into another due to phosphor persistence. This paper describes a project that has measured and quantified these sources of image ghosting and developed a mathematical model of stereoscopic image ghosting. The primary parameters which have been measured for use in the model are: the spectral response of the red, green and blue phosphors for a wide range of monitors, the phosphor decay rate of same, and the transmission response of a wide range of LCS glasses. The model compares reasonably well with perceived image ghosting. This paper aims to provide the reader with an improved understanding of the mechanisms of stereoscopic image ghosting and to provide guidance in reducing image ghosting in time-sequential stereoscopic displays.
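As a rough illustration of how the two leakage paths combine, consider a toy model (this is not the paper's fitted model; the single-exponential phosphor decay and the example numbers are assumptions):

```python
import math

def phosphor_residual(field_period_s, decay_tau_s):
    """Fraction of a phosphor's light still being emitted after one
    field period, assuming single-exponential decay."""
    return math.exp(-field_period_s / decay_tau_s)

def ghost_fraction(shutter_leakage, field_period_s, decay_tau_s):
    """Crude crosstalk estimate: light reaches the wrong eye either by
    leaking through the nominally closed shutter, or as afterglow still
    being emitted while the other eye's shutter is open.  For small
    values the two contributions simply add."""
    return shutter_leakage + phosphor_residual(field_period_s, decay_tau_s)

# e.g. a 120 Hz field rate, 0.1% shutter leakage, 1 ms phosphor decay:
g = ghost_fraction(0.001, 1 / 120, 1e-3)
```

Even this toy model captures the paper's two levers: faster phosphors and better shutter extinction both reduce ghosting.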
Digital Stereoscopic Imaging
Rapid 2D-to-3D conversion
Philip V. Harman, Julien Flack, Simon Fox, et al.
The conversion of existing 2D images to 3D is proving commercially viable and fulfills the growing need for high-quality stereoscopic images. This approach is particularly effective when creating content for the new generation of autostereoscopic displays that require multiple stereo images. The dominant technique for such content conversion is to develop a depth map for each frame of 2D material. The use of a depth map as part of the 2D-to-3D conversion process has a number of desirable characteristics: 1. The resolution of the depth map may be lower than that of the associated 2D image. 2. It can be highly compressed. 3. 2D compatibility is maintained. 4. Real-time generation of stereo, or multiple stereo pairs, is possible. The main disadvantage has been the laborious nature of the manual conversion techniques used to create depth maps from existing 2D images, which results in a slow and costly process. An alternative, highly productive technique has been developed based upon the use of machine learning algorithms (MLAs). This paper describes the application of MLAs to the generation of depth maps and presents the results of the commercial application of this approach.
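The core of generating a stereo view from a 2D image plus depth map can be sketched as a depth-proportional horizontal pixel shift. This is a simplified illustration of depth-image-based rendering under our own conventions, not the commercial pipeline described above; hole in-painting and sub-pixel accuracy are omitted:

```python
import numpy as np

def render_view(image, depth, max_disparity):
    """Shift each pixel horizontally in proportion to its depth.

    image: (H, W) array; depth: (H, W) values in [0, 1] with 1 = nearest;
    max_disparity: pixel shift applied at the nearest depth.
    Disoccluded pixels are left as zeros here; real converters in-paint them.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(depth[y, x] * max_disparity))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

Because the shift is computed per pixel from the depth value, the same depth map can drive any number of views, which is why the technique suits multi-view autostereoscopic displays.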
Adaptive disparity estimation and intermediate view reconstruction for multiview 3D imaging system
Kyung-Hoon Bae, Song-Taek Lim, Eun-Soo Kim
In this paper, a new 3D intermediate view reconstruction technique using an adaptive disparity estimation algorithm is proposed, and its performance is analyzed by comparison with conventional disparity estimation algorithms. In the proposed algorithm, in order to synthesize the intermediate views effectively, the matching window size is selected according to the feature value of the input stereo image. By doing this, the probability of disparity mismatches can be reduced through coarse matching in similar areas and fine matching in areas with large feature values, such as the edge regions of objects. Experimental results show that the proposed algorithm improves the PSNR of the reconstructed intermediate views by about 3-4 dB on average over the conventional algorithms.
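The PSNR figure quoted above is the standard fidelity measure for reconstructed views; a minimal reference implementation (a peak value of 255 for 8-bit images is assumed):

```python
import numpy as np

def psnr_db(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth view and
    its reconstruction; higher is better, and +3 dB halves the MSE."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Since the scale is logarithmic, the reported 3-4 dB gain corresponds to roughly halving the mean squared reconstruction error.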
Cross-switching in asymmetrical coding for stereoscopic video
Wa James Tam, Lew B. Stelmach, Filippo Speranza, et al.
Asymmetrical coding is a technique that can be used to reduce the bandwidth required for transmission and storage of stereoscopic video images. This technique is based on observations that a high level of perceived stereoscopic image quality can be maintained when the quality of the video stream to one eye is reduced. To address issues surrounding eye dominance and viewing comfort, we propose to balance the inputs to the two eyes by cross-switching the image quality in the two streams over time. Here, we report two experiments on the visibility of cross-switches, for video sequences and random-dot stereograms. In both experiments, we manipulated a) the degree of asymmetry in quality of the video streams by varying image blur, and b) the timing of the cross-switch (either at a scene cut or during a continuous scene). The viewers' task was to indicate whether the first or the second of a pair of stereoscopic presentations contained a cross-switch. We found that the cross-switch was masked by a scene cut, and that ease of detection depended on the degree of asymmetrical blur. We conclude that asymmetrical coding combined with cross-switching at scene cuts is a practical bandwidth-reduction technique for stereoscopic video.
Real-time view interpolation system for a super multiview 3D display: processing implementation and evaluation
A 3D display using super-high-density multi-view images is expected to reproduce a natural stereoscopic view. In such a super multi-view display system, viewpoints are sampled at an interval narrower than the diameter of the pupil of the eye. Because even a single eye then receives parallax, the system has the potential to draw the eye's accommodation to the object image itself. This research aims at a real-time view interpolation system for the super multi-view 3D display, and we have developed an evaluation system. Multi-view images of the object are captured by a multi-view camera using convergent capturing in order to prevent resolution degradation. The main data processing consists of view interpolation and rectification. View interpolation is implemented on a high-speed image processing board using DSP chips or SIMD parallel processor chips. For the view interpolation algorithm, we adopted an adaptive filtering technique on epipolar-plane images (EPIs): multi-view images are interpolated adaptively by the most suitable filters on the EPIs. Rectification is a preprocessing step that converts the multi-view images captured convergently into the images that would have been captured in parallel. Using rectified multi-view images, view interpolation can be restricted to single horizontal lines, improving processing speed.
Integral 3D Imaging
Full parallax image generation
Jung-Young Son, Vladmir V. Saveljev, Yong-Jin Choi, et al.
In designing an autostereoscopic imaging system, the parameters related to the viewing zone are the first to be determined. The viewing zone is the spatial region from which viewers can perceive a stereoscopic image through the image displayed on the screen of the system. To define the viewing zone parameters, such as its width, distance from the image screen, depth range, and shape, we use an optical configuration composed of an image mask aligned on top of a point-light-source array. This configuration shows that a lenticular or parallax-barrier plate plays the same role as the array, and that the obtainable image depth is related to the viewing zone parameters.
Pixels grouping and shadow cache for faster integral 3D ray tracing
Osama Youssef, Amar Aggoun, Wayne H. Wolf, et al.
This paper presents, for the first time, a theory for obtaining the optimum pixel grouping to improve coherence and the shadow cache in integral 3D ray tracing, in order to reduce execution time. A theoretical study of the number of shadow cache hits with respect to the properties of the lenses and the size and location of the shadow is presented, with an analysis of three different styles of pixel grouping used to obtain the optimum grouping. The first style traces rows of pixels in the horizontal direction, the second traces corresponding pixels in adjacent lenses in the horizontal direction, and the third traces columns of pixels in the vertical direction. The optimum grouping is a combination of all three, dependent upon the number of cache hits in each. Experimental results validate the theory, and tests on benchmark scenes show that up to a 37% improvement in execution time can be achieved by proper pixel grouping.
Depth extraction from unidirectional integral image using a modified multibaseline technique
ChunHong Wu, Amar Aggoun, Malcolm McCormick, et al.
Integral imaging is a technique capable of displaying images with continuous parallax in full natural color. This paper presents a modified multi-baseline method for extracting depth information from unidirectional integral images. The method involves first extracting sub-images from the integral image. A sub-image is constructed by extracting one pixel from each micro-lens, rather than a macro-block of pixels corresponding to a micro-lens unit. A new mathematical expression giving the relationship between object depth and the corresponding sub-image pair displacement is derived by geometrically analyzing the three-dimensional image recording process. A correlation-based matching technique is used to find the disparity between two sub-images. In order to improve the disparity analysis, a modified multi-baseline technique is adopted, in which the baseline is defined as the distance between two corresponding pixels in different sub-images. The effectiveness of this technique in removing the mismatching caused by similar patterns in object scenes has been proven by analysis and experimental results. The developed depth extraction method is validated and applied to both photographic and computer-generated unidirectional integral images. The depth estimation gives a precise description of object thickness, with an error of less than 1.0% for the photographic image in the example.
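The depth-displacement expression derived in the paper is specific to the integral-imaging geometry, but its backbone is ordinary triangulation, and the multi-baseline trick rests on the fact that disparity scales linearly with baseline at a fixed depth. A generic sketch (the symbols and units are illustrative, not the paper's notation):

```python
def depth_from_disparity(focal_len, baseline, disparity):
    """Classic triangulation: depth is inversely proportional to the
    displacement measured between a sub-image pair."""
    return focal_len * baseline / disparity

def expected_disparity(depth, focal_len, baseline):
    """Key multi-baseline property: disparity / baseline is constant for
    a given depth, so evidence from several sub-image pairs can be
    accumulated against one candidate depth, rejecting false matches
    caused by repetitive patterns."""
    return focal_len * baseline / depth
```

Doubling the baseline doubles the expected disparity, which is exactly what lets mismatches from similar patterns (which do not scale this way) be voted out.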
Viewing-angle-enhanced integral imaging using lens switching
Integral photography, also called integral imaging (II), is an attractive autostereoscopic display method thanks to its many advantages, such as continuous viewpoints and no need for viewing aids. Despite these advantages, its narrow viewing angle has been a bottleneck. In this paper, we propose a method to enhance the viewing angle of II by opening and shutting the elemental lenses sequentially. We demonstrate the idea using a mask with on/off patterns: it has vertical or horizontal apertures in an array form whose interval matches that of the lenses in the II. Both theoretical discussion and experimental results are presented.
Objective quality measurement of integral 3D images
Matthew C. Forman, Neil A. Davies, Malcolm McCormick
At De Montfort University, the imaging technologies group has developed an integral imaging system capable of real-time capture and replay. The system has many advantages compared with other 3D capture and display techniques; however, one issue that has not been adequately addressed is the measurement of the fidelity of replayed 3D images where some distortion has occurred. This paper presents a method for producing a viewing-angle-dependent PSNR metric based on extraction of optical model data as conventional images. The technique produces image quality measurements that are more relevant to the volumetric spatial content of an integral image than a conventional fidelity metric applied to the raw, optically encoded spatial distribution. The previous single metric and the new angle-dependent metric are compared when used to assess the performance of a 3D-DCT-based compression scheme, and the utility of the extra information provided by the angle-dependent PSNR is considered.
Volumetric 3D Imaging
Volumetric three-dimensional display using projection CRT
Jang Doo Lee, Hyung Wook Jang, Hui Nam, et al.
Volumetric 3D display systems are expected to reproduce natural views without eye fatigue. We have developed a prototype monochromatic volumetric 3D display system. It consists of a flat screen rotating at 1200 rpm (rotations per minute) and a synchronized projection engine. The projection engine can project 300 image slices per rotation of the screen, so that the human eye perceives a continuous, real three-dimensional image through persistence of vision. For this prototype, the two-dimensional image engine is a 7-inch green projection CRT of the kind used in projection TVs. In order to rapidly scan the 3D display space, vector (random) scanning rather than normal raster scanning is used. We specially designed a lenticular-type screen shaped to achieve constant gain over a 180-degree horizontal viewing angle. Fresnel lenses and a projection lens system collimate and redirect the light rays, and off-axis optics hide the mirrors from the viewer's viewing zone. To obtain a stable volumetric three-dimensional image, the projected two-dimensional images and the positions of the screen must be synchronized. We devised a new algorithm that reconstructs the contour data of image slices from a real three-dimensional shape. Vibration and noise are minimized. Brightness is not a problem, and resolution high enough for relatively simple images can be achieved. A full-color, higher-resolution version is under development.
Live 3-D video in volumetric display
Jung-Young Son, Serguei A. Shestak, Vladmir P. Huschyn, et al.
A live volumetric image is created from a plane image taken by a CCD camera, with the aid of depth-wise segmented images from a high-speed B/W camera. The segmented images are obtained by a rotating screen placed in the image volume of a large-aperture objective. By processing the output signal from the B/W camera, the boundary of each segmented image is extracted. This boundary information is used to control the frame size of the CCD camera image, which is projected through a CRT projector onto another rotating screen to generate the volumetric image. The rotating screen displays images at a frame rate of 30 Hz, and each volumetric image consists of 8 layers. The screen takes the form of a rotating cylinder with 8 slanted leaves spaced equally along its circumference.
FELIX 3D display: an interactive tool for volumetric imaging
Knut Langhans, Detlef Bahr, Daniel Bezecny, et al.
The FELIX 3D display belongs to the class of volumetric displays using the swept-volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric, or polygon-mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate several identical or different projection units in parallel and to use screens appropriate to the specific purpose. The FELIX 3D display is a compact, light, extensible, and easy-to-transport system, consisting mainly of inexpensive standard off-the-shelf components for easy implementation. This setup makes it a powerful and flexible tool that keeps pace with the rapid technological progress of today. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, and computer-aided design, as well as scientific data visualization.
Autostereoscopic Displays I
Development of 3D display system by a fanlike array of projection optics
Toshio Honda, Daisuke Nagai, Masaya Shimomatsu
We have been developing a 3-D display method satisfying the Super Multi-View condition, which produces natural 3D images by displaying multi-perspective images with a fine perspective pitch narrower than the pupil diameter of the viewer's eyes. To realize this method, we propose a new 3D display system which uses many projection optics and a concave mirror.
Stereodisplay with neural network image processing
A. Loukianitsa, Andrey N. Putilin
A system for visualizing multi-aspect images based on a multilayer LCD with neural network image processing was proposed and investigated experimentally. The optical scheme consists of at least two LCD screens positioned sequentially one behind another, and a computer system that calculates the information to be displayed on the LCDs. In contrast to conventional parallax barrier schemes, the information about the aspects of the stereo image is distributed across both LCDs, so the image quality can be much higher than in conventional autostereoscopic displays.
Autostereoscopic Displays II
Multiviewpoint autostereoscopic displays from 4D-Vision GmbH
Alexander Schmidt, Armin Grasnick
4D-Vision has developed a new patented technology for affordable autostereoscopic displays at almost any size. The basic concept of these screens is a wavelength-selective filter array mounted in front of a flat panel such as a TFT LCD or plasma display. Due to this filter, the subpixels of an image are spread out into different directions depending on their wavelength. Images based on the 4D-Vision technology contain eight perspectives of a scene. Parts of these views are provided to the observers, creating a plurality of correct stereo pairs in front of the screen. Multiple observers thus see very good images at the same time, and they can even move in front of the display without losing the 3D impression.
Design and fabrication of a micromirror array for autostereoscopic 3D displays
Jun Yan, Stephen T. Kowel, Hyoung J. Cho, et al.
We designed and fabricated the first, to the best of our knowledge, micromirror array for autostereoscopic 3D display systems. We conducted the optical and Micro-Electrical- Mechanical-System (MEMS) design concurrently, and fabricated several 20x20 micromirror arrays, with micromirror size of 460x460 microns. Both electrostatic and magnetic actuation methods were used to achieve deflection angles of +/- 0.8 degrees. We used these micromirror arrays with backlit transparencies to build a 2-view (left and right) autostereoscopic 3-D display system.
New autostereoscopic display technology: the SynthaGram
StereoGraphics Corporation has introduced a new flat-panel autostereoscopic display, the SynthaGram. It produces bright, clear, and satisfying three-dimensional images that may be viewed from a substantial angle of view by many observers. A progression of perspective views, like that used by the parallax panoramagram, is created either by computer or by photographic means. Each view is sampled at the sub-pixel level and mapped by means of a process called Interzigging. The resultant Interzigged image is a sub-pixel map of spatial information. The map is displayed on a flat-panel screen, in the present case a liquid crystal display. A lenticular screen overlays the flat-panel display, but the direction of the lenticule boundaries is angled to the vertical. This technology, we believe, is the basis for electronic autostereoscopic display solutions for many applications.
Reduction of the thickness of lenticular stereoscopic display using full-color LED panel
Hirotsugu Yamamoto, Syuji Muguruma, Yoshio Hayasaki, et al.
The goal of our research is to realize large stereoscopic displays for the general public in the open air. This goal imposes certain design constraints: it is preferable to view stereoscopic images without any special glasses, multiple viewing areas are necessary for crowds of viewers, and the thickness of a large stereoscopic display is a problem for installation. We have developed a stereoscopic full-color LED display using a parallax barrier and demonstrated multiple viewing areas. However, the thickness of that stereoscopic display, which is built from an LED panel with an 8-mm pixel pitch, exceeds 2 m when the design viewing distance is 20 m. In order to reduce the thickness of the stereoscopic LED display, we propose the use of dual lenticular sheets. The first lenticular sheet performs multiple imaging, which allows multiple viewing areas without a diffusion screen. The second lenticular sheet separates the raster images to the right- and left-eye positions. The thickness, defined as the distance between the LED panel and the second lenticular sheet, can be reduced to below 50 cm for a viewing distance of 20 m. Optimal parameters for reducing the thickness are discussed.
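The quoted thicknesses follow from similar triangles: the gap between the panel and the view-separating optics scales with viewing distance and pixel pitch. A back-of-the-envelope check (a 65 mm interocular distance is assumed, and the exact barrier formula's small correction term is omitted):

```python
def panel_to_optics_gap(viewing_distance_m, pixel_pitch_m, eye_separation_m=0.065):
    """Approximate gap needed so adjacent pixel columns are steered to
    the two eyes: gap / pitch = distance / eye separation."""
    return viewing_distance_m * pixel_pitch_m / eye_separation_m

# An 8-mm LED pitch viewed from 20 m needs a gap of roughly 2.5 m,
# consistent with the 'over 2 m' thickness reported for the barrier design.
gap = panel_to_optics_gap(20.0, 0.008)
```

The same relation shows why the dual-lenticular design helps: once the first sheet forms demagnified intermediate images, the effective pitch seen by the second sheet is smaller, so the required gap shrinks.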
Autostereoscopic field-sequential display with full freedom of movement or Let the display wear the shutter glasses!
Yosh Mantinband, Hillel Goldberg, Ilan Kleinberger, et al.
This novel autostereoscopic technology can be used with any light-emitting display, such as a CRT, and is intended for single-user applications. Its characteristics include full resolution, full freedom of movement with no mechanical latency, and compatibility with existing stereoscopic media. The hardware is contained within a relatively thin layer, which can be produced as a stand-alone product to be placed in front of an existing screen, or built into the front of a standard display device at assembly. This layer allows us to time-share the display between the two eyes (temporal multiplexing); each image of the stereo pair is shown at the full native resolution of the display. By using self-contained electronic controls to dynamically adjust the optical geometry within the layer, we control what each eye sees at any moment, allowing full freedom of movement with no mechanical latency. The same information about viewer position can be used to provide a natural look-around effect. In summary, we have created a field-sequential stereo display functionally identical to that provided by LCS glasses, but without the glasses. The system is compatible with all existing field-sequential stereoscopic media and software.
Autostereoscopic Displays III
Analysis of the viewing zone of multiview autostereoscopic displays
The viewing zone of a multi-view autostereoscopic display can be shown to be completely determined by four parameters: the width of the screen, the optimal distance of the viewer from the screen, the width over which an image can be seen across the whole screen at this optimal distance (the eye box width), and the number of views. A multi-view display's viewing zone can thus be completely described without reference to the internal implementation of the device. These results can be used to determine what can be seen from any position in front of the display. This paper presents a summary of the equations derived in an earlier paper, which allow us to analyze an autostereoscopic display as specified by the above parameters. We build on this work by using the derived equations to analyze the configurations of the extant models of the Cambridge autostereoscopic display: 10" 8- and 16-view, 25" 28-view, and 50" 15-view displays, and an experimental 25" 7-view display.
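One small consequence of the four-parameter description can be sketched directly (this is an illustration under the stated parameters, not the paper's derivation): at the optimal distance the eye box divides evenly among the views, and a viewer is guaranteed a stereo pair only when one view slice is no wider than the interocular distance.

```python
def view_slice_width(eye_box_width_m, num_views):
    """Lateral width of one view at the optimal viewing distance."""
    return eye_box_width_m / num_views

def guaranteed_stereo(eye_box_width_m, num_views, interocular_m=0.065):
    """True if the two eyes can never sit inside the same view slice."""
    return view_slice_width(eye_box_width_m, num_views) <= interocular_m
```

Note that only the eye box width and view count enter; the internal optics of the display are irrelevant, which is the paper's central point.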
Autostereoscopic display with eye tracking
Takao Tomono, Kyung Hoon, Yong Soo Ha, et al.
An auto-stereoscopic 21-inch display with eye tracking, a wide viewing zone, and a bright image was fabricated. The display image is projected onto the retina through several optical components. We designed the optical system for a wider viewing zone using an inverse ray-trace method. The viewing zone of the first model is 155mm (theoretical value: 161mm). We could widen the viewing zone by controlling the paraxial radius of curvature of the spherical mirror, the distance between lenses, and so on. The viewing zone of the second model is 208mm. We used two spherical mirrors to obtain twice the brightness. We applied an eye-tracking system to the display; eye recognition is based on a neural network card using ZICS technology. We measured the viewing zone of the fabricated display based on the illumination area: it was 206mm, close to the theoretical value, and the brightness was indeed doubled. 3D images could be observed according to viewer position without headgear.
Development of a color 3D display visible to plural viewers at the same time without special glasses by using a ray-regenerating method
Goro Hamagishi, Takahisa Ando, Masahiro Higashino, et al.
We have developed several new auto-stereoscopic 3D displays adopting a ray-regenerating method, originally invented at Osaka University in 1997. We adopted this method with an LCD. The display has a very simple construction: it consists of an LC panel with a very large number of pixels and many small light sources positioned behind the panel. We have examined the following new technologies: 1) optimum design of the optical system; 2) a construction suitable for realizing a very large number of pixels; 3) a highly bright backlight system with an optical fiber array to compensate for the low lighting efficiency. 3D displays having a wide viewing area and visible to multiple viewers were realized, but the cross-talk images appeared stronger than we expected. By changing the construction of the system to reduce the diffusing factors of the generated rays, the cross-talk images were reduced dramatically. Within the limitation of the pixel count of the LCD, it is desirable to increase the number of pinholes to realize a realistic 3D image. This research formed a link in the chain of the national project by NEDO (New Energy and Industrial Technology Development Organization) in Japan.
Stereoscopic Display Applications
icon_mobile_dropdown
Virtual view generation of natural panorama scenes by setting representation
Kunio Yamada, Kenji Mochizuki, Takeshi Naemura, et al.
This paper proposes a technique to generate virtual views of a natural panorama scene. The scene is captured by an original 3-camera system. The images are stitched into a stereo panorama and the depth is estimated. The texture panorama is segmented into regions, each of which can be approximated as a plane. The planar parameter set of each region, used for the setting representation, is calculated from the depth data. According to this representation, virtual views are generated using the center panorama texture, with the left and right panoramas used for occlusion compensation.
Visualization in aerospace research with a large wall display system
Yuichi Matsuo
The National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6x1.5-meter (15x5-foot) rear projection screen with 3 BARCO 812 high-resolution CRT projectors. We adopted the 3-gun CRT projectors for their support for stereoscopic viewing, ease of color/luminosity matching, and accuracy of edge-blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64GBytes memory, a 1.5TBytes FC RAID disk, and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced applications available in a multi-projector environment such as AVS/MPE, EnSight Gold, and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help us design improved configurations of aerospace vehicles and analyze their aerodynamic performance. These days we also use the system for various collaborations among researchers.
Stereoscopic Camera Systems
icon_mobile_dropdown
Multiple-view stereoscopic line-scan imaging
J. Paul Owain Evans, Hock Woon Hon
A novel multiple-view line-scan imaging technique that can be applied to transmission x-ray imaging as well as reflected-light cameras is presented. In either case an area-array image sensor is treated as a contiguous set of precisely arranged line-scan devices utilizing a single perspective center. In the case of reflected light the perspective center is the nodal point of a lens, whilst in the x-ray case it is the focal spot of an x-ray source. The line-scan images are accumulated in digital memory whilst the object under inspection is linearly translated through the field of view of the camera. In this way a number of perspective images, typically 6 to 16, are produced. The 3D information inherent in the perspective views can be visualized as a smooth object rotation or as a dynamic binocular stereoscopic sequence of views.
Development of an electro-optical 3D adapter for stereoscopic video recording
We describe the development and evaluation of an electro-optical 3D adapter for recording stereoscopic 3D images with a standard video camcorder. The adapter uses a combination of liquid crystal shutters and a half prism to record the right and left images in alternate fields of an NTSC signal. The purpose of this study was to develop a simple and usable 3D recording system. We investigated the usability of a conventional-model 3D adapter and examined solutions to the problems we found. This adapter has the following characteristics: 1) The 3D recordings are made using the parallel method. 2) The frame of the adapter does not obstruct light in any part of the images. 3) A correcting lens is used in close-up recordings to equalize the sizes of the right and left images. 4) The vertical disparity of each image is easy to adjust. 5) The base length can be adjusted between 65mm and 90mm.
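On playback, a field-sequential recording of this kind must be separated back into the two eye views. A minimal sketch of that field separation (hypothetical code; which field carries which eye depends on the adapter's shutter phase, assumed here to be even rows = left):

```python
def split_fields(frame):
    """Split an interlaced frame (a list of scanline rows) into its
    two fields.

    In a field-sequential 3D recording, one field carries the left
    image and the other the right. The even/left assignment below is
    an assumption for illustration, not the adapter's documented
    convention.
    """
    left = frame[0::2]    # even-numbered rows (field 1)
    right = frame[1::2]   # odd-numbered rows (field 2)
    return left, right
```

Each eye view then has half the vertical resolution of the full frame, which is the usual trade-off of field-sequential adapters.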
Poster Pop Session
icon_mobile_dropdown
Do observers exploit binocular disparity information in motor tasks within dynamic telepresence environments?
Mark F. Bradshaw, K. M. Elliott, Simon J. Watt, et al.
Increasingly, binocular disparity has become commonplace in telepresence systems despite the additional cost of its provision. Experiments comparing performance under monocular and binocular viewing are often cited as justification for its use. Here we question whether this experimental comparison is appropriate, and provide an important set of data comparing performance on a motor task under binocular, monocular, and bi-ocular (where both eyes receive the same view) conditions. Binocular cues were found to be particularly important in the control of the transport component. In the binocular conditions the scaling of peak velocity with object distance was greater than in the other conditions, and in the bi-ocular condition, where the binocular distance cues conflicted with pictorial information, no scaling was evident. For the grasp component, even in the presence of conflicting size and depth information, grip scaling remained equivalent in all conditions. For the transport component at least, binocular cues appear important, and the decrease in performance observed in behavioral studies under monocular conditions is not attributable to a lack of information in one eye but rather to the lack of binocular depth cues. Therefore, in the design of telepresence systems to be used in telemanipulation tasks, the use of stereoscopic display technology seems justified.
Ronchi retarder gratings as polarization modulators
Mauricio Ortiz-Gutierrez, Arturo Olivares-Perez, Mario Perez-Cortes, et al.
We present a Ronchi grating made of cellophane; this device has the particularity of modulating the polarization state of an arbitrarily polarized source. The grating period can be designed to obtain two perpendicular linear polarization states (horizontal and vertical), or two circular states (right and left), depending on whether the source has a linear or circular polarization state. With this grating, we can modulate or demodulate images for stereoscopic applications.
Autostereoscopic display with real-image virtual screen and light filters
Hideki Kakeya, Yoshiki Arakawa
A reality-enhanced autostereoscopic display system is presented. In this system, viewers who do not wear any special glasses can perceive 3D images within their hands' reach with little sense of incongruity. The key feature of this system is the combination of real-image generation and parallax presentation. A real image of the display in the back is generated in the air using Fresnel lenses, which makes it possible to narrow the artificial parallax and display 3D objects in the workspace near the viewer without interfering with the viewer's motion. Smaller artificial parallax leads to 3D perception with more reality and less eyestrain than conventional 3D displays. For parallax presentation, a mobile filter that plays the role of stereoscopic goggles is set between the display in the back and the Fresnel lenses and is controlled to follow the motion of the viewer, so that it keeps presenting different images to each eye. To present an undistorted 3D space, the optical path including refraction by the Fresnel lenses is calculated and the image on the screen is updated accordingly. Real-time undistorted image presentation to unrestricted eye positions is realized using a texture mapping technique.
Adaptive hierarchical stereo matching using object segmentation and window warping
In this paper, we propose an adaptive stereo matching algorithm to treat matching problems in regions with projective distortion. Since the disparities in a projectively distorted region cannot be estimated with a fixed-size block matching algorithm, an adaptive window warping method with a hierarchical matching process is used to compensate for perspective distortions. In addition, a probability model based on the statistical distribution of matching errors and constraint functions is adopted to handle the uncertainty of matching points. Since the proposed window warping process is based on a statistical window warping step with reliability estimation of matching points, no relaxation process is needed. As a result, overall processing time is reduced compared with conventional stereo matching algorithms that include a relaxation step, and improved matching results are obtained. Experimental results on both disparity maps and 3D model views show that the proposed matching algorithm is effective for various images, even when the image has projectively distorted regions and repeated patterns.
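For reference, the fixed-size block matching that such adaptive methods improve upon can be sketched as a sum-of-absolute-differences (SAD) search along the scanline. This is a baseline illustration only; the paper's window warping, hierarchical matching, and probability model are not shown:

```python
def block_match(left, right, y, x, block=3, max_disp=8):
    """Estimate the disparity at (y, x) in the left image by a
    sum-of-absolute-differences (SAD) search along the scanline.

    left/right are 2D lists of grayscale values; block is the
    half-width of the (fixed-size) square matching window. A left
    pixel at column lx is compared against right pixels at lx - d.
    """
    h, w = len(left), len(left[0])
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        cost = 0
        for dy in range(-block, block + 1):
            for dx in range(-block, block + 1):
                ly, lx = y + dy, x + dx
                rx = lx - d
                if 0 <= ly < h and 0 <= lx < w and 0 <= rx < w:
                    cost += abs(left[ly][lx] - right[ly][rx])
                else:
                    cost += 255   # penalize out-of-bounds samples
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Because the window is rigid, this baseline fails exactly where the paper targets its improvement: surfaces slanted in depth, where the correct match is a warped, not translated, copy of the window.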
Autostereoscopic Displays I
icon_mobile_dropdown
Eye tracking for autostereoscopic displays using web cams
Markus Andiel, Siegbert Hentschke, Thorsten Elle, et al.
Tracking the observer's eye positions in front of a 3D display is necessary to ensure a correct autostereoscopic view of position-dependent 3D images. We present a new real-time eye tracking system using two commercially available web cams to detect the observer's eyes in the x, y, and z directions. The entire system can be installed on a standard PC together with an autostereoscopic display. In a first step, eye candidates are detected by fast pattern recognition; the color information from the web cams provides additional cues for finding eye candidates. In a second step, world coordinates are calculated from the eye pair nearest to the monitor. Signal transmission and processing delays are compensated by an adaptive predictor. Thus the entire system is cheaper, smaller in size, and can be installed on a standard PC. In addition, the tracking software can also support other applications, e.g. setting up a teleconference system in conjunction with an autostereoscopic monitor.
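The predictor idea, compensating a known transmission and processing delay by extrapolating the eye position forward in time, can be illustrated with a simple constant-velocity extrapolator. This is a hypothetical sketch; the paper's adaptive predictor is likely more elaborate:

```python
def predict(positions, timestamps, latency):
    """Extrapolate the next eye position to compensate a known
    system latency, assuming constant velocity over the last two
    samples.

    positions/timestamps are lists of recent 1D measurements
    (newest last); latency is in the same time unit as timestamps.
    """
    if len(positions) < 2:
        return positions[-1]          # not enough history to extrapolate
    dt = timestamps[-1] - timestamps[-2]
    velocity = (positions[-1] - positions[-2]) / dt
    return positions[-1] + velocity * latency
```

Applied per axis, such a predictor keeps the rendered viewing zone aligned with where the eyes will be when the frame is actually displayed.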
Poster Pop Session
icon_mobile_dropdown
3D display system using monocular multiview displays
Kunio Sakamoto, Kazuki Saruta, Kazutoki Takeda
A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control, and have developed a low-priced real-time 3D display for building such systems. We developed a 3D HMD system using monocular multi-view displays. The 3D display technique of this monocular multi-view display is based on the super multi-view concept proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to present a picture to each of the left and right eyes. The left and right images form a stereoscopic pair, so stereoscopic 3D images are observed.
Metrics
icon_mobile_dropdown
Shape and motion measurement of time-varying objects based on spatio-temporal image analysis for multimedia applications
Michal Emanuel Pawlowski, Malgorzata Kujawinska
The basic methodologies used in animation are presented and their most significant problems connected with combining real and virtual worlds are identified. An optical method of shape and movement determination is proposed for fast virtual object generation. A combination of the fringe projection technique with photogrammetry is used to calculate the shape and position of the object points. During the measurement, the object surface is illuminated by a DMD projector and observed by two spatially separated CCD detectors. The time-varying fringe pattern observed on the object surface is analyzed by a spatial-carrier phase-shifting algorithm to determine the actual shape. The analysis of the fiducial points' positions on the two CCD detectors during the measurement provides information about their 3D coordinates within the measurement volume. The combined information about the actual object shape and its position in time (as a rigid body motion) during the measurement enables generation of a virtual model of the object together with a description of its movement. The concept described above is tested experimentally, and exemplary results of measurements of human body parts are presented, together with a brief error analysis. Further work to implement this technique is discussed.
Interactive stereo electron microscopy enhanced with virtual reality
E. Wes Bethel, S. Jacob Bastacky, Ken Schwartz
An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a protractor and a caliper. The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron-diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine the surface tension of the air/liquid interface. This approach extends the techniques of traditional surface science to a microscopic level by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of the cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach, which took angle measurements of objects perceived in stereo image pairs using a virtual protractor, is extended in this paper to include distance measurements and to use a unified view model. The unified view model is derived from microscope-specific parameters, such as focal length, visible area, and magnification, and ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution: using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the measurement system. After calibration, the system is used to take distance and angle measurements of clinical specimens.
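Any such measurement ultimately rests on triangulating depth from the parallax between the two images. A minimal sketch of the underlying relation (standard rectified pinhole stereo, Z = f·B/d; the paper's microscope-specific unified view model replaces these simple parameters):

```python
def depth_from_disparity(focal_px, baseline, disparity_px):
    """Depth of a point from its horizontal disparity in a rectified
    stereo pair: Z = f * B / d.

    focal_px: focal length expressed in pixels.
    baseline: separation between the two viewpoints (same unit as
              the returned depth).
    disparity_px: horizontal shift of the point between images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline / disparity_px
```

With depth known for each endpoint of a virtual caliper, distances and angles off the projection plane follow from ordinary 3D geometry.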
Senses
icon_mobile_dropdown
Virtual haptic exploratory visualization of line graphs and charts
Jonathan C. Roberts, Keith M. Franklin, Jonathan Cullinane
This paper describes ongoing research investigating how visualizations, especially line graphs and charts, may be represented by haptics, both to convey the structure and the values associated with the graphical realization. Much of the current research has focused mainly on the structure of the line graph; some more recent work has used sound to depict the value of the curve. There has been limited work on multiple curves and more complex charts, with problems occurring at the crossover points of (say) two curves. We use the PHANToM haptic interface to feel objects within the virtual world. Our investigations focus on a three-stage methodology: a) unguided exploration, where the user may wander and explore the haptic visualization in their own time; b) constrained navigation, where the user's point of interest is constrained to a particular path, but the user can still explore within these constraints; and c) tours, where the user is completely guided round a predefined path.
Is audio useful in immersive visualization?
In this article I present results from localization experiments in a virtual environment. I define common tasks (orientation, localization, and navigation) in immersive visualization and examine them with user tests. Two localization experiments have been carried out. In the first, localization accuracy was significantly better (p<<0.01 in ANOVA) with loudspeaker reproduction than with headphone reproduction (non-individualized HRTFs). The second experiment indicated that localization accuracy depends on the signal (p<<0.01 in ANOVA). Although the absolute lower limit for auditory localization accuracy in front is one degree of azimuth, the reality is much worse; for example, the screens and room reverberation deteriorate loudspeaker reproduction accuracy. Current results suggest that at least in some tasks audio is a useful addition to immersive visualization.
Small-scale tactile graphics for virtual reality systems
John W. Roberts, Oliver T. Slattery, Brett Swope, et al.
As virtual reality technology moves forward, there is a need to provide the user with options for greater realism and closer engagement of the human senses. Haptic systems use force feedback to create a large-scale sensation of physical interaction in a virtual environment. Further refinement can be achieved by using tactile graphics to reproduce a detailed sense of touch. For example, a haptic system might create the sensation of the weight of a virtual orange that the user picks up, and the sensation of pressure on the fingers as the user squeezes the orange; a tactile graphic system could create the texture of the orange on the user's fingertips. In the real world, a detailed sense of touch plays a large part in picking up and manipulating small objects. Our team is working to develop technology that can drive a high-density fingertip array of tactile stimulators at a refresh rate rapid enough to produce a realistic sense of touch. To meet the project criteria, the mechanism must be much lower cost than existing technologies, and must be sufficiently lightweight and compact to permit portable use and to enable installation of the stimulator array in the fingertip of a tactile glove. The primary intended applications for this technology are accessibility for the blind and visually impaired, teleoperation, and virtual reality systems.
Performance
icon_mobile_dropdown
Combining a multithreaded scene graph system with a tiled display environment
E. Wes Bethel, Randall J. Frank, J. Dean Brederson
This paper highlights the technical challenges of creating an application that combines a multithreaded scene graph system for rendering with a software environment for management of tiled display environments. Scene graph systems simplify and streamline graphics applications by providing data management and rendering services. Software for tiled display environments simplifies the use of multiple displays by performing such tasks as opening windows on displays, gathering and processing input device events, and orchestrating the execution of application rendering code. We explore technical issues in the context of an application that integrates both software tools, and formulate suggestions for the future development of such systems.
Object-oriented framework for rapid game prototyping
Alexandre Passos, Richard P. Simpson
Small game development groups and companies face two important challenges in today's economy: creating a good game prototype as a showcase for game publishers, and meeting time-to-market deadlines. These two challenges are sometimes the factors that separate a successful group from an unsuccessful one. The learning curve in design and implementation is a significant component of both challenges: if the learning curve is too steep, deadlines may not be met and the overall quality of the software is lowered. This study presents a new game-programming library called PGL that addresses the timing and learning factors in game development.
Latency meter: a device to measure end-to-end latency of VE systems
Dorian Miller, Gary Bishop
The effectiveness of virtual environment systems depends critically on the end-to-end delay between the user's motion and the update of the display. When the user moves, the graphics system must update the images on the display to reflect the proper projection of the virtual world on their field of vision. Significant delay in this update is perceived as swimming of the virtual world: objects in the virtual world appear to follow the user's motions. We are developing a standalone instrument that quickly estimates end-to-end latency without requiring electrical connections or changes to the VE software. We believe that a method for easily monitoring latency will change the way programmers and users work. Our latency meter works by observing the user's motion and the display's response using high-speed optical sensors. When the user rocks back and forth, the display exhibits a similar but delayed rocking of objects in the user's field of vision. We process the signals from the optical sensors to extract the times of very slow image change, corresponding to the times when the user is nearly stopped (just before reversing direction). By correlating a sequence of these turn-around points in the two signals we can accurately estimate the end-to-end system delay.
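The correlation step can be illustrated with a discrete lag search over two sampled signals. This is a simplified sketch (the instrument correlates detected turn-around times rather than raw samples, and names here are hypothetical):

```python
def estimate_delay(motion, display, max_lag):
    """Return the lag (in samples) at which the display signal best
    matches the motion signal.

    For each candidate lag, motion[i] is paired with display[i + lag]
    and scored by the mean product; the lag with the highest score is
    the delay estimate. Both inputs are equal-length sample lists and
    max_lag must be smaller than their length.
    """
    best_lag, best_score = 0, float("-inf")
    n = len(motion)
    for lag in range(0, max_lag + 1):
        pairs = [(motion[i], display[i + lag]) for i in range(n - lag)]
        score = sum(m * d for m, d in pairs) / len(pairs)
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

Multiplying the winning lag by the sensor sampling period converts the estimate to milliseconds of end-to-end latency.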
Software and Systems
icon_mobile_dropdown
Arbitrary view image generation by model-based interpolation in the ray space
Makoto Sekitoh, Teruyuki Kutsuna, Toshiaki Fujii, et al.
In recent years, research on arbitrary view image generation using multiple cameras has attracted wide attention. Arbitrary view generation techniques are classified into Model-Based Rendering (MBR) and Image-Based Rendering (IBR). Here, we propose a new method based on IBR that uses MBR for interpolation: the idea is to use a model appropriately according to the camera density. In particular, we show a concrete realization of the algorithm for intermediate camera density, and a computer simulation of arbitrary view image generation using our algorithm is performed. It is verified that this method is simple and useful for generating arbitrary view images at high speed. Moreover, the algorithm was implemented on a PC cluster system, which executes it in real time.
Application of computer-generated models using low-bandwidth vehicle data
Neil J. Heyes
One of the main issues with remote teleoperation of vehicles is that during visual operation, one relies on fixed camera positions that ultimately constrain the operator's view of the real world. This paper describes a solution developed at QinetiQ in which the operator is given a unique virtual perspective of the vehicle and the surrounding terrain as the vehicle operates. This system helps to solve problems that are generic to remote systems, such as reducing high data transmission rates and providing 360-degree three-dimensional operator view positions regardless of terrain features and light levels, with near real-time operation. A summary of technologies is given that could be applied to different types of vehicles in many different situations in order to enhance operator spatial awareness.
Low-cost projection-based virtual reality display
This paper describes the construction of a single-screen, projection-based VR display using commodity, or otherwise low-cost, components. The display is based on Linux PCs and uses polarized stereo. Our aim is to create a system that is accessible to the many museums and schools that do not have large budgets for exploring new technology. In constructing this system we have been evaluating a number of options for the screens, projectors, and computer hardware.
Design of an ultralight head-mounted projective display (HMPD) and its applications in augmented collaborative environments
Hong Hua, Chunyu Gao, Leonard Brown, et al.
Head-mounted displays (HMDs) are widely used for 3D visualization tasks such as surgical planning, scientific visualization, or engineering design. Even though HMD technologies have undergone great development, tradeoffs between capabilities and limitations remain. The head-mounted projective display (HMPD) is an emerging technology on the boundary between conventional HMDs and projective displays such as the CAVE. It has recently been demonstrated to yield 3D visualization capability with a potentially large FOV, lightweight optics, low distortion, as well as correct occlusion of virtual objects by real objects. As such, the HMPD has been proposed as an alternative to stereoscopic displays for 3D visualization applications. In this paper, a brief review of the HMPD technology is followed by the presentation of a recent design and implementation of a compact HMPD prototype based on an ultra-light design of projective optics using a diffractive optical element (DOE) and plastic components. Finally, we include applications of the HMPD technology being developed across three universities for augmented visualization tasks and distributed collaboration in augmented environments.
Cyber entertainment system using an immersive networked virtual environment
Masayuki Ihara, Shinkuro Honda, Minoru Kobayashi, et al.
The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence, due to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces the system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, allowing two players to compete. An avatar of each player is displayed on the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users need not wear a data suit with many sensors and are able to play golf without any encumbrance.
Stereoscopic displays for virtual reality in the car manufacturing industry: application to design review and ergonomic studies
Guillaume Moreau, Philippe Fuchs
In the car manufacturing industry the trend is to drastically reduce the time-to-market by increasing the use of the digital mock-up instead of physical prototypes. Design review and ergonomic studies are specific tasks because they involve qualitative or even subjective judgements. In this paper, we present IMAVE (IMmersion Adapted to a VEhicle), designed for immersive styling review, gap visualization, and simple ergonomic studies. We show that stereoscopic displays are necessary and must fulfill several constraints due to the proximity and size of the car dashboard. The duration of the work sessions forces us to eliminate all vertical parallax, and 1:1 scale is obviously required for a valid immersion. Two demonstrators were realized, allowing us to draw on a large set of testers (over 100). More than 80% of the testers saw an immediate use for the IMAVE system. We discuss the good and bad marks awarded to the system. Future work includes the use of several rear-projected stereo screens for door and central-console visualization, but without the parallax presently visible in some CAVE-like environments.
Stereoscopic Display Applications
icon_mobile_dropdown
Real-time image-based rendering for stereo views of vegetation
Rendering of detailed vegetation for real-time applications has always been difficult because of the high polygon count of 3D models. Generating correctly warped images for nonplanar projection surfaces often requires even higher degrees of tessellation. Generating left- and right-eye views for stereo would further reduce the frame rate, since information for one eye's view cannot be reused to redraw the vegetation for the other eye. We describe an image-based rendering approach that is a modification of an algorithm for monoscopic rendering of vegetation proposed by Aleks Jakulin. The Jakulin algorithm pre-renders vegetation models from 6 viewpoints; rendering from an arbitrary viewpoint is achieved by compositing the nearest two slicings, with slices alpha-blended as the user changes viewing position. The blending produces visual artifacts that are not distracting in a monoscopic environment but are very distracting in stereo. We have modified the algorithm so that it displays all pre-rendered images simultaneously and the slicings are partitioned and rendered in back-to-front order. This approach improves the quality of the stereo, maintains the basic appearance of the vegetation, and reduces visual artifacts, but it increases rendering time slightly and produces a rendering that is not totally faithful to the original vegetation model.
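The view-dependent compositing in such pre-rendered approaches amounts to weighting the two nearest pre-rendered views by angular proximity. A hedged sketch of those blend weights (hypothetical code; linear interpolation in angle is an assumption, not necessarily Jakulin's exact scheme):

```python
def blend_weights(view_angle, angles):
    """Given the current viewing angle (degrees) and the sorted list
    of angles at which the vegetation was pre-rendered, return the
    two bracketing pre-rendered views and their alpha-blend weights
    (linear interpolation in angle).
    """
    for i in range(len(angles) - 1):
        a, b = angles[i], angles[i + 1]
        if a <= view_angle <= b:
            t = (view_angle - a) / (b - a)   # 0 at view i, 1 at view i+1
            return (i, 1.0 - t), (i + 1, t)
    raise ValueError("view_angle outside pre-rendered range")
```

It is exactly this per-frame change of weights that produces popping and shimmering artifacts, which are far more noticeable when the two stereo eye views blend differently.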
Autostereoscopic Displays III
icon_mobile_dropdown
Improved quality three-dimensional integral imaging and projection using nonstationary optical components
Ju-Seog Jang, Bahram Javidi
We propose the use of synchronously moving micro-optics (lenslet arrays) for image pickup and display in 3D integral imaging, to overcome the upper resolution limit imposed by the Nyquist sampling theorem. With the proposed technique, we show experimentally that the viewing resolution can be improved without reducing the 3D viewing aspect of the reconstructed image. In addition, both the field of view and the viewing angle of the reconstructed image can be improved without decreasing the viewing resolution if the non-stationary lenslet array technique is used. For this purpose, the use of low fill-factor lenslet arrays is discussed.
Metrics
icon_mobile_dropdown
Micro-archiving and interactive virtual insect exhibit
Scott S. Fisher, Tatsuya Saito, Ian E. McDowall, et al.
This system has been in development at Keio University in Japan and pulls together several techniques including Micro Archiving and interactive stereoscopic displays. The exhibit, shown at SIGGRAPH, engages visitors, who are invited to visualize and interact with microscopic structures that cannot be seen with the naked eye but that commonly exist in our everyday surroundings. The exhibit presents a virtual world in which dead specimens of bugs come back to life as virtual bugs and freely walk around; visitors interact with these virtual bugs and examine the virtual models in detail.
Performance
icon_mobile_dropdown
Usability engineering: domain analysis activities for augmented-reality systems
Joseph Gabbard, J. Edward Swan II, Deborah Hix, et al.
This paper discusses our usability engineering process for the Battlefield Augmented Reality System (BARS). Usability engineering is a structured, iterative, stepwise development process. Like the related disciplines of software and systems engineering, usability engineering is a combination of management principals and techniques, formal and semi- formal evaluation techniques, and computerized tools. BARS is an outdoor augmented reality system that displays heads- up battlefield intelligence information to a dismounted warrior. The paper discusses our general usability engineering process. We originally developed the process in the context of virtual reality applications, but in this work we are adapting the procedures to an augmented reality system. The focus of this paper is our work on domain analysis, the first activity of the usability engineering process. We describe our plans for and our progress to date on our domain analysis for BARS. We give results in terms of a specific urban battlefield use case we have designed.