Proceedings Volume 8384

Three-Dimensional Imaging, Visualization, and Display 2012


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 7 May 2012
Contents: 10 Sessions, 41 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2012
Volume Number: 8384

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8384
  • Electro- and Digital Holography I
  • Electro- and Digital Holography II
  • Applications of 3D Images I
  • 3D Displays and Related I
  • Integral Imaging
  • 3D Imaging
  • Applications of 3D Images II
  • 3D Displays and Related II
  • Poster Session
Front Matter: Volume 8384
Front Matter: Volume 8384
This PDF file contains the front matter associated with SPIE Proceedings Volume 8384, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Electro- and Digital Holography I
Holographic 3D display using MEMS spatial light modulator
This paper presents a new holographic three-dimensional display technique that increases both the viewing zone angle and the screen size. In this study, a spatial light modulator (SLM) employing microelectromechanical systems (MEMS) technology is used for high-speed image generation. The images generated by the MEMS SLM are demagnified horizontally and magnified vertically using an anamorphic imaging system. The vertically enlarged images, which are elementary holograms, are aligned horizontally by a galvano scanner. Reconstructed images with a screen size of 4.3 in. and a horizontal viewing zone angle of 15° are generated at a frame rate of 60 fps. The reconstructed images are improved by two methods: one reduces blur caused by scan and focus errors, and the other improves grayscale representation. In addition, the accommodation responses of the eyes to the reconstructed images are explained.
Face and eye tracking for sub-hologram-based digital holographic display system
The sub-hologram-based holographic display method is one of the most practical approaches to realizing a large holographic display. However, this method needs a highly accurate, real-time face and eye tracking function to enable precise steering of the backlight and generation of the corresponding sub-hologram for each video frame. We theoretically estimated several parameters required for the eye tracking function, such as accuracy, speed, and distance from an observer, and developed an eye tracking system whose objective is accurate and fast 3D positioning of the left and right pupils of an observer. Experimental results show that the system obtains accurate 3D pupil positions with an error of less than 3 mm at 30 frames per second under challenging conditions, such as distances greater than 2 m and an observer wearing glasses. Therefore, our implementation can be applied to the sub-hologram-based display system.
Spatial light modulator-based phase-shifting Gabor holography
We present a modified Gabor-like setup able to recover the complex amplitude distribution of the object wavefront from a set of in-line recorded holograms. The proposed configuration is characterized by the insertion of a condenser lens and a spatial light modulator (SLM) into the classical Gabor configuration. The phase shift is introduced by the SLM, which modulates the central spot (DC term) in an intermediate plane, without an additional reference beam. Experimental results validate the proposed method and produce results superior to the classical Gabor method.
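The abstract does not spell out the reconstruction step; as a hedged illustration, a standard four-step phase-shifting combination of in-line holograms (not necessarily the authors' exact formula) might look like this in Python:

```python
import numpy as np

def four_step_phase_shifting(I0, I1, I2, I3):
    """Combine four holograms recorded with phase shifts 0, pi/2, pi, 3*pi/2
    into a complex wavefront (standard four-step formula, up to a constant
    factor; the sign of the imaginary part follows the phase-step sign
    convention). In the paper's setup the shift would be applied by the
    SLM to the DC term instead of a separate reference beam."""
    return (I0 - I2) + 1j * (I1 - I3)
```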
3D visual systems using integral photography camera, camera array, and electronic holography display
This paper introduces two 3D visual systems for ultra-realistic communication. The first system includes an integral photography video camera that uses a lens array and a 4K2K-resolution video camera to capture ray information at slightly separated locations. The second system includes a camera array that uses 300 cameras to capture ray information at sparser locations than integral photography. Both systems use electronic holography as an ideal 3D display. These systems are characterized in that ray-based image sensors are used to capture 3D objects under natural light and electronic holography is used to reconstruct the 3D objects.
Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two?
H. Navarro, A. Dorado, G. Saavedra, et al.
An analysis and comparison of the lateral and depth resolution in the reconstruction of 3D scenes from images obtained either with a classical two-view stereoscopic camera or with an Integral Imaging (InI) pickup setup is presented. Since the two systems belong to the general class of multiview imaging systems, the best analytical tools for the calculation of lateral and depth resolution are the ray-space formalism and the classical tools of Fourier information processing. We demonstrate that InI is the optimum system for sampling the spatio-angular information contained in a 3D scene.
Electro- and Digital Holography II
An autofocusing algorithm for digital holograms
P. Ferraro, P. Memmolo, C. Distante, et al.
We propose an algorithm for the automatic estimation of the in-focus image and the recovery of the correct reconstruction distance for digital holograms. We tested the proposed approach by applying it to stretched digital holograms. In fact, by stretching a hologram with a variable elongation parameter, it is possible to change the in-focus distance of the reconstructed image. In this way, the reliability of the proposed algorithm can be verified at different distances, dispensing with the recording of different holograms. Experimental results are shown to demonstrate the usefulness of the proposed method, and a comparative analysis has been performed with respect to other algorithms developed for digital holography.
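The abstract does not name its focus metric; a minimal sketch of the generic approach, numerical propagation over candidate distances followed by a sharpness score (gradient energy is assumed here, one of several metrics used in digital holography):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def autofocus(hologram, wavelength, pitch, z_candidates):
    """Return the distance whose reconstruction maximizes gradient energy,
    one of many sharpness metrics used for hologram autofocusing."""
    def sharpness(img):
        gy, gx = np.gradient(img)
        return np.sum(gx**2 + gy**2)
    scores = [sharpness(np.abs(angular_spectrum(hologram, wavelength, pitch, z)))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```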
3D microscopic imaging at 193nm with single beam Fresnel intensity sampling and iterative phase retrieval
Arun Anand, Ahmad Faridian, Vani K. Chhaniwal, et al.
3D imaging requires the retrieval of both the amplitude and the phase of the wavefront interacting with the object. Quantitative phase-contrast imaging techniques such as digital holography use the interference of the object wavefront and a known reference wavefront for whole-field reconstructions. For higher lateral resolution, shorter wavelengths become necessary, but the short coherence lengths of short-wavelength sources make it very difficult to implement a two-beam interferometric setup. We have developed a technique for reconstructing the amplitude and phase of the object wavefront from the volume diffraction field by sampling it at several axial positions and implementing the scalar diffraction integral iteratively. This technique is extended to 3D microscopic imaging at 193 nm.
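A hedged sketch of this class of method: a Gerchberg-Saxton-style iteration that cycles through the axially sampled intensities, enforcing each measured amplitude while propagating with the angular-spectrum form of the scalar diffraction integral (the authors' actual scheme may differ in its constraints and ordering):

```python
import numpy as np

def propagate(field, wavelength, pitch, z):
    """Angular-spectrum propagation of a complex field by distance z."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def multiplane_phase_retrieval(intensities, z_positions, wavelength, pitch, n_iter=50):
    """Cycle through axially sampled intensities: at each plane the measured
    amplitude replaces the current estimate while the evolving phase is
    kept; the field is then returned to the reference plane."""
    z_ref = z_positions[0]
    field = np.sqrt(intensities[0]).astype(complex)  # initial guess: zero phase
    for _ in range(n_iter):
        for I, z in zip(intensities, z_positions):
            field = propagate(field, wavelength, pitch, z - z_ref)   # go to plane z
            field = np.sqrt(I) * np.exp(1j * np.angle(field))        # amplitude constraint
            field = propagate(field, wavelength, pitch, z_ref - z)   # return to reference
    return field
```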
Applications of 3D Images I
Combining perspective and color visual cryptography for securing color image display
Jacques Machizaud, Thierry Fournel
In this work, we propose to extend the secure information display introduced by Yamamoto et al. [1] to full-color images. Yamamoto's technique makes use of a black-and-transparent mask as the decoding shadow image of a visual cryptography scheme sharing 3-bit multi-color messages. By combining a perspective setup with a color visual cryptography (VC) scheme that does not use any mask, we can securely display color images. A satisfying color VC scheme is used that can be printed on a transparency film. When printed, the colors act as filters [2] and allow a wider color gamut for the message, which is not limited to saturated colors as in [1] by the black-and-transparent decoding mask. In our implementation of the two-out-of-two visual cryptography scheme, which shares a secret message into two color shadow images, the first is projected onto a glass diffuser and the second is printed on a transparency. A registration method is used to overcome the difficulty of shadow image alignment. As the two shadow images are superposed with an air layer between them, the message disappears when the angular position is not close to the ideal one. Examples with binary colored messages and with color images are provided to illustrate the extension. By moving the detector (or the eyes) angularly around the right position, perspective effects can be perceived.
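For readers unfamiliar with shadow images, here is a hedged sketch of the basic black-and-white two-out-of-two scheme that the paper's color construction generalizes (the color scheme itself replaces these black/transparent subpixel patterns with printed color filters):

```python
import numpy as np

def make_shares(secret, rng=None):
    """Split a binary image (1 = black) into two shadow images with the basic
    (2,2) scheme: each secret pixel expands to a 2x2 block; stacking the
    shares (pixel-wise OR) leaves white pixels half-black and black pixels
    fully black."""
    rng = rng or np.random.default_rng()
    patterns = np.array([[[1, 0], [0, 1]],
                         [[0, 1], [1, 0]]])      # two complementary 2x2 patterns
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            k = int(rng.integers(2))
            s1[2*i:2*i+2, 2*j:2*j+2] = patterns[k]
            # same pattern -> half-black when stacked (white pixel);
            # complementary pattern -> fully black (black pixel)
            s2[2*i:2*i+2, 2*j:2*j+2] = patterns[k ^ int(secret[i, j])]
    return s1, s2  # decode by stacking: s1 | s2
```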
Unknown sensor position estimation in axially distributed sensing and 3D imaging
An axially distributed sensing system is a 3D sensing and imaging architecture in which the sensors are distributed along the optical axis. In this system, prior knowledge of the exact sensor positions is normally required for 3D volume image reconstruction. In this paper, we overview an unknown-sensor-position estimation method and present axially distributed sensing with unknown sensor positions. Experiments illustrate the feasibility of the proposed system and show that this new system may improve the visual quality of 3D reconstructed images.
Atmospherical wavefront phases using the plenoptic sensor (real data)
L. F. Rodríguez-Ramos, I. Montilla, J. P. Lüke, et al.
Plenoptic cameras have been developed in recent years as a passive method for 3D scanning, allowing focal stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with atmospheric turbulence in an astronomical observation. The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with the turbulence. Sodium artificial laser guide stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance from the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, taking advantage of the two principal characteristics of plenoptic sensors at the same time: 3D scanning and wavefront sensing. The plenoptic sensor can therefore be studied and used as an alternative wavefront sensor for adaptive optics, particularly relevant now that Extremely Large Telescope projects are being undertaken. In this paper, we present the first observational wavefront phases extracted from real astronomical observations, using point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence.
Multiple objects tracking in unknown background using Bayesian estimation in 3D space
In this paper, we overview tracking methods for 3D occluded objects in 3D integral imaging. Two methods, based on the sum of absolute differences (SAD) algorithm and on a Bayesian framework, respectively, are presented. For the SAD-based method, we calculate the SAD between pixels of consecutive frames of a moving object for 3D tracking. For the Bayesian method, posterior probabilities of the reconstructed scene background and the 3D objects are calculated by modeling their pixel intensities as Gaussian and Gamma distributions, respectively, and by assuming appropriate prior distributions for the estimated parameters. Multi-object tracking is achieved by maximizing the geodesic distance between the log-likelihoods of the background and the objects. Experimental results demonstrate 3D tracking of occluded objects.
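As a hedged illustration of the SAD step only (the Bayesian machinery is beyond a snippet), a minimal exhaustive block-matching tracker between two reconstructed frames might look like this:

```python
import numpy as np

def sad_track(prev_frame, next_frame, box, search=8):
    """Track the object inside box = (row, col, h, w) from one frame to the
    next by minimizing the sum of absolute differences (SAD) over a
    +/- search-pixel window."""
    r, c, h, w = box
    template = prev_frame[r:r+h, c:c+w].astype(float)
    best, best_pos = np.inf, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr < 0 or cc < 0 or rr + h > next_frame.shape[0] or cc + w > next_frame.shape[1]:
                continue                     # candidate window falls off the frame
            sad = np.abs(next_frame[rr:rr+h, cc:cc+w] - template).sum()
            if sad < best:
                best, best_pos = sad, (rr, cc)
    return best_pos + (h, w)                 # updated box in the next frame
```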
High-accuracy real-time pedestrian detection system using 2D and 3D features
David R. Chambers, Clay Flannigan, Benjamin Wheeler
We present a real-time stereo-vision pedestrian detector implementation with very high accuracy, the 2D component of which attains 99% recall with fewer than 10^-6 false positives per window on the INRIA persons dataset. We utilize a sequence of classifiers that use different features, beginning with Haar-like features and a Haar-like feature implementation adapted to disparity images, and performing a final verification with Histogram-of-Oriented-Gradients (HOG) features. We present a 2D Haar-like feature implementation that utilizes 2x2 kernel filters at multiple scales rather than integral images, and combines a quickly trained preliminary AdaBoost classifier with a more accurate SVM classifier. We also show how these Haar-like features may be computed from a partially incomplete stereo disparity image in order to make use of 3-dimensional data. Finally, we discuss how these features, along with the HOG features, are computed rapidly and how the classifiers are combined in such a way as to enable real-time implementation with higher detection rates and lower false positive rates than typical systems. Our overall detector is a practical combination of speed and detection performance, operating on 544x409 images (10,425 windows) at a frame rate of 10-20 fps, depending on scene complexity. The detector's overall false positive rate is less than 10^-6, corresponding to about one false positive every 10-60 s when testing on our non-training data. Additionally, the detector has shown usefulness for detecting other object types, and has been implemented for traffic cones, telephone poles, and vehicles.
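The 2x2-kernel alternative to integral images can be pictured as repeated 2x2 box filtering; a hedged sketch (the paper's actual feature set and scales are not given in the abstract):

```python
import numpy as np

def box_pyramid(img, levels=4):
    """Build an image pyramid by repeated 2x2 averaging -- the 2x2-kernel
    alternative to integral images mentioned in the abstract."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]   # crop to even size
        pyr.append(0.25 * (a[0::2, 0::2] + a[0::2, 1::2] +
                           a[1::2, 0::2] + a[1::2, 1::2]))
    return pyr

def haar_horizontal(region):
    """One possible Haar-like response on a pyramid region: left-half mean
    minus right-half mean (a horizontal edge detector)."""
    w = region.shape[1] // 2
    return region[:, :w].mean() - region[:, w:].mean()
```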
3D Displays and Related I
Ray-space acquisition system for 3DTV: 100-camera and ray-based acquisition systems
3D TV requires multiple view images, and it is very important to adjust the parameters used for capturing and displaying multiview images, which include the size of the view images, the focal length, and the camera/viewpoint interval. However, the parameters usually vary from system to system, and that causes an interconnectivity problem between capturing and display devices. The Ray-Space method provides one solution to such problems in 3D TV data capturing, transmission, storage, and display. In this paper, we first review the Ray-Space method and describe its relationship with 3D TV. Then, we introduce three types of Ray-Space acquisition systems: a 100-camera system, a space/time-division system, and a portable multi-camera system. We also describe test data sets provided for the MPEG (Moving Picture Experts Group) Multiview Video Coding and 3D Video activities.
Interactive 3D display simulator for autostereoscopic smart pad
There is growing interest in displaying 3D images on a smart pad for entertainment and information services. Designing and realizing various types of 3D displays on the smart pad is not easy within cost and time constraints. Software simulation can be an alternative that saves cost and shortens development. In this paper, we propose a 3D display simulator for an autostereoscopic smart pad. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, the leakage of one eye's image into the image of the other eye, and the light intensity used to compute the visual comfort zone are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive design. This paper describes an interactive 3D display simulator for an autostereoscopic smart pad.
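A hedged sketch of the kind of per-view crosstalk computation such a simulator performs, using a simple linear leakage model (the simulator's actual optical model is not specified in the abstract):

```python
import numpy as np

def perceived_views(view_images, leakage):
    """Linear crosstalk model: leakage[i, j] is the fraction of view j's
    light reaching viewing position i (rows should sum to ~1).
    view_images has shape (n_views, H, W)."""
    return np.einsum('ij,jhw->ihw', np.asarray(leakage, float),
                     np.asarray(view_images, float))

# Example: 3% leakage between neighboring views of a hypothetical 4-view panel.
n = 4
L = 0.94 * np.eye(n) + 0.03 * (np.eye(n, k=1) + np.eye(n, k=-1))
```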
Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices
We propose a virtual 3D-touch system operated by a bare finger, which can detect the three-axis (x, y, z) position of the finger. This system has a multi-wavelength optical sensor array embedded in the backplane of the TFT panel and sequential devices on the border of the TFT panel. We developed a reflecting mode that works with a bare finger for the 3D interaction. A 4-inch mobile 3D LCD with the proposed system has already been successfully demonstrated.
Integral Imaging
An overview of 3D visualization with integral imaging in photon starved conditions
Recently it was demonstrated that three-dimensional (3D) object recognition and visualization are possible with integral imaging under photon counting or very low illumination conditions. We present an overview of the reconstruction techniques, imaging performance, and compressive sensing ability of integral imaging under photon starved conditions.
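A hedged sketch of the standard photon counting model used in this line of work: elemental images are converted to Poisson photon counts, and the scene is estimated by maximum likelihood, which for registered Poisson observations reduces to the mean:

```python
import numpy as np

def photon_limited(image, mean_photons, rng=None):
    """Simulate a photon-starved elemental image: each pixel becomes a
    Poisson count whose rate is the normalized irradiance times the
    expected total photon number."""
    rng = rng or np.random.default_rng()
    rate = image / image.sum() * mean_photons
    return rng.poisson(rate)

def mle_reconstruction(registered_photon_images):
    """For Poisson observations of a common irradiance, the per-pixel
    maximum-likelihood estimate is the mean of the elemental images
    (assumed here to be already backprojected/registered for one depth)."""
    return np.mean(registered_photon_images, axis=0)
```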
Improved resolution in far-field integral imaging
H. Navarro, A. Dorado, G. Saavedra, et al.
In multiview three-dimensional imaging, capturing the elemental images of distant objects requires a field-like lens that projects the reference plane onto the microlens array. In this case, the spatial resolution of reconstructed images is determined by the spatial density of microlenses in the array. In this paper we report a simple method, based on taking double snapshots, to double the 2D pixel density of reconstructed scenes. Experiments are reported to support the proposed approach.
Experiments on axially distributed three-dimensional imaging techniques
Eric P. Flynn, Bahram Javidi
The utilization of three-dimensional imaging in science, defense, and industry is becoming increasingly relevant. Low-cost approaches (through appropriate choice of imagers), integrated with fast computer processing, allow three-dimensional reconstruction of target objects to become readily accessible. Among 3D image reconstruction techniques, Axially Distributed Sensing (ADS) utilizes elemental images captured only along the sensor's optical axis, greatly simplifying the procedure for proper three-dimensional reconstruction. Axially Distributed Sensing is procedurally different from conventional Integral Imaging techniques (in which sensors are positioned transversely across the field of view), yet fundamentally related through the use of multiple elemental images and geometric optics. More interestingly, three-dimensional image reconstruction allows the imaging of target objects behind occlusions, or in front of backgrounds that are difficult to separate optically. It will be shown that two primary techniques can be utilized in axially distributed sensing to capture the elemental image information needed for reconstruction. This will be demonstrated in a controlled laboratory environment.
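A hedged sketch of the usual ADS reconstruction idea under a pinhole model: each elemental image is rescaled according to its sensor's distance to the chosen depth plane and the results are averaged (details such as the exact magnification convention are assumptions here):

```python
import numpy as np
from scipy.ndimage import zoom

def ads_reconstruct(elemental_images, z_distances, z_ref):
    """Average the elemental images after rescaling each one by its relative
    magnification z_i / z_ref for the chosen depth plane (pinhole model).
    Take z_ref as the distance of the sensor closest to the object so that
    all scale factors are >= 1."""
    h, w = elemental_images[0].shape
    acc = np.zeros((h, w))
    count = 0
    for img, z in zip(elemental_images, z_distances):
        s = z / z_ref                             # relative magnification for this sensor
        scaled = zoom(img.astype(float), s, order=1)
        sh, sw = scaled.shape
        r0, c0 = (sh - h) // 2, (sw - w) // 2     # center-crop to the reference size
        if r0 >= 0 and c0 >= 0:
            acc += scaled[r0:r0 + h, c0:c0 + w]
            count += 1
    return acc / max(count, 1)
```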
Functional three-dimensional imaging based on integral imaging technique
Functional three-dimensional imaging based on the integral imaging technique is introduced. Among several functionalities, the main focus is on depth-selective capturing and incoherent hologram synthesis. The light ray field information captured by integral imaging is analyzed and filtered in the spatial/angular frequency domain for depth-selective capturing. The light ray field information can also be used to first generate a set of orthographic-view or refocused images, which are then utilized to synthesize the hologram of the captured three-dimensional object.
Integral imaging system with enlarged horizontal viewing angle
Masato Miura, Jun Arai, Tomoyuki Mishina, et al.
We developed a three-dimensional (3-D) imaging system with an enlarged horizontal viewing angle for integral imaging that uses our previously proposed method for controlling the ratio of the horizontal to vertical viewing angles by tilting the lens array used in a conventional integral imaging system. This ratio depends on the tilt angle of the lens array. We conducted an experiment to capture and display 3-D images and confirmed the validity of the proposed system.
Automatic target recognition of 3D objects under photon starved condition using advanced correlation filters
In this paper, an overview of automatic target recognition for a three-dimensional (3D) passive photon counting integral imaging system using maximum average correlation height (MACH) filters is presented. A Poisson distribution is adopted to generate photon counting images. For estimation of the 3D scene from photon counting images, maximum likelihood estimation is used. The advanced correlation filter is synthesized with ideal training images. Using this filter, we show that automatic target recognition may be implemented under photon starved conditions. Since integral imaging may reduce the effect of occlusion and obscuration, the advanced correlation filter may detect and recognize a 3D object in a photon starved environment. To demonstrate the ability of 3D photon counting automatic target recognition, experimental results are presented.
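Full MACH filter synthesis involves whitening by average similarity and noise spectra; as a hedged, simplified stand-in, the sketch below averages training spectra and performs FFT-domain correlation with a peak-based detection score:

```python
import numpy as np

def average_spectrum_filter(train_images):
    """Simplified stand-in for MACH synthesis: average the training-image
    spectra (the true MACH filter additionally whitens by average
    similarity and noise spectra, which is omitted here)."""
    return np.mean([np.fft.fft2(t) for t in train_images], axis=0)

def correlate_and_detect(scene, H):
    """FFT-domain correlation of a reconstructed scene with the filter;
    returns the correlation peak location and a peak-to-mean score."""
    C = np.abs(np.fft.ifft2(np.fft.fft2(scene) * np.conj(H)))
    peak = np.unravel_index(int(np.argmax(C)), C.shape)
    return peak, float(C.max() / C.mean())
```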
3D Imaging
3D shape measurement using deterministic phase retrieval and a partially developed speckle field
For deterministic phase retrieval, the problem of insignificant axial intensity variations upon defocus of a smooth object wavefront is addressed. Our proposed solution is based on the use of a phase diffuser facilitating the formation of a partially-developed speckle field (i.e., a field with both scattered-wave and unperturbed-wave components). The smooth test wavefront impinges first on the phase diffuser producing the speckle field. Then two speckle patterns with different defocus are recorded at the output plane of a 4f-optical filtering setup with a spatial light modulator (SLM) in the common Fourier domain. The local variations of the recorded speckle patterns and the defocus distance approximate the axial intensity derivative which, in turn, is required to recover the wavefront phase via the transport of intensity equation (TIE). The SLM setup reduces the speckle recording time and the TIE allows direct (i.e., non-iterative) calculation of the phase. The pre-requisite partially-developed speckle field in our technique facilitates high image contrast and significant axial intensity variation. Wavefront reconstruction for the 3D refractive test object used demonstrates the effectiveness of the technique.
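The TIE step admits a direct Fourier-space solution; a hedged sketch under the common uniform-intensity approximation (sign conventions and regularization choices vary, and the authors' implementation is not given in the abstract):

```python
import numpy as np

def tie_phase(I_minus, I_plus, dz, wavelength, pitch, eps=1e-9):
    """Direct (non-iterative) phase recovery from two defocused intensity
    images via the TIE, assuming nearly uniform in-focus intensity I0 so
    that -k dI/dz = I0 * laplacian(phi); solved with the Fourier-space
    inverse Laplacian."""
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2.0 * dz)   # central-difference axial derivative
    I0 = 0.5 * (I_plus + I_minus).mean()
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    f2 = FX**2 + FY**2
    phi_hat = (k / I0) * np.fft.fft2(dIdz) / (4 * np.pi**2 * f2 + eps)
    phi_hat[0, 0] = 0.0                      # DC phase is undefined; drop it
    return np.real(np.fft.ifft2(phi_hat))
```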
Three-dimensional imaging and visualization of camouflaged objects by use of axially distributed sensing method
In this paper, we present an overview of three-dimensional imaging and visualization of camouflaged objects using axially distributed sensing. The axially distributed sensing method collects three-dimensional information about a camouflaged object. Using the collected elemental images, three-dimensional slice images are visualized with a digital reconstruction algorithm based on an inverse ray projection model. In addition, we introduce an analysis of the depth resolution of our axially distributed sensing structure. Optical experiments are performed to capture longitudinal elemental images of a camouflaged object and to visualize the three-dimensional slice images through digital reconstruction.
New synthesizing 3D stereo image based on multisegmented method
Woonchul Ham, Luubaatar Badarch, Enkhbaatar Tumenjargal, et al.
In this paper, we introduce the hardware/software technology used to implement a 3D stereo image capturing system built with two OV3640 CMOS camera modules and camera interface hardware implemented on an S3C6410 MCP. We also propose a multi-segmented capture method for a better sense of 3D depth. An image is composed of nine segmented sub-images, each captured using two degrees of freedom in DC servos for the left and right CMOS camera modules, to mitigate the focusing problem in each segmented sub-image. First, we analyze the whole image. We believe that this new method will improve the comfort of the perceived 3D depth, even though its synthesis method is somewhat complicated.
SSVEP-based BCI for manipulating three-dimensional contents and devices
Sungchul Mun, Sungjin Cho, Mincheol Whang, et al.
Brain-computer interface (BCI) studies have been done to help people manipulate electronic devices in a 2D space, but less has been done for a dynamic 3D environment. The purpose of this study was to investigate the possibility of applying steady-state visual evoked potentials (SSVEPs) to a 3D LCD display. Eight subjects (4 female), ranging in age from 20 to 26 years, participated in the experiment. They performed simple navigation tasks in a simple 2D space and in a virtual environment with/without 3D flickers generated by a film-type patterned retarder (FPR). The experiments were conducted in counterbalanced order. The results showed that 3D stimuli enhanced BCI performance, but no significant effects were found due to the small number of subjects. Visual fatigue that might be evoked by 3D stimuli was negligible in this study. The proposed SSVEP BCI combined with 3D flickers could allow people to control home appliances and other equipment such as wheelchairs, prosthetics, and orthotics without encountering the dangerous situations that may arise when using BCIs in the real world. A 3D stimuli-based SSVEP BCI would motivate people to use 3D displays and vitalize the 3D-related industry through its entertainment value and high performance.
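SSVEP detection itself is straightforward to sketch; a hedged single-channel example that scores spectral power at each flicker frequency (the study's actual classifier is not described in the abstract, and practical systems often use CCA over several channels):

```python
import numpy as np

def ssvep_classify(eeg, fs, stim_freqs, harmonics=2):
    """Pick the attended flicker frequency from one EEG channel by comparing
    spectral power at each stimulation frequency and its harmonics."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    def power_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    scores = [sum(power_at(h * f) for h in range(1, harmonics + 1))
              for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```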
Applications of 3D Images II
Visual tools for human guidance in manual operations
Many operations that are done manually, such as assembly operations, can be difficult to instruct to someone working in an unstructured environment who is not already familiar with the operation. The typical approach is to take pictures of the system and attempt to provide instructions using the pictures with some annotations. We have explored a variety of visual aids that might be used to provide more real-time feedback to guide such manual operations. These methods include indirect feedback tools, such as signals or graphs to be interpreted, as well as direct methods that provide a simulated or real view of the operation as the user works. This paper explores some of the pros and cons of these methods and presents some very preliminary results that suggest future directions for this work.
Cylindrical liquid crystal lenses system for autostereoscopic 2D/3D display
Chih-Wei Chen, Yi-Pai Huang, Yu-Cheng Chang, et al.
A liquid crystal lens system that can be easily controlled electrically for autostereoscopic 2D/3D switchable displays is proposed. The high-resistance liquid crystal (HR-LC) lens, which uses fewer control electrodes with a high-resistance layer coated between them, is proposed and used in this paper. Compared with a traditional LC lens, the HR-LC lens provides a smooth electric-potential distribution within the LC layer when driven. Hence, the proposed HR-LC lens has lower circuit complexity and a low driving voltage, and good optical performance can also be obtained. In addition, combining it with the proposed driving method, called the dual-directional overdriving method, reduces the switching time by applying a large voltage to the cell. Consequently, the total switching time can be reduced to around 2 seconds. It is believed that this LC lens system has high potential for the future.
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Segmenting WCE video into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with the inter-frame difference, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrate the effectiveness of the approach.
Measurement method with moving image sensor in autostereoscopic display
Generally, autostereoscopy imposes restrictive conditions for fusible stereo viewing, since the viewer must maintain a fixed viewing distance from the autostereoscopic display. Previous methods for measuring the characteristics of autostereoscopic displays are problematic in this respect. We propose a moving-image-sensor method for measuring autostereoscopic displays. Using this method, the intensity distribution can be measured at the correct optimum viewing distance (OVD). In addition, the crosstalk around the OVD can be found.
3D Displays and Related II
Viewing zones of IP and MV
The central and side viewing zones of the pixel-cell and elemental-image based contact-type multiview 3D imaging methods can be combined with the in-between viewing zones to form a bigger viewing zone, i.e., a combined viewing zone. The combined viewing zone of the elemental-image based method has the same features as the viewing region in front of the viewing-zone cross-section in that of the pixel-cell based method. The combined viewing zone of the pixel-cell based method has almost two times the number of viewing regions in which differently composed images, each formed with one pixel from each of the pixel cells/elemental images in the display panel, are viewed. The front and rear viewing regions in the pixel-cell based method's combined viewing zone have a symmetrical relationship. The measured light intensity distributions support these facts.
Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones
Sung-Kyu Kim, Seon-Kyu Yoon, Ki-Hyuk Yoon
An autostereoscopic 3D display provides binocular perception without eyeglasses, but it suffers from a weakened 3D effect and dizziness due to crosstalk. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.
fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media
Shunsuke Yoshida
A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped like a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents a particular ray that passes through a corresponding point on a virtual object's surface and orients toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.
3D display simulator based on mixed reality
We propose a 3D display simulator based on mixed reality technology. The proposed simulator calculates light distribution and crosstalk using the parameters required for the design of multiview displays, and projects the light distribution onto the ground from the top of the viewing zone. Mixed reality merges real and virtual worlds to produce new environments and visualizations. In this paper, mixed reality is exploited to simulate a multiview display by merging the physically simulated virtual space and the real space the viewers occupy. The projected light distribution is scaled to the corresponding display size. Thus, the viewer experiences the real light distribution in a real space such as a living room. Crosstalk information is also provided through the interaction between a viewer's position in real space and the calculated virtual space. Our proposed simulator makes it possible to design a display, measure its light distribution, and find the optimized viewing zone without implementing the display in real space.
ATSC 8-VSB and M/H hybrid 3DTV system development for terrestrial broadcasting services
Sung-Hoon Kim, Jooyoung Lee, Jin Soo Choi, et al.
This paper presents an 8-VSB & M/H hybrid 3DTV system for ATSC terrestrial 3DTV broadcasting services. The system transmits MPEG-2 encoded left images through the HD main channel (8-VSB) and H.264 encoded right images through the mobile channel (M/H) simultaneously. Basically, hybrid 3DTV supports stereoscopic 3D HD services composed of mixed-quality left/right images for 3D image rendering. For a more comfortable 3D service and better human factors under the hybrid 3DTV service environment, we also propose new video quality enhancement technologies using a small amount of disparity map information. The proposed 8-VSB & M/H hybrid 3DTV system enables stereoscopic 3D HD, fixed 2D HD, and mobile 2D broadcasting concurrently within a 6 MHz bandwidth, and it provides maximum channel flexibility and extended service functionality as well as full backward compatibility with legacy 2D receivers.
Poster Session
Auto-converging stereo cameras for 3D robotic tele-operation
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
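The geometry being automated is simple; a hedged sketch of the toed-in convergence angle for a given target distance (how the FPGA algorithm estimates that distance from scene content is not described in the abstract):

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Toed-in convergence angle (degrees) that centers both cameras on a
    target at the given distance -- the quantity the auto-convergence
    algorithm drives instead of a touchscreen adjustment."""
    return math.degrees(2.0 * math.atan2(baseline_m / 2.0, distance_m))

# e.g. a hypothetical 12 cm baseline converging at 1.5 m:
# convergence_angle_deg(0.12, 1.5) -> about 4.6 degrees
```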
Analysis on the space-bandwidth product of digital holography for video hologram recording and reconstruction
In this paper, we present an analysis of the space-bandwidth product of digital holograms. The conditions for clear reconstruction in the in-line and off-axis digital hologram cases are derived. The correlation efficiency and the modulation transfer function (MTF) are then used for quantitative analysis of the reconstructed object. The presented analysis is verified by simulation results and is then applied to recording and reconstructing video holograms.
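A hedged sketch of the basic sampling constraints that typically enter such an analysis (the paper's full derivation also involves correlation efficiency and MTF, which are not reproduced here):

```python
import math

def max_offaxis_angle_deg(wavelength, pixel_pitch):
    """Largest reference-beam angle an off-axis hologram can record: the
    carrier fringe period wavelength/sin(theta) must span at least two
    pixels (Nyquist). Twin-image/DC separation tightens this further."""
    return math.degrees(math.asin(wavelength / (2.0 * pixel_pitch)))

def space_bandwidth_product(n_x, n_y):
    """SBP of the recorded hologram, i.e., the number of resolvable samples
    (the sensor's pixel count)."""
    return n_x * n_y

# e.g. 532 nm light on a 4.6 um pixel sensor:
# max_offaxis_angle_deg(532e-9, 4.6e-6) -> about 3.3 degrees
```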
3D resolution in computationally reconstructed integral photography
Zahra Kavehvash, Khashayar Mehrany, Saeed Bagheri, et al.
In this research we propose a new definition of three-dimensional (3D) integral imaging resolution. The general concept of two-dimensional (2D) resolution, when also used for 3D, fails to describe 3D resolvability completely. Thus, research focused on resolution improvement in 3D integral imaging systems has not thoroughly investigated the effect of each method on 3D quality; the effect has only been shown on the 2D resolution of each lateral reconstructed image. The newly introduced 3D resolution concept is demonstrated based on ray patterns, the cross-sections between them, and the sampling points. Consequently, the effect of the resulting sampling points on 3D resolvability is discussed in different lateral planes. Simulations have been performed that confirm the theoretical statements.
Three-dimensional stereoscopic display system on the table-top
Ki-Hyuk Yoon, Hyoung Lee, Sung-Kyu Kim
A 3D display is generally designed to show a 3D stereoscopic image to a viewer at the center position of the display. But some interactive 3D technology needs to interact with multiple viewers, each with their own stereoscopic image, as in an imaging demonstration. In this case, a display panel on a table is more convenient for multiple viewers. In this paper, we introduce a table-top stereoscopic display that has the potential to incorporate this interactive 3D technology. This display system enables two viewers to see different images simultaneously on the table-top display, and enables each viewer to see stereoscopic images. The display has a first optical sheet that lets multiple viewers see their own images and a second optical sheet that lets them see stereoscopic images. We use a commercial LCD display, design the first optical sheet so that two viewers see separate images, and design the second optical sheet so that each viewer sees stereoscopic images. The viewing zone of our display system is designed so that viewers from children to adults can see the three-dimensional stereoscopic images well. We expect our table-top 3D stereoscopic display system to be applied to interactive 3D display applications in the near future.
Method of crosstalk reduction using lenticular lens
Generally, glasses-free three-dimensional stereoscopic display systems must take human factors into account. These human factors include crosstalk, motion parallax, display type, lighting, age, unknown aspects of human-factors issues, and user experience. Among them, crosstalk is particularly important because it reduces the 3D effect and induces eye fatigue or dizziness. For these reasons, we considered methods of reducing crosstalk in three-dimensional stereoscopic display systems. In this paper, we suggest a method of crosstalk reduction using a lenticular lens. Optical rays derived from the projection optical system are converted to the viewing-zone shape by the convolution of two apertures. Under this condition, we can minimize and control the beam width through the optical properties of the lenticular lens (refractive index, pitch, thickness, radius of curvature) and the optical properties of the projector (projection distance, optical features). In this process, a Gaussian-type intensity distribution is converted to a rectangular-type distribution. The reduced beam width reduces the crosstalk, which was verified using the lenticular lens.
Computer generated hologram of deep 3D scene from the data captured by integral imaging
Various techniques to visualize a 3-D object/scene have been proposed until now; stereoscopic display, parallax barrier, lenticular approach, integral imaging display, and holographic display. Application for a real existing 3-D scene is one of important issues. In this paper, at first the fundamental limitation of integral imaging display for deep 3-D scene is discussed. Then a two main types of holographic display; digital holography approach that digitally capturing an interference pattern and a computer generated hologram (CGH) approach from a set of perspective images are overviewed with describing the radical advantages and disadvantages.