- Front Matter: Volume 8738
- Devices for 3D Imaging
- 3D Image Processing I
- Digital Holography I
- 3D Imaging Systems and Related I
- Holographic Display
- Digital Holography II
- 3D Image Processing II
- 3D Image Processing III
- Poster Session
Front Matter: Volume 8738
This PDF file contains the front matter associated with SPIE Proceedings Volume 8738, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Devices for 3D Imaging
Flickerless 3D shutter glasses for full-resolution stereoscopic display
Ambient light inside the viewing field of a shutter-glasses 3DTV system can cause perceivable flicker due to the high brightness of the light source. Omitting the front polarizer of the shutter glasses can reduce ambient-light flicker, but it produces noticeable ghosting whenever 3D viewers tilt their heads. In this paper, we propose new flicker-free shutter glasses that compensate for the viewer's head tilt using a tilt sensor. The crosstalk level introduced by the shutter is below 1.6% within a tilt-angle range of 0 to ±50°.
Liquid crystal lens for axially distributed three-dimensional sensing
In this paper, we present a novel three-dimensional (3D) sensing system for 3D acquisition. The proposed system uses an electronically tunable liquid crystal (LC) lens with the axially distributed sensing method. Multiple 2D images with slightly different perspectives can therefore be recorded by varying the focal length of the LC lens, without mechanical movement of the image sensor. The 3D images are then reconstructed using the ray back-projection algorithm. The preliminary functionality is also demonstrated in this paper. We believe that our proposed system may be useful for a compact 3D sensing camera system.
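The reconstruction step described in this abstract can be sketched numerically. The following is an illustrative sketch only (it assumes a simple pinhole magnification model and nearest-neighbor resampling; it is not the authors' actual implementation):

```python
import numpy as np

def reconstruct_plane(images, sensor_z, plane_z):
    """Back-project a stack of axially captured images onto one depth plane.

    Each image k was captured with the effective focus at distance
    sensor_z[k]; under a simple pinhole model a plane at depth plane_z
    appears scaled by plane_z / sensor_z[k].  Rescaling every image to a
    common magnification and averaging brings that plane into focus while
    other depths blur out.
    """
    h, w = images[0].shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    acc = np.zeros((h, w))
    for img, z in zip(images, sensor_z):
        s = z / plane_z  # inverse of the capture magnification
        src_y = np.clip(np.rint(cy + (yy - cy) * s), 0, h - 1).astype(int)
        src_x = np.clip(np.rint(cx + (xx - cx) * s), 0, w - 1).astype(int)
        acc += img[src_y, src_x]
    return acc / len(images)
```

With a single capture whose distance equals the target plane, the scale factor is 1 and the reconstruction reproduces the input exactly.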
3D Image Processing I
Global View and Depth (GVD) format for FTV/3DTV
FTV (Free-viewpoint Television) is 3DTV with an infinite number of views and ranks at the top of visual media. It enables viewers to see a 3D world by freely changing their viewpoint. MPEG has been promoting the international standardization of FTV since 2001. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC, completed in 2009, encodes multiple camera views efficiently and has been adopted by Blu-ray 3D. 3DV is a standard that targets serving a variety of 3D displays and is currently in progress. 3DV employs MVD (Multi-View and Depth) as its data format. MVD is a set of views and depths at various viewpoints. 3DV sends MVD data at a few viewpoints and synthesizes many views at other viewpoints to be displayed on various types of multi-view displays at the receiver side. The 3DV activity moved to the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of MPEG and ITU in July 2012. We propose GVD (Global View and Depth) as an alternative data format. GVD consists of a base view, base depth, residual views, and residual depths. GVD is a compact 3D representation compared to MVD, since the redundancy of MVD is removed in GVD. GVD has been accepted in JCT-3V.
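The residual-view idea behind GVD can be illustrated with a toy depth-image-based warp over one scanline. This is a hedged sketch of the general principle (nearest-pixel horizontal shifts, a made-up `shift_gain` parameter), not the format's actual view-synthesis algorithm:

```python
import numpy as np

def gvd_residual(base, base_depth, side, shift_gain):
    """Residual scanline for a GVD-style encoding.

    Each base pixel is warped toward the side viewpoint by a disparity
    proportional to its depth; the residual stored alongside the base view
    is the side view minus that synthesized prediction.  Pixels never
    written by the warp are disocclusions that the residual must cover.
    """
    w = base.shape[0]
    synth = np.zeros_like(base)
    filled = np.zeros(w, dtype=bool)
    for x in range(w):
        d = int(round(shift_gain * base_depth[x]))
        t = x + d
        if 0 <= t < w:
            synth[t] = base[x]
            filled[t] = True
    return side - synth, filled
```

When the prediction is good, the residual is small and cheap to code, which is the redundancy removal the abstract refers to.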
Elemental images for integral-imaging display
One of the differences between near-field integral imaging (NInI) and far-field integral imaging (FInI) is the ratio between the number of elemental images and the number of pixels per elemental image. While in NInI the 3D information is codified in a small number of elemental images (with many pixels each), in FInI the information is codified in many elemental images (with only a few pixels each). The latter codification is similar to the one needed for projecting the InI field onto a pixelated display when the aim is to build an InI monitor. For this reason, FInI cameras are specially adapted for capturing the InI field for display purposes. In this contribution we investigate the relations between the images captured in NInI and FInI modes, and develop an algorithm that permits the projection of NInI images onto an InI monitor.
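One common way to relate the two codifications is a pixel remapping: collecting pixel (i, j) from every NInI elemental image forms one FInI-style elemental image. A minimal sketch of that remapping (the paper's full algorithm may involve additional resampling; this shows only the index transposition):

```python
import numpy as np

def nini_to_fini(nini):
    """Remap a near-field InI capture to the far-field arrangement.

    nini has shape (Ey, Ex, Py, Px): a small Ey x Ex grid of elemental
    images, each Py x Px pixels.  Collecting pixel (i, j) from every
    elemental image yields one far-field elemental image, so the result
    has shape (Py, Px, Ey, Ex): many elemental images, few pixels each.
    """
    return nini.transpose(2, 3, 0, 1)
```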
Digital Holography I
Usage of moving nanoparticles for improved holographic recording
Metal nanoparticles are used for different applications in holographic configurations. The metal nanoparticles are placed close to an object and encode it with a time-varying random mask. A decoding mask is computed and used to obtain a super-resolved digital hologram and to eliminate the twin image and the DC term from the digital hologram. The method is also shown to be applicable to other optical methods.
Alternative approach to develop digital hologram interaction system by bounding volumes for identifying object collision
Digital holography technology has been considered a powerful method for reconstructing real objects and displaying
completed 3D information. Although many studies on holographic displays have been conducted, research on interaction
methods for holographic displays is still in an early stage. For developing an appropriate interaction method for digital
holograms, a two-way interaction which is able to provide natural interaction between humans and holograms should be
considered. However, digital holography technology is not yet fully developed to make holograms capable of naturally
responding to human behaviors. Thus, the purpose of this study was to propose an alternative interaction method capable
of applying it to interacting with holograms in the future. In order to propose an intuitive interaction method based on
computer-generated objects, we utilized a depth camera, Kinect, which provides depth information per pixel. In doing so,
humans and environment surrounding them were captured by the depth camera. The captured depth images were
simulated on a virtual space and computer graphic objects were generated on the same virtual space. Detailed location
information of humans was continuously extracted to provide a natural interaction with the generated objects. In order to
easily identify whether two objects were overlapped or not, bounding volumes were generated around both humans and
objects, respectively. The local information of the bounding volumes was correlated with one another, which made it
possible for humans to control the computer-generated objects. Then, we confirmed the result of the interaction through computer-generated holograms. As a result, we obtained a substantial reduction in computation time, with accuracy within 80%, through the use of bounding volumes.
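The speedup the abstract attributes to bounding volumes comes from replacing per-vertex comparisons with a constant-cost box test. A minimal sketch of the standard axis-aligned bounding-box (AABB) overlap check (the paper does not specify which bounding-volume type was used; AABB is shown as the simplest instance):

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box intersection test in 3D.

    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).  Two boxes
    overlap iff their extents overlap on every axis, so a collision check
    costs six comparisons regardless of how many points each object has.
    """
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```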
3D Imaging Systems and Related I
Generation of flat viewing zone in DFVZ autostereoscopic multiview 3D display by weighting factor
A new method is introduced to reduce crosstalk problems and brightness variation in the 3D image by means of the dynamic fusion of viewing zones (DFVZ) using weighting factors. The new method effectively generates a flat viewing zone at the center of the viewing zone. The new type of autostereoscopic 3D display gives less brightness variation of the 3D image when the observer moves.
Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content
With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, there is an important consideration for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress toward human-friendly mobile 3D viewing.
Expanding the degree of freedom of observation on depth-direction by the triple-separated slanted parallax barrier in autostereoscopic 3D display
Autostereoscopic multi-view 3D display systems offer fewer degrees of freedom in observation direction, both horizontal and perpendicular to the display plane, than glasses-on types. In this paper, we propose an innovative method that expands the width of the formed viewing zone in the depth direction while keeping the number of views in the horizontal direction, by using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-off 3D display. The validity of the proposal is verified by optical simulation in an environment similar to an actual case. The benefits are that the maximum number of views displayed in the horizontal direction becomes 2n, and the width of the viewing zone in the depth direction increases up to 3.36 times compared to the existing single-layer parallax barrier system.
Light intensity simulation in real space by viewing locations for autostereoscopic display design
Autostereoscopy is a common method for providing 3D perception to viewers without glasses. Autostereoscopic displays produce 3D images with a wide perspective and can present different images on the same plane depending on the point of view. In autostereoscopic displays, crosstalk occurs when the left and right images are incompletely isolated, so that one leaks into the other. This paper addresses a light-intensity simulator that can calculate crosstalk for variable viewing positions by automatically tracking viewers' heads. In doing so, we utilize a head-tracking technique based on infrared laser sensors to detect the observers' viewing positions. Preliminary results show that the proposed system is appropriate for designing autostereoscopic displays while ensuring human safety.
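Crosstalk at a given eye position is conventionally the unintended view's leakage divided by the intended view's intensity. A hedged sketch (it models each view's luminance across the viewing plane as a Gaussian lobe for illustration; the simulator described above would instead trace light through the actual optics):

```python
import math

def crosstalk_percent(x, left_center, right_center, sigma):
    """Crosstalk (%) at left-eye position x for a two-view display.

    Each view's luminance across the viewing plane is modeled as a
    Gaussian lobe of width sigma centered on its design position; the
    crosstalk is the right view's leakage over the left view's intensity.
    """
    il = math.exp(-((x - left_center) ** 2) / (2 * sigma ** 2))
    ir = math.exp(-((x - right_center) ** 2) / (2 * sigma ** 2))
    return 100.0 * ir / il
```

At the midpoint between the two lobes the leakage equals the signal and the model reports 100% crosstalk, as expected.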
Holographic Display
Study on basic problems in real-time 3D holographic display
In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for the human eye's observation and perception, and it is believed to be the most promising technology for future 3D display. However, several basic problems remain unsolved for realizing a large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLMs) always introduce zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computation time; the size and viewing zone of the reconstructed 3D optical image are limited by the space-bandwidth product of the SLM; and noise from the coherent light source as well as from the system severely degrades the quality of the 3D image. Our work is focused on these basic problems, and some initial results are presented, including a technique, derived theoretically and verified experimentally, to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer-generated holograms (CGHs) for the display.
A holographic display based on spatial multiplexing
A DMD chip is capable of displaying holographic images with gray levels and of reconstructing its image only in the space defined by the diffraction pattern induced by its pixel-arrangement structure. 2 × 5 DMD chips are combined on a board to generate a spatially multiplexed reconstructed image of 10 cm × 2 cm. Each DMD chip generates an image piece with a size of 2 cm (horizontal) × 1 cm (vertical). The reconstructed image reveals the features of the original object image, including the gray level, but it is also laden with noise from several sources.
Computer-generated hologram for 3D scene from multi-view images
Recently, the computer-generated hologram (CGH) calculated from real existing objects has been actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range.
After a unified 3-D point source set describing the captured 3-D scene is obtained from multi-view images, a hologram
pattern supporting motion-parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D
scenes are faithfully reconstructed using numerical reconstruction.
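A point-based CGH sums the contribution of every 3D point source on the hologram plane. A minimal sketch under the Fresnel approximation (an amplitude hologram; grid size and pitch are illustrative parameters, not taken from the paper):

```python
import numpy as np

def point_cgh(points, amplitudes, wavelength, pitch, nx, ny):
    """Point-source computer-generated hologram (Fresnel approximation).

    Sums the spherical wave from every scene point onto an nx x ny
    hologram plane sampled at the given pixel pitch, keeping the real
    part.  points is an (N, 3) array of (x, y, z) positions with z the
    distance from the point to the hologram plane.
    """
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    xx, yy = np.meshgrid(x, y)
    h = np.zeros((ny, nx))
    for (px, py, pz), a in zip(points, amplitudes):
        # Fresnel (paraxial) approximation of the point-to-pixel distance
        r = pz + ((xx - px) ** 2 + (yy - py) ** 2) / (2 * pz)
        h += a * np.cos(k * r)
    return h
```

The per-point loop is the part the speed-up algorithms mentioned in the session target, since the cost grows as (number of points) × (number of hologram pixels).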
Spherically-arranged piecewise planar hologram for capturing a diffracted object wave field in 360 degree
We present a new method to record and reconstruct a diffracted object wave field in all directions. For this, we use spherically-arranged holograms instead of a single spherical hologram. Our spherically-arranged
holograms are constructed to store all components of plane waves propagating in all directions. One can use the
well-known efficient FFT-based diffraction formulae such as Fresnel transform and angular spectrum method in
generation and reconstruction of our spherically-arranged holograms. It is possible to synthesize a new hologram
with an arbitrary position and orientation without the geometry of the object. Numerical experiments are
presented to show the effectiveness of our method.
A 3D visual conformity of holographic content on the stereo hologram display
Viewing sub-regions, which work as basic image cells in the viewing zone of an electro-holographic display based on a stereo hologram, are defined, and the composition of the images viewed in these regions is found. Each of these sub-regions can work as a basic image cell that provides a distinct image different from those of other sub-regions, though each of them can be divided into pieces of different compositions. When the number of pixels in each pixel cell, and of pixel cells in a panel, increases, most of these pieces will disappear because their sizes are smaller than the blurring caused by the diffraction effect. Furthermore, more than two sub-regions will fall within the pupil of each of the viewer's eyes. This might induce continuous parallax for viewers, creating the super-multiview condition.
Digital Holography II
Observation of femtosecond light pulse propagation by using digital light-in-flight recording by holography
We demonstrate motion pictures of femtosecond light pulse propagation. We adopted digital light-in-flight recording by holography as the technique for observing femtosecond light pulse propagation. We recorded and reconstructed a motion picture of a femtosecond light pulse propagating across a diffuser plate on which a test-chart pattern was printed. The center wavelength and duration of the light pulse were 800 nm and 96 fs, respectively. We successfully observed femtosecond light pulse propagation for 530 fs with this technique.
Synthesis and 3D display of multi-wavelengths digital holograms through adaptive transformation
We propose a reconstruction method for digital color holograms based on stretching techniques. With a simple affine transformation of the Fresnel reconstructions, we can adjust the focus of the digital color reconstructions of the same object in order to obtain a single synthetic digital hologram in which three different colors are multiplexed. In addition, a 3D scene is synthesized by combining multiple optically recorded digital color holograms of different objects. Numerical analysis and display tests are used to evaluate the effectiveness of the proposed method.
3D Image Processing II
Compressive sensing for improved depth discrimination in 3D holographic reconstruction
Compressive holography has attracted significant interest since its introduction about three years ago. In this paper, we present an overview of our work on reconstructing a 3D volume from its 2D recorded compressive hologram. Using the single-exposure on-line (SEOL) setup, we show how compressive sensing (CS) applied to this naturally underdetermined problem enables improved sectioning (or depth discrimination) of the reconstructed volume compared with standard in-line holography. We also present mathematical guarantees for the reconstruction of 3D volume features from a single 2D hologram and their physical implications for the sectioning of the 3D volume.
Advantage of diverging radial type for mobile stereo camera
Distortions in the perceived image characteristics for the three camera arrangements of parallel, converging, and diverging differ according to focal length, focus distance, field-of-view angle, color, magnification, and camera aligning direction. The distortions in the perceived image for the parallel and converging arrangements have been investigated on commercially available stereoscopic TVs based on high-speed LCDs with shutter glasses, and on mobile devices. However, the distortion in the perceived image for the diverging arrangement is not well known. This paper discusses the distortion in the perceived image characteristics of a diverging-type stereo camera according to the magnification determining the enlargement or reduction of the camera image, compared with those of the parallel and converging arrangements. The distortion induces the image to appear closer to the viewers for the diverging type, and farther away for the converging type. This effect becomes more prominent as the distance between the two component cameras of the diverging-type stereo camera increases. Furthermore, the effect of the diverging angle on disparity is considered, showing that the inter-camera distance can be made as small as possible.
3D Image Processing III
Concealed object segmentation and three-dimensional localization with passive millimeter-wave imaging
Millimeter-wave imaging draws increasing attention in security applications for weapon detection under clothing. In this paper, concealed-object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object: the distance is estimated from the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided with an analysis of the depth resolution.
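The two steps named in the abstract, k-means segmentation and depth from the discrepancy between segmented-object centers, can be sketched as follows. This is an illustrative toy (two intensity clusters, a pinhole-stereo depth relation z = f·b/d; the parameter values are made up, not from the paper):

```python
import numpy as np

def kmeans_mask(img, iters=20):
    """Two-cluster k-means on pixel intensity.

    Returns a mask of the brighter cluster, standing in for the concealed
    object, which in a PMMW image differs in apparent temperature from
    the body.
    """
    c = np.array([img.min(), img.max()], dtype=float)  # initial centers
    for _ in range(iters):
        d = np.abs(img[..., None] - c)   # distance of each pixel to each center
        lbl = d.argmin(-1)
        for k in range(2):
            if np.any(lbl == k):
                c[k] = img[lbl == k].mean()
    return lbl == c.argmax()

def depth_from_centers(mask_l, mask_r, focal_px, baseline):
    """Longitudinal distance from the disparity between the centroids of
    the object segmented in the left and right images: z = f * b / d."""
    cl = np.argwhere(mask_l).mean(0)[1]  # centroid column, left image
    cr = np.argwhere(mask_r).mean(0)[1]  # centroid column, right image
    return focal_px * baseline / (cl - cr)
```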
Three-dimensional polarimetric imaging based on integral-imaging techniques
In this paper, we overview a 3D polarimetric imaging system using integral imaging techniques under natural illumination conditions. To obtain polarimetric information of objects, the Stokes polarization parameters are first measured and then utilized to calculate the degree of polarization of the objects. Based on the degree-of-polarization information of each 2D image, a modified computational reconstruction method is presented to perform 3D polarimetric image reconstruction. The system may be used to detect or classify objects with distinct polarization signatures in 3D space. Experimental results also show that the proposed system may mitigate the effect of occlusion in 3D reconstruction.
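The Stokes-to-degree-of-polarization step mentioned above follows the standard definitions. A minimal sketch using the classical six-measurement scheme (linear analyzers at 0°, 45°, 90°, 135° plus right/left circular analyzers; the paper's exact measurement procedure may differ):

```python
import numpy as np

def degree_of_polarization(i0, i45, i90, i135, ircp, ilcp):
    """Stokes parameters from six intensity measurements, then DoP.

    i0..i135 are intensities behind a linear polarizer at those angles;
    ircp/ilcp behind right/left circular analyzers.  The degree of
    polarization is the ratio of polarized power to total power, in [0, 1].
    """
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical linear
    s2 = i45 - i135        # +45 vs. -45 linear
    s3 = ircp - ilcp       # right vs. left circular
    return np.sqrt(s1**2 + s2**2 + s3**2) / s0
```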
Real-time motion artifacts compensation of ToF sensors data on GPU
Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes as well as low-resolution depth data, which strongly motivates a valid correction. To counterbalance these effects, a pre-processing approach is introduced that greatly improves range-image data for dynamic scenes. We first demonstrate the robustness of our approach using simulated data and then validate our method using sensor range data. Our GPU-based processing pipeline enhances range-data reliability in real time.
Poster Session
Analysis of three-dimensional image using Tutte polynomial for polyhedral graphs
Any three-dimensional image can be represented by a polyhedral graph, where the number of edges and vertices is proportional to the quality of the image, and the image can be stored as an algebraic expression such as a Tutte polynomial, allowing the reconstruction of any three-dimensional image. The Tutte polynomial is calculated using the GraphTheory package of Maple 16, which has been optimized for polyhedral graphs with many edges and vertices, so this could be very useful for complex three-dimensional images or three-dimensional HD images. In this paper, I will present some examples of the usefulness of the Tutte polynomial, and for future work, I will investigate the use of the Bollobás-Riordan polynomial.
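The Tutte polynomial the abstract relies on can be evaluated by the classical deletion-contraction recursion. A minimal sketch (exponential-time reference implementation for small multigraphs, nothing like the optimized Maple routine the paper uses):

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) by deletion-contraction.

    edges is a list of (u, v) pairs describing a multigraph (parallel
    edges and loops allowed).  Recursion: a loop contributes a factor y,
    a bridge a factor x, and any other edge e gives T(G - e) + T(G / e).
    """
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                       # loop
        return y * tutte(rest, x, y)
    # Is (u, v) a bridge?  Search for another u-v path among the rest.
    seen, stack = {u}, [u]
    while stack:
        n = stack.pop()
        for a, b in rest:
            for s, t in ((a, b), (b, a)):
                if s == n and t not in seen:
                    seen.add(t)
                    stack.append(t)
    # Contract: merge v into u in the remaining edges.
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if v not in seen:                # bridge
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)
```

For the triangle K3, T(x, y) = x² + x + y, and T(1, 1) counts the spanning trees of a connected graph.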
Recognition of 3D facial expression from posed data
Although recognition of facial expression in 3D facial images has been an active research area, most of the prior works are limited to full frontal facial images. These techniques primarily project the 3D facial image onto 2D and manually select landmarks in the 2D projection to extract relevant features. Face recognition in 2D images can be challenging due to unconstrained conditions such as head pose, occlusion, and the resulting loss of data. Similarly, pose variation in 3D facial imaging can also result in loss of data. In most current 3D facial recognition works, when 3D posed face data are projected onto 2D, additional data loss may render 2D facial expression recognition even more challenging. In comparison, this work proposes novel feature extraction directly from 3D posed facial images, without the need for manual selection of landmarks or projection of the images into 2D space. The feature is obtained as the angle between consecutive 3D normal vectors at vertex points aligned either horizontally or vertically across the 3D facial image. Our facial-expression recognition results show that the feature obtained from vertices aligned vertically across the face yields the best classification accuracy, with an average 87.8% area under the ROC curve. The results further suggest that the same feature outperforms its horizontal counterpart in recognizing facial expressions for pose variations between 35° and 50°, with average accuracies of 80% and 60%, respectively.
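The feature described above, the angle between consecutive 3D normal vectors, reduces to a normalized dot product per consecutive pair. A minimal sketch (the vertex ordering and normal estimation are taken as given):

```python
import numpy as np

def normal_angles(normals):
    """Angle (radians) between each pair of consecutive normals.

    normals is an (N, 3) array of surface normals sampled at vertex
    points aligned along one direction (vertically across the face for
    the best-performing feature).  Returns N-1 angles via the clipped
    arccos of the pairwise dot products of unit normals.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    dots = np.clip((n[:-1] * n[1:]).sum(1), -1.0, 1.0)
    return np.arccos(dots)
```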
A new system parameters analysis method to improve image quality in digital microscopic hologram reconstruction
In order to obtain non-overlapping, high-quality reconstructed images, this paper analyzes the system parameters in digital holographic microscopy. To date, a few scholars have analyzed the system parameters that must satisfy the sampling theorem and the spectrum-separation conditions. In this paper, not only the sampling theorem and spectrum separation but also the size relationship between the reconstructed plane and the magnified image are studied, and relationships among the system parameters are proposed. First, the maximum object size is directly proportional to the wavelength and the microscope-objective focal length, and inversely proportional to the sampling interval. Second, the minimum magnification is described accurately. Finally, the paper gives the range of the recording distance. Experiments further demonstrate the validity of the proposed conclusions.
Coherent scattering stereoscopic microscopy for mask inspection of extreme ultra-violet lithography
Recently, mask inspection for extreme-ultraviolet lithography has been in the spotlight, as this is the next-generation lithography technique in the field of semiconductor production. The technology is used to make semiconductor patterns ever more delicate even as they become smaller. In mask inspection, defect sizes and locations are the major factors aggravating mask defects, which cause errors in wafer patterns. This paper presents a simulated solution of coherent scattering stereoscopic microscopy for mitigating mask defects. To inspect mask defects with the stereoscopic microscopy, we construct a stereo aerial image with a disparity map produced by the hybrid input-output algorithm and disparity-estimation methods. Preliminary results show that mask inspection by coherent scattering stereoscopic microscopy can be expected to be more accurate than 2D mask inspection.
Automated analysis of 3D morphology of human red blood cells via off-axis digital holographic microscopy
Inkyu Moon
In this paper we overview an automated method for the analysis of clinical parameters of human red blood cells (RBCs). Digital holograms of mature RBCs are recorded by a CCD camera in an off-axis interferometry setup, and quantitative phase images of the RBCs are formed by a numerical reconstruction technique. For automated investigation of the 3D morphology and mean corpuscular hemoglobin of the RBCs, the unnecessary background in the RBC phase images is removed by a marker-controlled watershed segmentation algorithm. Then, characteristic properties of each RBC, such as projected cell surface, average phase, mean corpuscular hemoglobin (MCH), and MCH surface density, are quantitatively measured. Finally, the equality of the covariance matrices and mean vectors of these features for different kinds of RBCs is experimentally analyzed using a statistical test scheme. Results show that these characteristic parameters of RBCs can be used as feature patterns to discriminate between RBC populations that differ in shape and hemoglobin content.
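Given a reconstructed phase image and a segmentation mask, the per-cell features named above reduce to simple masked statistics. A hedged sketch (the `alpha` and `wavelength_um` defaults are typical literature-style values used purely for illustration, not taken from this paper, and the resulting MCH units depend on the chosen refraction increment):

```python
import numpy as np

def rbc_features(phase, mask, pixel_area_um2, alpha=0.00196, wavelength_um=0.682):
    """Per-cell features from a quantitative phase image of one RBC.

    phase is the reconstructed phase map (radians) and mask a binary
    segmentation of the cell.  Projected surface is the masked pixel
    count times the pixel area; MCH follows the standard phase-to-dry-mass
    relation MCH ~ (lambda / (2*pi*alpha)) * mean_phase * area, where
    alpha plays the role of hemoglobin's specific refraction increment.
    """
    area = mask.sum() * pixel_area_um2
    mean_phase = phase[mask].mean()
    mch = wavelength_um / (2 * np.pi * alpha) * mean_phase * area
    return {"projected_surface": area, "mean_phase": mean_phase, "mch": mch}
```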
CSpace high-resolution volumetric 3D display
We are currently in the process of developing a static-volume 3D display, CSpace® display, that has the capability to
produce images of much larger size than any other static-volume display currently under development, with up to nearly
800 million voxel resolution. A key component in achieving the size and resolution of the display is the optical system
that transfers the pixel data from a standard DMD projection unit to the voxel size required by the display with high
contrast and minimal distortion. The current optical system is capable of such performance for only small image sizes,
and thus new designs of the optical system must be developed. We report here on the design and testing of a new optical
projection system with the intent of achieving performance close to that of a telecentric lens. Theoretical analysis with
Zemax allowed selection of appropriate lens size, spacing, and focal length, and identified the need for tilting the
assembly to produce the desired beam properties. Experimental analysis using the CSpace® prototype showed that the
improved beam parameters allowed for higher resolution and brighter images than those previously achieved, though
there remains room for further improvement of the design. Heating of the DMD and its housing components was also
addressed to minimize heating effects on the optical system. A combination of a thermo-electric cooler and a small fan
produced sufficient cooling to stabilize the temperature of the system to acceptable levels.