3D multi-view system using electro-wetting liquid lenticular lenses
Author(s):
Yong Hyub Won;
Junoh Kim;
Cheoljoong Kim;
Dooseub Shin;
Junsik Lee;
Gyohyun Koo
Lenticular multi-view systems have great potential for realizing three-dimensional images. This paper introduces the fabrication of a liquid lenticular lens array and a method for increasing the number of viewpoints at the same resolution. The tunable liquid lens array produces three-dimensional images using the electrowetting principle, in which an applied voltage changes the surface tension. The liquid lenticular device consists of a chamber, two different liquids, and a sealing plate. To fabricate the chamber, a <100> silicon wafer is wet-etched in KOH solution, yielding a trapezoid-shaped chamber after a certain etching time. The slanted chamber walls are advantageous for electrowetting, allowing a high dioptric power. Electroplating is used to make a nickel mold, and a poly(methyl methacrylate) (PMMA) chamber is fabricated through an embossing process. Indium tin oxide (ITO) is sputtered, and parylene C and Teflon AF1600 are deposited as the dielectric and hydrophobic layers, respectively. Two immiscible liquids, deionized water and a mixture of 1-chloronaphthalene and dodecane, are injected, and a glass sealing plate is attached with polycarbonate (PC) gaskets and sealed with UV adhesive. The completed lenticular lens shows 2D and 3D images when certain voltages are applied. The dioptric power and operation speed of the lenticular lens array are measured. A novel method of increasing the number of viewpoints through an electrode separation process is also proposed: the left and right electrodes of each lenticular lens can be driven with different voltages, tilting the optical axis. By switching the optical axis quickly, twice the number of viewpoints can be achieved at the same pixel resolution.
Use of display technologies for augmented reality enhancement
Author(s):
Kevin Harding
Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work instruction methods from pages printed in a book or taped to a machine to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications, ranging from simple programmable projections in the workspace to 3D displays and head-mounted gear. This paper will review where some of these tools are today and some of the pros and cons being considered for the future worker environment.
A horizontal parallax table-top floating image system with freeform optical film structure
Author(s):
Ping-Yen Chou;
Yi-Pai Huang;
Chien-Chung Liao;
Chuan-Chung Chang;
Fu-Ming Fleming Chuang;
Chao-Hsu Tsai
In this paper, a new structure for a horizontal-parallax light field 3D floating image display system is proposed. The structure consists of pico-projectors, a Fresnel lens, a micro-lens array, and a sub-lens array with freeform shape. Through these optical components, the light field of each projector can be shaped into a fan ray with high directivity in the horizontal direction and a wide scattering angle in the vertical direction. Furthermore, by reverse light tracing and the integral imaging display technique, a horizontal-parallax floating 3D image can be demonstrated in the system. Simulated results show that the proposed 3D display structure has good image quality, with crosstalk limited below 22.9%. Compared with other 3D technologies, this structure offers several benefits, including display of real high-resolution floating images, no need for physical hardware on the image plane, scalability to large-size systems, and no noise from spinning components.
Display of travelling 3D scenes from single integral-imaging capture
Author(s):
Manuel Martinez-Corral;
Adrian Dorado;
Seok-Min Hong;
Jorge Sola-Pikabea;
Genaro Saavedra
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, choosing at will the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. This method makes it possible to improve the quality of 3D display images and videos.
A study on the effects of RGB-D database scale and quality on depth analogy performance
Author(s):
Sunok Kim;
Youngjung Kim;
Kwanghoon Sohn
In the past few years, depth estimation from a single image has received increased attention due to its wide applicability in image and video understanding. Many approaches have been developed for estimating depth from a single image based on various depth cues such as shading, motion, etc. However, they fail to estimate a plausible depth map when the input color image comes from a category not represented in the training images. To alleviate this problem, data-driven approaches have become popular, leveraging the discriminative power of a large-scale RGB-D database. These approaches assume that an appearance-depth correlation exists in natural scenes. However, this assumption is likely to be ambiguous when local image regions have similar appearance but different geometric placement within the scene. Recently, depth analogy (DA) has been developed, which uses the correlation between the color image and the depth gradient. DA addresses the depth ambiguity problem effectively and shows reliable performance. However, no experiments have been conducted to investigate the relationship between database scale and the quality of the estimated depth map. In this paper, we extensively examine the effects of database scale and quality on the performance of the DA method. To compare the quality of DA, we collected a large-scale RGB-D database using Microsoft Kinect v1 and Kinect v2 in indoor environments and a ZED stereo camera in outdoor environments. Since the depth maps obtained by Kinect v2 have higher quality than those of Kinect v1, the depth maps in the Kinect v2 database are more reliable. The experimental results show that a high-quality, large-scale training database leads to high-quality estimated depth maps in both indoor and outdoor scenes.
Real-time geometric scene estimation for RGBD images using a 3D box shape grammar
Author(s):
Andrew R. Willis;
Kevin M. Brink
This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative “Block World” perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.
Free segmentation in rendered 3D images through synthetic impulse response in integral imaging
Author(s):
M. Martínez-Corral;
A. Llavador;
E. Sánchez-Ortiga;
G. Saavedra;
B. Javidi
Integral imaging is a technique that can provide not only the spatial but also the angular information of three-dimensional (3D) scenes. Important applications include 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that uses the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure whose period depends directly on the axial position of the object. By considering different periods of the IRF, we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that nonconventional reconstructions can be obtained by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
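The deconvolution idea can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the IRF is modeled here as a simple regular grid of impulses whose pitch stands in for the depth-dependent period, and the Wiener regularization constant `k` is an illustrative assumption.

```python
import numpy as np

def synthetic_irf(shape, period):
    """Simplified periodic IRF: a grid of impulses whose pitch encodes
    the axial position of the object plane (illustrative model)."""
    h = np.zeros(shape)
    h[::period, ::period] = 1.0
    return h / h.sum()

def wiener_deconvolve(g, h, k=1e-3):
    """2D Wiener deconvolution of image g with impulse response h."""
    H = np.fft.fft2(h)
    G = np.fft.fft2(g)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Simulate an integral image as object (*) IRF, then reconstruct the
# depth plane whose IRF period matches the object's axial position.
obj = np.zeros((64, 64)); obj[24:40, 24:40] = 1.0
irf = synthetic_irf(obj.shape, period=8)
integral = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(irf)))
rec = wiener_deconvolve(integral, irf)
```

Choosing a different `period` in `synthetic_irf` selects a different reconstruction plane, which is the sense in which the synthetic impulse response segments the rendered 3D scene.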
3D in natural random refractive distortions
Author(s):
Marina Alterman;
Yoav Y. Schechner
Random distortions naturally affect images taken through atmospheric turbulence or wavy water. They pose new 3D recovery problems. Distortions are caused by the volumetric field of turbulent air or the 3D shape of water waves. We show methods that recover these 3D distorting media. Moreover, it is possible to triangulate objects beyond the refracting medium. Applications include sensing and study of random refractive media in nature, and enhanced imaging including possibilities for a virtual periscope.
Estimation of the degree of polarization in low-light 3D integral imaging
Author(s):
Artur Carnicer;
Bahram Javidi
The calculation of the Stokes parameters and the degree of polarization in 3D integral images requires careful manipulation of the polarimetric elemental images. This is particularly important when the scenes are recorded in low-light conditions. In this paper, we show that the degree of polarization can be effectively estimated even when the elemental images are recorded with few photons. The original idea was communicated in [A. Carnicer and B. Javidi, “Polarimetric 3D integral imaging in photon-starved conditions,” Opt. Express 23, 6408–6417 (2015)]. First, we use the maximum likelihood estimation approach to generate the 3D integral image. Nevertheless, this method produces very noisy images, and thus the degree of polarization cannot be calculated directly. We suggest using a total variation denoising filter to improve the quality of the generated 3D images. As a result, noise is suppressed but high-frequency information is preserved. Finally, the degree of polarization is obtained successfully.
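The total variation denoising step can be sketched as follows. This is a minimal numpy sketch of gradient-descent minimization of the ROF functional; the parameters `lam`, `step`, and the iteration count are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def tv_denoise(img, lam=0.2, step=0.2, n_iter=100, eps=1e-8):
    """Gradient descent on the (smoothed) ROF functional
    E(u) = 0.5*||u - img||^2 + lam * TV(u)."""
    u = img.copy()
    for _ in range(n_iter):
        # forward differences of u
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm
        # divergence of the normalized gradient (backward differences)
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u = u - step * ((u - img) - lam * div)
    return u

rng = np.random.default_rng(1)
noisy = 0.5 + 0.2 * rng.standard_normal((64, 64))  # stand-in for a noisy MLE image
smooth = tv_denoise(noisy)
```

Because the TV term penalizes total gradient magnitude rather than gradient energy, noise is suppressed while sharp edges (high-frequency structure) survive, which is the property the abstract relies on.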
Full-color 3D display using binary phase modulation and speckle reduction
Author(s):
Osamu Matoba;
Kazunobu Masuda;
Syo Harada;
Kouichi Nitta
A 3D display system for full-color reconstruction using binary phase modulation is presented. The quality of the reconstructed objects is improved by optimizing the binary phase modulation and by accumulating speckle patterns obtained with different random phase distributions. The binary phase pattern is optimized by a modified Fresnel ping-pong algorithm. Numerical and experimental demonstrations of full-color reconstruction are presented.
Evaluation of the use of 3D printing and imaging to create working replica keys
Author(s):
Jeremy Straub;
Scott Kerlin
This paper considers the efficacy of 3D scanning and printing technologies for producing duplicate keys. Duplication of keys based on remotely sensed data represents a significant security threat, as it removes pathways to determining who illicitly gained access to a secured premises. Key to understanding this threat is characterizing how easily the data required for key production can be obtained and how well keys produced by this method work. The results of an experiment to characterize this are discussed and generalized to different key types. The effect of alternate data sources on imaging requirements is also considered.
Space/time averaging of scattered coherence functions
Author(s):
Damien P. Kelly
A new optical technique for understanding, analyzing and developing optical systems is presented. This approach is statistical in nature, where information about an object under investigation is discovered by examining deviations from a known reference statistical distribution.
Resolution of electro-holographic image
Author(s):
Jung-Young Son;
Oleksii Chernyshov;
Hyoung Lee;
Beom-Ryeol Lee;
Min-Chul Park
The resolution of the image reconstructed from a hologram displayed on a DMD is measured with light field images taken along the propagation direction of the reconstructed image. The light field images reveal that point and line images suffer strong astigmatism; in addition, lines with different orientations focus at different distances, which is also a form of astigmatism. The focusing distance of the reconstructed image is shorter than the distance to the object. Two lines in the transverse direction are resolved when the gap between them is around 16 pixels of the DMD in use. However, resolution in the depth direction is difficult to estimate due to the depth of focus of each line. Due to the astigmatism, the reconstructed image of a square appears as a rectangle or a rhombus.
Generalized phase-shifting color digital holography
Author(s):
Takanori Nomura;
Takaaki Kawakami;
Kazuma Shinomura
Two methods for applying generalized phase-shifting digital holography to color digital holography are proposed. One is wave-splitting generalized phase-shifting color digital holography, realized using a color Bayer camera. The other is multiple-exposure generalized phase-shifting color digital holography, realized with wavelength-dependent phase-shifting devices. Experimental results for both methods are presented to confirm the proposed approaches.
Full-color holographic 3D imaging system using color optical scanning holography
Author(s):
Hayan Kim;
You Seok Kim;
Taegeun Kim
We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are converted to off-axis RGB holograms and transmitted to the reconstruction stage. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.
Wavefront printing technique with overlapping approach toward high definition holographic image reconstruction
Author(s):
K. Wakunami;
R. Oi;
T. Senoh;
H. Sasaki;
Y. Ichihashi;
K. Yamamoto
A hologram recording technique, generally called a “wavefront printer”, has been proposed by several research groups for static three-dimensional (3D) image printing. Because the pixel count of current spatial light modulators (SLMs) is not enough to reconstruct the entire wavefront in the recording process, the hologram data is typically divided into a set of sub-hologram data, and each wavefront is recorded sequentially as a small sub-hologram cell in a tiling manner using an X-Y motorized stage. However, since previous wavefront printers did not optimize the cell size, the reconstructed images were degraded either by obtrusive split lines when the cell size was too large for human eyesight, making the cells visible, or by diffraction effects caused by phase discontinuities when the cell size was too small. In this paper, we introduce an overlapping recording approach for the sub-holograms that satisfies both conditions: an apparent cell size small enough to make the cells invisible, and a recording cell size large enough to suppress diffraction effects by keeping the phase of the reconstructed wavefront continuous. By taking the viewing conditions into account and optimizing the amount of overlap and the cell size, the proposed approach reconstructed higher-quality 3D images in the experiment, whereas the conventional approach suffered from visible split lines and cells.
Random phase-free computer holography and its applications
Author(s):
Tomoyoshi Shimobaba;
Takashi Kakue;
Tomoyoshi Ito
Random phase is required in a computer-generated hologram (CGH) to diffuse the object light widely and to avoid its concentration on the CGH; however, the random phase causes considerable speckle noise in the reconstructed image and degrades the image quality. We introduce a simple and computationally inexpensive method that improves the image quality and reduces the speckle noise by multiplying the object light by a designed convergence light. We furthermore propose an improved method that combines the designed convergence light with an iterative method to reduce ringing artifacts. Finally, as an application, a lensless zoomable holographic projection is introduced.
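The core replacement of the random phase by a designed convergence light can be sketched as follows. This is a minimal numpy illustration under assumed parameters (pixel pitch, wavelength, and focal length are arbitrary, and a simple Fourier-transform kinoform stands in for the authors' propagation model):

```python
import numpy as np

def convergence_phase(shape, pitch, wavelength, f):
    """Designed convergence light: a converging spherical phase that
    spreads the object light over the hologram without a random phase."""
    ny, nx = shape
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    r2 = (x * pitch) ** 2 + (y * pitch) ** 2
    return np.exp(-1j * np.pi * r2 / (wavelength * f))

# Multiply the object amplitude by the convergence light instead of a
# random phase, then propagate (here: a simple Fourier-transform CGH).
obj = np.zeros((256, 256)); obj[96:160, 96:160] = 1.0
field = obj * convergence_phase(obj.shape, 8e-6, 532e-9, 0.5)
cgh = np.angle(np.fft.fftshift(np.fft.fft2(field)))  # phase-only hologram
```

Because the convergence phase is smooth and deterministic, the reconstruction avoids the speckle produced by a random diffuser while still distributing the object light across the hologram plane.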
Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range
Author(s):
P. Latorre-Carmona;
F. Pla;
B. Javidi
In this paper, we present an overview of our previously published work on applying the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees, and a vehicle just behind one of the trees (the car being at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and other types of installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark current noise is considered, taking into account the particular features of this type of sensor. Results show that this methodology improves visualization in the photon counting domain.
Benchmarking real-time RGBD odometry for light-duty UAVs
Author(s):
Andrew R. Willis;
Laith R. Sahawneh;
Kevin M. Brink
This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.
iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications
Author(s):
Andrew R. Willis;
Kevin M. Brink
This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with leading alternative 3D features.
Stereoscopic depth of field: why we can easily perceive and distinguish the depth of neighboring objects under binocular condition than monocular
Author(s):
Kwang-Hoon Lee;
Min-Chul Park
In this paper, we introduce a highly efficient and practical disparity estimation method using hierarchical bilateral filtering for real-time view synthesis. The proposed method is based on hierarchical stereo matching with hardware-efficient bilateral filtering, which differs from the exact bilateral filter: its purpose is an edge-preserving filter that can be efficiently parallelized in hardware. The proposed hierarchical-bilateral-filtering-based disparity estimation is essentially a coarse-to-fine application of stereo matching with bilateral filtering. It works as follows: first, a hierarchical image pyramid is constructed; the multi-scale algorithm then starts by applying local stereo matching to the downsampled images at the coarsest level of the hierarchy. After the local stereo matching, the estimated disparity map is refined with bilateral filtering. The refined disparity map is then adaptively upsampled to the next finer level, where it is used as a prior for the corresponding local stereo matching, filtered again, and so on. Visual comparison using real-world stereoscopic video clips shows that the method gives better results than a state-of-the-art method in terms of robustness and computation time.
Time multiplexed pinhole array based lensless three-dimensional imager
Author(s):
Ariel Schwarz;
Jingang Wang;
Amir Shemer;
Zeev Zalevsky;
Bahram Javidi
We present an overview of a multi-variable coded aperture (MVCA) for lensless three-dimensional integral imaging (3D II) systems. The new configuration is based on a time-multiplexing method using a variable pinhole array design. The system provides higher-resolution 3D images with improved light intensity and signal-to-noise ratio compared to a single-pinhole system. The MVCA 3D II system can be designed to achieve light intensity high enough for practical use, as micro-lenslet arrays do. This configuration preserves the advantages of pinhole optics while solving the resolution limitation and long exposure times of such systems. The three-dimensional images are obtained with improved resolution, signal-to-noise ratio, and sensitivity. This lensless integral imaging system is characterized by a large depth of focus, simplicity, and low cost. In this paper we present numerical simulations as well as experimental results that validate the proposed lensless imaging configuration.
Light field display and 3D image reconstruction
Author(s):
Toru Iwane
Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced by the imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by the lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture is taken), the light field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image.
In this paper, I first show our actual light field camera and our 3D display, on which a real 3D image is reconstructed using acquired and computer-simulated light field data. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data is displayed.
Increasing the depth of field in Multiview 3D images
Author(s):
Beom-Ryeol Lee;
Jung-Young Son;
Sumio Yano;
Ilkwon Jung
A super-multiview condition simulator that can project up to four different view images to each eye is introduced. Using images having both disparity and perspective, this simulator shows that the depth of field (DOF) extends beyond the default DOF values as the number of simultaneously but separately projected view images per eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments are not as prominent for images with disparity only as for images with both disparity and perspective.
Three-dimensional far-infrared imaging by using perspective thermal images
Author(s):
Daisuke Barada
This paper proposes a method to obtain a three-dimensional thermal radiation distribution. In the method, multiple oblique-projection thermal images are obtained by moving a target object, and the three-dimensional thermal radiation distribution is reconstructed based on the projection-slice theorem. In the experiments, incandescent light bulbs or a plant are used as sample objects. The measured three-dimensional positions coincide with the actual positions, and the principle is experimentally verified.
A method of quantifying moirés on 3D displays
Author(s):
Gwangsoon Lee;
Eung-Don Lee;
Yang-Su Kim;
Namho Hur;
Jung-Young Son
A method of quantifying the amount of moiré in contact-type 3D displays is described. The color moirés in these displays are induced by the periodic blocking of part of each pixel on the panel by the boundary lines or barrier lines of the viewing-zone-forming optics. The method starts by calculating the intensity of an image laden with moirés and that of the same image with no moirés. The moiré contrast is defined as the intensity difference between the two images. The contrast values match well with those from simulated moirés for the crossing-angle range of 0° to 20°.
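The contrast definition above can be sketched numerically. This is a minimal numpy illustration, not the authors' exact metric: the normalization by the reference peak and the sinusoidal moiré model are illustrative assumptions.

```python
import numpy as np

def moire_contrast(img_moire, img_ref):
    """Moire contrast: peak intensity difference between the image laden
    with moires and the moire-free image, normalized by the reference."""
    diff = np.abs(img_moire.astype(float) - img_ref.astype(float))
    return diff.max() / img_ref.max()

# Toy example: barrier lines periodically blocking part of each pixel
# modulate a uniform panel image, inducing a sinusoidal moire.
x = np.linspace(0.0, 1.0, 512)
ref = np.full((512, 512), 0.8)                       # moire-free image
moire = ref * (0.75 + 0.25 * np.cos(2 * np.pi * 5 * x))[None, :]
print(round(moire_contrast(moire, ref), 2))          # → 0.5
```

In practice the two intensity images would come from the measured display and from a simulation (or measurement) without the viewing-zone optics, evaluated over the range of crossing angles.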
Liquid crystal lens array for 3D microscopy and endoscope application
Author(s):
Yi-Pai Huang;
Po-Yuan Hsieh;
Amir Hassanfiroozi;
Chao-Yu Chu;
Yun Hsuan;
Manuel Martinez;
Bahram Javidi
In this paper, we demonstrate two liquid crystal (LC) lens array devices, for 3D microscope and 3D endoscope applications respectively. Compared with previous 3D biomedical systems, the proposed LC lens arrays are not only switchable between 2D and 3D modes but are also able to adjust focus in both modes. The multi-function liquid crystal lens (MFLC-lens) array with a dual-layer electrode has a diameter of 1.42 mm, which is much smaller than the conventional 3D endoscope with double fixed lenses. The hexagonal liquid crystal micro-lens array (HLC-MLA), replacing the fixed micro-lens array in a 3D light field microscope, can extend the effective depth of field from 60 µm to 780 µm. To realize the LC lens arrays, a high-resistance layer needs to be coated on the electrodes to generate an ideal gradient electric-field distribution, which induces a lens-like configuration of the LC molecules. The parameters and characteristics of the high-resistance layer are investigated and discussed with the aim of optimizing the performance of the liquid crystal lens arrays.
Requirement for measurement of accommodation response based image blur due to the integral photography
Author(s):
Sumio Yano;
Hiromichi Imai;
Min-Chul Park
In the first part of this paper, the principle and development of an IP display using computer software are described. Next, measurements of the accommodation response for the developed IP display are reported. The accommodation response changed linearly as the depth position of the visual target moved in and out of the range of the depth of focus. In addition, the influence of image blur on the accommodation response was investigated experimentally using stereoscopic images. The results showed that the accommodation response coincided with the convergence point of stereoscopic images with less than 3 cpd spatial resolution. Based on these results, the measurement results of the accommodation response for the developed IP display are examined, and the requirements on the measurement conditions of the accommodation response for IP are discussed.
3D augmented reality with integral imaging display
Author(s):
Xin Shen;
Hong Hua;
Bahram Javidi
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be converted into an identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and the real-world scene with the desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality display with integral imaging.
Avalanche effect and bit independence behaviors of double random phase encoding schemes
Author(s):
Nishat Sultana;
Inkyu Moon
In this paper, we present an overview of the avalanche and bit-independence characteristics of the double random phase encoding (DRPE) scheme in the virtual optical domain. DRPE demonstrates outstanding bit-independence properties in both the Fourier and Fresnel domains. Experimental results validate that DRPE in the Fresnel domain surpasses DRPE in the Fourier domain by showing better avalanche-effect characteristics. The avalanche-effect result is remarkably poor for DRPE in the Fourier domain when only one bit of the plaintext or encryption key is altered. In contrast, DRPE in the Fresnel domain shows adequate avalanche-effect results regardless of how many bits are altered in the plaintext or in the encryption key.
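The avalanche measurement itself can be sketched with a toy Fourier-domain DRPE in numpy. This is an illustrative model, not the authors' virtual-optics implementation: the single-lens (one-FFT) encoding and the sign-based bit quantization of the ciphertext are assumptions made for the sketch. Flipping one plaintext bit changes only a small fraction of ciphertext bits, consistent with the poor Fourier-domain avalanche effect reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_fourier(plain_bits, p1, p2):
    """Toy Fourier-domain DRPE: multiply the input by a random phase,
    then apply a second random phase in the Fourier plane."""
    field = plain_bits * np.exp(1j * p1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * p2))

def to_bits(c):
    """Quantize the complex ciphertext to a bit stream (sign of re/im)."""
    return np.concatenate([(c.real > 0).ravel(), (c.imag > 0).ravel()])

n = 32
plain = rng.integers(0, 2, (n, n)).astype(float)
p1, p2 = rng.uniform(0, 2 * np.pi, (2, n, n))

flipped = plain.copy()
flipped[0, 0] = 1 - flipped[0, 0]          # alter a single plaintext bit
b0 = to_bits(drpe_fourier(plain, p1, p2))
b1 = to_bits(drpe_fourier(flipped, p1, p2))
ratio = np.mean(b0 != b1)                  # avalanche ratio (ideal: 0.5)
print(ratio)
```

A cipher with a good avalanche effect would flip about half of the ciphertext bits; here the single-pixel change perturbs every ciphertext sample only slightly, so far fewer bits flip.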
Incoherent holography by a Michelson type interferometer with a lens for a radial shear
Author(s):
Kaho Watanabe;
Takanori Nomura
A modified Michelson-type interferometer with lenses providing a radial shear is proposed for recording incoherent holograms. It enables recording a hologram by self-interference, without coherent illumination such as a laser. The interferometer has two wave plates that realize phase-shifting incoherent holography. This feature can avoid a very large bias term and the twin image, which are inherent problems of incoherent holography by self-interference. The advantages of the proposed method using lenses and wave plates are easy adjustment of the zone plate and simplification of the optical system. A preliminary experiment using an LED as an incoherent object was performed to confirm the four-step phase shifting by the wave plates.
Accurate characterization of mask defects by combination of phase retrieval and deterministic approach
Author(s):
Min-Chul Park;
Thibault Leportier;
Wooshik Kim;
Jindong Song
In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system that provides only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of a defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to use a phase retrieval algorithm to recover the general shape of the mask, and then a deterministic approach to characterize precisely the detected defects.
Modification of the reconstruction distance of Fresnel holograms for display with multiple spatial light modulators
Author(s):
Thibault Leportier;
Min-Chul Park;
Taegeun Kim
Show Abstract
In digital holography, spatial light modulator (SLM) devices are used to display holographic patterns. However, modulation is imperfect because an SLM cannot modulate phase and amplitude at the same time, so undesired terms such as the twin image can be observed in the image plane. One solution to remove the twin image contribution without a physical spatial filter is to perform complex modulation. Phase and amplitude modulation can be performed sequentially with two different SLMs. Similarly, the real and imaginary parts of a hologram can be displayed and combined in an additive configuration through a polarizing beam splitter. In both cases, a major problem is the alignment of the two display devices, since a misalignment as small as one pixel may significantly degrade the quality of the reconstruction. For our experiment, we used numerically computed data to obtain separately the real and imaginary parts of the hologram. We then focused on the additive configuration, in which two SLMs display the real and imaginary parts of the hologram respectively.
The reconstruction distance of a hologram is fixed, and the distance between each SLM and the beam splitter should be the same for the two devices. In this paper, we study the effect of having different reconstruction distances for the real and imaginary holograms. We performed simulations and explained the results with scalar diffraction theory. A method to compensate the reconstruction distance numerically is proposed for the on-axis configuration. This method can also be applied to modify the reconstruction distance of a Fresnel hologram displayed with a single SLM and has potential application in RGB holographic reconstruction.
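Numerical compensation of the reconstruction distance amounts to propagating the hologram field by the required offset before display, for example with the angular spectrum method. The sketch below assumes an illustrative wavelength and pixel pitch, not the authors' parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, pitch):
    # Propagate a complex field by a distance dz; applying this to a
    # hologram before display shifts its reconstruction plane accordingly
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 / wavelength**2 - FX**2 - FY**2
    # Transfer function; evanescent components (arg < 0) are suppressed
    H = np.where(arg >= 0,
                 np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, pitch = 633e-9, 8e-6   # assumed wavelength and SLM pixel pitch
rng = np.random.default_rng(2)
h = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
# Propagating forward then backward restores the field (no evanescent
# components at this sampling), confirming the compensation is reversible
h_roundtrip = angular_spectrum(angular_spectrum(h, wavelength, 0.05, pitch),
                               wavelength, -0.05, pitch)
```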
Local frequency estimation from intensity gradients in spatial carrier fringe pattern analysis
Author(s):
Ruihua Zhang;
Hongwei Guo
Show Abstract
Spatial carrier fringe pattern analysis is an effective tool in optical measurement, e.g. in interferometry and the fringe projection technique. However, very large phase deformations in a spatial carrier fringe pattern may increase the bandwidth of the fringe component, leading to difficulties in retrieving its phase map. To overcome this problem, many locally adaptive methods have been developed for processing spatial carrier fringe patterns with large phase variations, and local spatial frequency estimation is central to these methods. This paper introduces a simple algorithm for estimating the local frequencies of a fringe pattern with a spatial carrier. First, the intensity gradients of the fringe pattern are calculated; then the standard deviations (SDs) of the intensity gradients at each pixel are estimated from its neighborhood. Finally, the local frequencies are estimated from these SDs using a simple arccosine function. This algorithm shows potential for developing effective techniques for retrieving phases from a spatial carrier fringe pattern with large phase variations. For example, we can recover the phase map by directly integrating the local frequencies, or by using an adaptive spatial carrier phase shifting (SCPS) algorithm with the local frequencies serving as the local phase shifts. It can also be used in the Fourier transform method for exactly determining the carrier frequencies, or for aperture extrapolation to reduce boundary effects. Combined with time-frequency techniques such as the windowed Fourier transform and wavelet transform methods, it is helpful for alleviating computational burdens.
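To illustrate the gradient-SD idea, consider a 1-D cosinusoidal fringe I = A + B·cos(2πfx + φ): the windowed variances of the signal and of its forward difference satisfy Var(ΔI) = 2·Var(I)·(1 − cos 2πf), which yields the local frequency through an arccosine. This is a simplified reading of the algorithm under a pure-cosine fringe model, not the authors' exact formulation.

```python
import numpy as np

def local_frequency(I, win=32):
    # Estimate the local fringe frequency (cycles/sample) in a sliding
    # window from Var(dI) = 2 * Var(I) * (1 - cos(2*pi*f))
    dI = np.diff(I)
    f = np.empty(len(I) - win)
    for i in range(len(f)):
        vI = np.var(I[i:i + win])
        vD = np.var(dI[i:i + win - 1])
        c = np.clip(1.0 - vD / (2.0 * vI), -1.0, 1.0)
        f[i] = np.arccos(c) / (2.0 * np.pi)
    return f

x = np.arange(512)
f0 = 0.08                                    # carrier frequency, cycles/pixel
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x)
est = local_frequency(fringe)
```

For a fringe whose phase varies slowly within the window, the same relation holds with f replaced by the local frequency, which is what makes the estimator usable on deformed carrier patterns.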
Comparison of the impact of different key types on ease of imaging and printing for replica key production
Author(s):
Jeremy Straub
Show Abstract
Three-dimensional printing and scanning can be used to replicate keys. This is problematic due to the vast number of keyed locks installed and the effort required to convert them to a more resistant technology. This paper considers whether certain types of keys and locks may be more resistant to 3D scanning and printing. Data is collected and analyzed to determine whether a set of common features exists between keys that are more or less resistant to duplication based on 3D scanning and printing. Based on this analysis, the short-, medium-, and long-term viability of keyed locks is considered.
The effect of object shape and laser beam shape on lidar system resolution
Author(s):
Hongchang Cheng;
Jingyi Wang;
Jun Ke
Show Abstract
In a LIDAR system, a pulsed laser beam is propagated to a scene and then reflected back by objects. Ideally, if the beam diameter and the pulse width were close to zero, the reflected beam in the time domain would resemble a delta function, which could accurately locate an object's position. In a practical system, however, the beam has a finite size. Therefore, even if the pulse width is small, an object's shape will stretch the reflected beam along the time axis and thus affect system resolution.
In this paper, we assume a beam with a Gaussian profile. The beam can be formulated in the time domain as a delta function convolved with a shape function, such as a rectangular function. The reflected beam can then be defined as a system response function convolved with the shape function. We use symmetric objects to analyze the reflected beam. Cone, sphere, and cylinder objects are used to find a LIDAR system's response function.
The case of a large beam size is also discussed. We assume the beam shape is similar to a plane wave. With this assumption, we obtain simplified LIDAR system response functions for the three kinds of objects. We then use tiny spheres to emulate an arbitrary object and study its effect on the returned beam.
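The pulse-stretching effect can be simulated by convolving the transmit pulse with an object's range response. For a sphere under plane-wave illumination, the reflecting area per unit depth decreases roughly linearly over the front hemisphere, giving a triangular response; the pulse width, sphere radius, and sampling below are illustrative assumptions.

```python
import numpy as np

c = 3e8                                   # speed of light, m/s
dt = 1e-11                                # 10 ps sampling interval
t = np.arange(-4000, 4000) * dt

# Gaussian transmit pulse with 100 ps FWHM
tau = 100e-12 / (2 * np.sqrt(2 * np.log(2)))
pulse = np.exp(-t**2 / (2 * tau**2))

# Range response of a 5 cm sphere: reflecting area per unit depth
# falls linearly over the front hemisphere (triangular response)
R = 0.05
z = c * t / 2                             # two-way time-to-depth mapping
h = np.where((z >= 0) & (z <= R), (R - z) / R, 0.0)

# Received echo = transmit pulse convolved with the range response
echo = np.convolve(pulse, h, mode='same')

def fwhm_samples(w):
    # Number of samples at or above half of the peak value
    return int(np.count_nonzero(w >= w.max() / 2))
```

The echo's half-maximum width exceeds that of the transmit pulse, quantifying how object shape stretches the return along the time axis and degrades range resolution.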
Compact and high resolution virtual mouse using lens array and light sensor
Author(s):
Zong Qin;
Yu-Cheng Chang;
Yu-Jie Su;
Yi-Pai Huang;
Han-Ping David Shieh
Show Abstract
A virtual mouse based on an IR source, a lens array, and a light sensor was designed and implemented. The optical architecture, including the number of lenses, lens pitch, baseline length, sensor length, lens-sensor gap, and focal length, was carefully designed to achieve low detection error, high resolution, and a compact system volume. The system volume is 3.1 mm (thickness) × 4.5 mm (length) × 2, much smaller than that of a camera-based device. A relative detection error of 0.41 mm and a minimum resolution of 26 ppi were verified in experiments, so the device can replace a conventional touchpad/touchscreen. If the system thickness is relaxed to 20 mm, a resolution higher than 200 ppi can be achieved, allowing it to replace a real mouse.
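The depth sensing underlying such a lens-array design can be sketched with a simple two-lens triangulation model. All numbers below are hypothetical and the model is a generic similar-triangles sketch, not the authors' exact architecture.

```python
def depth_from_disparity(baseline, gap, disparity):
    # Similar triangles for two adjacent lenses imaging the same IR spot:
    # Z = b * g / d, with baseline b, lens-sensor gap g, disparity d
    return baseline * gap / disparity

def depth_step_per_pixel(baseline, gap, z, pixel):
    # Depth change caused by a one-pixel change in disparity at depth z;
    # a longer baseline or larger gap gives a finer depth step
    d = baseline * gap / z
    return abs(baseline * gap / (d - pixel) - z)

# Example: 4 mm baseline, 3 mm gap, 0.12 mm measured disparity
z = depth_from_disparity(4.0, 3.0, 0.12)              # 100 mm depth
step = depth_step_per_pixel(4.0, 3.0, 100.0, 0.005)   # 5 um sensor pixels
```

The trade-off visible here, between baseline/gap (system volume) and depth resolution, is the design tension the abstract describes.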
Hierarchical bilateral filtering based disparity estimation for view synthesis
Author(s):
Hong-Chang Shin;
Gwangsoon Lee;
Won-Sik Cheong;
Namho Hur
Show Abstract
In this paper, we introduce a highly efficient and practical disparity estimation method using hierarchical bilateral filtering for real-time view synthesis. The proposed method combines hierarchical stereo matching with hardware-efficient bilateral filtering. Hardware-efficient bilateral filtering differs from the exact bilateral filter: the aim is an edge-preserving filter that can be efficiently parallelized on hardware. The proposed disparity estimation is essentially a coarse-to-fine application of stereo matching with bilateral filtering. It works as follows: first, a hierarchical image pyramid is constructed; the multi-scale algorithm then starts by applying local stereo matching to the downsampled images at the coarsest level of the hierarchy. After local stereo matching, the estimated disparity map is refined with bilateral filtering. The refined disparity map is then adaptively upsampled to the next finer level, where it is used as a prior for the corresponding local stereo matching, filtered again, and so on. Visual comparison using real-world stereoscopic video clips shows that the method gives better results than a state-of-the-art method in terms of robustness and computation time.
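The coarse-to-fine flow described above can be sketched in simplified form. For brevity this sketch uses single-pixel matching costs and a two-level pyramid, and replaces the hardware-efficient filter approximation with a direct bilateral filter; window sizes and parameters are illustrative.

```python
import numpy as np

def match(left, right, max_d, prior=None):
    # Winner-takes-all local matching with a per-pixel absolute-difference
    # cost; with a prior from the coarser level, only disparities within
    # +/-1 of the prior are searched
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            lo, hi = 0, min(max_d, x)
            if prior is not None:
                lo, hi = max(lo, prior[y, x] - 1), min(hi, prior[y, x] + 1)
            best, best_cost = 0, np.inf
            for d in range(lo, hi + 1):
                cost = abs(left[y, x] - right[y, x - d])
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp

def bilateral(disp, guide, radius=2, sigma_s=2.0, sigma_r=0.2):
    # Edge-preserving refinement of the disparity map: spatial Gaussian
    # weights combined with range weights taken from the guide image
    h, w = disp.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ww = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                                    - (guide[y, x] - guide[yy, xx]) ** 2
                                    / (2 * sigma_r ** 2))
                        acc += ww * disp[yy, xx]
                        wsum += ww
            out[y, x] = acc / wsum
    return out

# Synthetic pair: the right view is the left view shifted by 2 pixels
rng = np.random.default_rng(4)
left = rng.random((16, 32))
right = np.roll(left, -2, axis=1)

# Coarse level: downsample by 2, match, refine with the bilateral filter
coarse = match(left[:, ::2], right[:, ::2], max_d=2)
coarse = np.rint(bilateral(coarse, left[:, ::2])).astype(int)

# Fine level: upsample the disparity (values doubled) as a prior
prior = np.repeat(coarse, 2, axis=1) * 2
fine = match(left, right, max_d=4, prior=prior)
```

The prior restricts the fine-level search range, which is what makes the hierarchical scheme cheap enough for real-time use while the bilateral step keeps disparity edges aligned with image edges.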
Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview
Author(s):
A. Markman;
B. Javidi
Show Abstract
Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. A QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination changes during scanning due to the error correction built into the QR code design. Integral imaging is an imaging technique that generates a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of the scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon counting on the EI prior to compression. Photon counting is a non-linear transformation of the data that creates redundant information, thus improving image compression. The compressed data is encrypted using the DRPE. Once the information is stored in the QR codes, they are scanned using a smartphone device. The scanned information is decompressed and decrypted, and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
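The photon-counting step can be modeled as Poisson sampling with per-pixel means proportional to the normalized irradiance. The photon budget and the stand-in elemental image below are illustrative, not the authors' experimental data.

```python
import numpy as np

def photon_count(img, n_photons, seed=0):
    # Draw Poisson counts with mean proportional to normalized pixel
    # irradiance; most pixels become zero, so the photon-limited image
    # is sparse and compresses far better than the original
    rng = np.random.default_rng(seed)
    return rng.poisson(n_photons * img / img.sum())

rng = np.random.default_rng(5)
elemental = rng.random((64, 64))            # stand-in elemental image
limited = photon_count(elemental, n_photons=500)
zero_fraction = (limited == 0).mean()       # sparsity created by the transform
```

The long runs of zeros produced this way are exactly what run-length and Huffman coding exploit, which is why photon counting before compression shrinks the payload that must fit into the QR codes.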