Proceedings Volume 10666

Three-Dimensional Imaging, Visualization, and Display 2018


Volume Details

Date Published: 17 August 2018
Contents: 10 Sessions, 32 Papers, 19 Presentations
Conference: SPIE Commercial + Scientific Sensing and Imaging 2018
Volume Number: 10666

Table of Contents

  • Front Matter: Volume 10666
  • 3D Imaging
  • 3D Image Acquisition and Processing I
  • 3D Visualization and Related Technologies
  • 3D Image Acquisition and Processing II
  • Digital Holography in Metrology and Imaging
  • Human Factor
  • 3D Image and Related Technology I
  • 3D Image and Related Technology II
  • Poster Session
Front Matter: Volume 10666
This PDF file contains the front matter associated with SPIE Proceedings Volume 10666, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
3D Imaging
Design options for 360 degree viewable table-top digital color holographic displays
We have designed and successfully implemented 360-degree viewable holographic display prototype systems. The core idea of the system design lies in exploiting the fast operating speed of a DMD for binary amplitude modulation of the light field, which is distributed to more than 1,000 viewpoints along the 360-degree viewing circumference. A slanted downward viewing angle and a 360-degree viewable three-dimensional (3D) image over the center of the tabletop display are achieved by specially designed optics. As a result, solid-looking 3D moving color images larger than 3 inches are rendered and can be observed by several viewers at the same time from different viewing positions. We have implemented and experimented with several variations of the system: tiling of SLM modules (2x2 tiling with 4 DMDs for mono-color display and 1x2 tiling with 6 DMDs for color display), different SLMs (a DMD with 13.68 μm pixel pitch and 1,024x768 resolution, and a DMD with 10.8 μm pixel pitch and 1,920x1,200 resolution), and different image-floating optics ((1) double parabolic mirrors, (2) one parabolic mirror and one beam splitter, (3) two spherical mirrors and one flat mirror). We report the results of various display system implementations based on several combinations of the above-mentioned design options.
Enhanced 3D performance by biconvex electrowetting lenticular lens structure
In this paper, the drawbacks of the conventional electrowetting lenticular lens, such as unstable operation, low dioptric power, high operating voltage, and low fill factor, are resolved through a biconvex structure. In our previous study, there was only one interface, between DI water and oil. Here, an interface between ETPTA and oil is added to form a biconvex structure. The biconvex structure was fabricated by exploiting the fact that liquid ETPTA solidifies upon exposure to UV light. The amount of ETPTA was adjusted to control the curvature of the interface between the ETPTA and oil, and the volume of oil was controlled to realize zero dioptric power at 0 V. The biconvex electrowetting lenticular lens has strong optical properties, reaching a maximum dioptric power of 2000 D with a 414.7 μm aperture diameter and operating at voltages of 0-17 V. The dioptric power was 0 D at 0 V, meaning the lens is flat, and 2000 D at 17 V, meaning the lens is sufficiently convex to view a 3D image. The viewing angle was measured as 46 degrees, the response time as 0.83 ms, and the crosstalk as 16.18%. A 24-view image was tested by combining the fabricated 5-inch lenticular lens with a display (G Pro 2).
3D TV based on integral photography
M. Kawakita, H. Sasaki, N. Okaichi, et al.
We studied an integral three-dimensional (3D) TV based on integral photography to develop a new form of broadcasting that provides a strong sense of presence. The integral 3D TV can display natural 3D images that have motion parallax in the horizontal and vertical directions. However, a large number of pixels are required to obtain superior 3D images. To improve image quality, we applied ultra-high-definition video technologies to an integral 3D TV system. Furthermore, we are developing several methods for combining multiple cameras and display devices to improve the quality of integral 3D images.
Development of spatial light modulator with ultra fine pixel pitch for electronic holography (Conference Presentation)
Chi-Sun Hwang, Yong-Hae Kim, Gi Heon Kim, et al.
A spatial light modulator (SLM) with an ultra-fine pixel pitch (circa 1 μm) has long been considered a key requirement for realizing electronic holograms with a wide viewing angle. Two approaches are proposed to accomplish an SLM panel with a 1 μm pixel pitch. First, an SLM with a liquid-crystal light modulator controlled by a TFT backplane built on a glass substrate is proposed, following the scaling-down methods of flat-panel display technology. By introducing sub-micrometer patterning processes, an SLM panel with a 3 μm pixel pitch was successfully developed for the first time. The SLM, with a 2-inch diagonal, had a resolution of 16K by 2K, and a hologram with depth was reconstructed with the manufactured SLM. High-performance oxide-semiconductor TFTs with a 1 μm channel length have been developed for the SLM; the technical issues in reaching a 1 μm pixel pitch will be discussed. Second, phase-change materials (PCMs) have been used for memory devices and for optical recording devices: in the former, information is recorded and read using electrical signals, while in the latter, information is recorded and read using light (laser) signals. We propose an SLM with a PCM in which information is recorded using an electrical signal and read using a light signal. Light modulation by a PCM pattern recorded with a pulsed laser was successfully demonstrated through the reconstruction of hologram images. Operation of arrayed pixels with a PCM pattern driven by Si MOSFETs is under development, and the technical challenges for a PCM-based SLM will be discussed.
3D Image Acquisition and Processing I
Ray-space processing for omnidirectional FTV
Masayuki Tanimoto, Hirokuni Kurokawa
FTV (Free-viewpoint Television) enables users to view a 3D scene by freely changing the viewpoint, as if they were actually there. FTV is developed based on ray-space representation. Omnidirectional FTV is FTV with a very wide field of view (FOV), namely a 360-degree FOV. The 4D spherical ray-space is analyzed and applied to omnidirectional FTV: the 4D spherical ray-space of a group of rays through one point is derived and then extended to the ray-space captured by many cameras on a circle. View generation for omnidirectional FTV needs rays that are not captured by real cameras. These rays are synthesized by interpolating the captured ray-space, so that the intersections of the captured ray-space and the ray-space of rays emitted from a light source have the same color. Omnidirectional FTV with full parallax is realized by using this 4D ray-space processing.
Compressive sensing with a block-strategy for fast image acquisitions
Thibault Leportier, Vladyslav Selotkin, Myungha Kim, et al.
Compressive sensing is a recent technique developed for the reconstruction of large signals from a small number of measurements. It relies on the assumption that the signal to recover is sparse, and the performance of the reconstruction depends on the level of sparsity. In practical cases, however, the sparsity of the image to recover is unknown, making it difficult to estimate the number of measurements necessary to reconstruct the image with satisfying quality. In this study, we examined a strategy where the image is reconstructed by CS in two steps: a first step with a small number of measurements to estimate the number of measurements needed, and a second step for the final reconstruction. In addition, we investigated the benefit of partitioning the image of interest to estimate locally the number of measurements needed for the reconstruction. We demonstrated that our strategy can reconstruct images with a PSNR similar to that obtained with the conventional method, but with fewer measurements.
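To make the two-step, block-wise idea concrete, here is a minimal sketch, not the authors' implementation: each block is probed with a few random measurements, the local sparsity is estimated from a rough ISTA reconstruction, and the measurement count for the final pass is chosen from that estimate. The block size, threshold, and the 4·k·log(n/k) rule are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ista(A, y, lam=0.05, n_iter=200):
    """Recover sparse coefficients x from y = A @ x by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

def sense(block, m, rng):
    """m random Gaussian measurements of the block's (sparse) DCT coefficients."""
    coeffs = dctn(block, norm='ortho').ravel()
    A = rng.standard_normal((m, coeffs.size)) / np.sqrt(m)
    return A, A @ coeffs

def recover_block(block, rng):
    n = block.size
    # Step 1: few measurements, rough reconstruction, local sparsity estimate.
    A, y = sense(block, n // 8, rng)
    k = max(int(np.sum(np.abs(ista(A, y)) > 1e-2)), 1)
    # Step 2: measurement count chosen from the estimated local sparsity.
    m = min(int(4 * k * np.log(max(n / k, 2.0))), n)
    A, y = sense(block, m, rng)
    return idctn(ista(A, y).reshape(block.shape), norm='ortho'), m

rng = np.random.default_rng(0)
img = np.outer(np.hanning(64), np.hanning(64))     # smooth (sparse-in-DCT) toy image
B, out = 32, np.zeros((64, 64))                    # partition into B x B blocks
for i in range(0, 64, B):
    for j in range(0, 64, B):
        out[i:i+B, j:j+B], m = recover_block(img[i:i+B, j:j+B], rng)
        print(f"block ({i},{j}): {m} measurements for {B*B} pixels")
```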
Privacy-enabled displays
David Carmona-Ballester, Juan M. Trujillo-Sevilla, Lara Díaz-García, et al.
In this work we present a brief insight into the capability of multilayer displays to selectively display information depending on the observer. We labeled the views of a light field as blocked and non-blocked, and a predefined text was then assigned accordingly, modifying it to achieve a privacy criterion in the blocked case. Two ways to define the private views are presented. An evaluation of the output of both techniques was carried out in simulation, in both the spatial and frequency domains. Results showed that privacy is achievable and that each technique has an optimal operating point when the time-multiplexing capabilities of the multilayer display are taken into account. A trade-off between the quality of the blocked and non-blocked views was also found.
Computational reconstruction technique in integral imaging with enhanced visual quality
In this paper, we propose a visual-quality-enhanced 3D reconstruction algorithm for integral imaging. Conventional integral imaging has the critical problem that the visual quality of 3D objects degrades when low-resolution elemental images are used. Although PERT is one solution, the size of its 3D scenes differs from the optical reconstruction because it does not consider the space between back-projected pixels on the reconstruction planes. Therefore, we account for this space using a convolution operator, which can be designed according to the aperture shape. To support our proposed method, we carry out an optical experiment and computer simulations.
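The following is a minimal sketch of the idea, under assumed parameters rather than the paper's exact setup: elemental images are back-projected with a depth-dependent shift, and each is first convolved with a kernel shaped like the pickup aperture (here a disk) so that the space between back-projected pixels is filled.

```python
import numpy as np
from scipy.signal import convolve2d

def disk_kernel(radius):
    """Normalized disk kernel standing in for the pickup aperture shape."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def reconstruct_plane(elemental, shift, aperture_radius):
    """Back-project a (K, K, h, w) set of elemental images onto one depth plane.

    shift: per-view disparity in pixels at this plane (depth-dependent).
    """
    K, _, h, w = elemental.shape
    H, W = h + shift * (K - 1), w + shift * (K - 1)
    acc, cnt = np.zeros((H, W)), np.zeros((H, W))
    kernel = disk_kernel(aperture_radius)
    for i in range(K):
        for j in range(K):
            # Spread each elemental image with the aperture-shaped kernel so
            # the space between back-projected pixels is filled, then shift.
            spread = convolve2d(elemental[i, j], kernel, mode='same')
            acc[i*shift:i*shift + h, j*shift:j*shift + w] += spread
            cnt[i*shift:i*shift + h, j*shift:j*shift + w] += 1
    return acc / np.maximum(cnt, 1)

elem = np.random.rand(5, 5, 32, 32)                # synthetic 5x5 elemental images
plane = reconstruct_plane(elem, shift=4, aperture_radius=2)
```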
3D Visualization and Related Technologies
Seeing the sound we hear: optical technologies for visualizing sound wave
Yasuhiro Oikawa, Kenji Ishikawa, Kohei Yatabe, et al.
Optical methods have been applied to visualize sound waves and have received a considerable amount of attention in both the optical and acoustical communities. We have researched optical methods for sound imaging, including laser Doppler vibrometry and the Schlieren method. More recently, parallel phase-shifting interferometry with a high-speed polarization camera has been used, which can take slow-motion video of sound waves in the audible range. This presentation briefly reviews recent progress in the optical imaging of sound in air and introduces applications including acoustic transducer testing and the investigation of acoustic phenomena.
Optical 3D visualization under inclement weather conditions
In this paper, we propose optical three-dimensional (3D) visualization under inclement weather conditions, including fog and night environments. For visualization under fog, we treat fog as an unknown scattering medium and use the peplography technique, which estimates the scattering medium by a Gaussian random process and detects ballistic photons from the scattering medium by photon-counting imaging. In addition, we use photon-counting imaging with Bayesian estimation and adaptive statistical parameters for night vision. In this method, the a priori information of the scene is modeled as a Gamma distribution for calculation of the posterior distribution, and the adaptive statistical parameters are calculated from the reconstructed 3D images. To obtain 3D information under inclement weather conditions, we use a passive 3D imaging technique such as integral imaging and a computational reconstruction algorithm with a 3D point cloud. Finally, we optimize these algorithms for real-time processing and wearable devices. To support our proposed method, we implement preliminary experiments.
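As a concrete illustration of the Bayesian photon-counting step, here is a minimal sketch with illustrative parameters: it exploits the conjugacy of a Gamma prior with the Poisson likelihood of photon counts, and sets the prior parameters adaptively from a first-pass estimate, standing in for the paper's adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.clip(rng.random((64, 64)), 0.05, 1.0)   # stand-in night-scene irradiance

# Simulate T photon-counting frames: each pixel fires Poisson(Np * scene).
Np, T = 0.5, 30                                    # mean photons per pixel per frame
counts = rng.poisson(Np * scene, size=(T, 64, 64))

# Adaptive statistical parameters: prior shape/rate set from a first-pass
# estimate of the scene (an assumption standing in for the paper's scheme).
first_pass = counts.mean(axis=0) / Np
alpha = 1.0 + 10.0 * first_pass                    # Gamma prior shape
beta = np.full_like(alpha, 10.0)                   # Gamma prior rate

# Conjugate update: the posterior is Gamma(alpha + sum(counts), beta + T * Np),
# and its mean is the Bayesian estimate of the scene irradiance.
estimate = (alpha + counts.sum(axis=0)) / (beta + T * Np)
print("MSE:", np.mean((estimate - scene) ** 2))
```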
3D Image Acquisition and Processing II
Depth and width reproducibility of integral photography from multi-view stereoscopic image
Sumio Yano, Yuta Katayose, Hyoung Lee, et al.
Multi-view stereoscopic images were produced via the pick-up method for an object set in the computer, and integral photography was generated from these multi-view stereoscopic images. During pick-up, the optical axis of each camera in the array was aligned to one point in front of the camera array. A calculation method was derived for the depth position and width of the object displayed by the integral photography generated in this way. Based on the derived calculation method, the distortion in the displayed and reproduced depth position and width of the object in the prototyped integral photography was considered.
Plenoptic imaging techniques for improving accuracy and robustness of object tracking
Dae Hyun Bae, Jae Woo Kim, Hae Chan Noh, et al.
Object tracking is a core technique in many computer vision applications. The problem becomes especially challenging when the target object is fully or even partially occluded. A recent work has shown the feasibility of utilizing plenoptic imaging techniques to resolve such occlusion problems. Specifically, it constructs focal stacks from plenoptic image sequences and selects an optimal image sequence from the stacks that maximizes the tracking accuracy. Even though that technique has proven the merit of using plenoptic images in object tracking, there is still room for improvement. In this paper, we propose two simple but effective algorithms to improve both the accuracy and the robustness of object tracking based on plenoptic images. We first propose to use an image sharpening technique to reduce the blur that refocused images inherently have. The image sharpening makes the shape of objects more distinct, and thus a higher accuracy in object tracking can be achieved. We also propose an adaptive bounding-box proposal algorithm to overcome difficult cases where the size of the target object in image space changes drastically. This improves the robustness of object tracking compared to prior techniques, which assumed fixed-size objects. We validate our proposed algorithms on two different scenarios, and the experimental results confirm the benefit of our method.
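As an illustration of the first step, the sketch below applies unsharp masking, one common sharpening choice; the abstract does not commit to a specific filter, so the filter and its parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Sharpen by adding back the high-frequency residual img - blur(img)."""
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

refocused = np.random.rand(128, 128)   # stand-in for one refocused focal-stack slice
sharpened = unsharp_mask(refocused)    # fed to the tracker in place of the original
```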
Forming aerial 3D images with smooth motion parallax in combination of arc 3D display with AIRR
Hirotsugu Yamamoto, Kazuki Kawai, Haruki Mizushina, et al.
This paper proposes a new way of forming an aerial three-dimensional (3D) image that gives viewers smooth motion parallax. The proposed aerial 3D display is composed of an arc 3D display and aerial imaging by retro-reflection (AIRR), which features a wide viewing angle, large-size scalability, and low cost with a mass-production process. The arc 3D display consists of arc-shaped scratches on a transparent plastic plate; its principle is based on directional scattering. When light impinges on an arc-shaped scratch, it is scattered mainly in the radial direction. The position of the bright spot on an arc scratch depends on the pupil position, and the distance between the bright spots seen by the two eyes on an arc scratch is proportional to the radius of curvature and is equivalent to the binocular parallax. Thus, by changing the radius of curvature, we can show a 3D image using a single LED illumination. This paper proposes an optical system that forms an aerial 3D image with AIRR, which consists of a light source, a beam splitter, and a retro-reflector. The arc scratches are illuminated by quasi-parallel light generated by a Fresnel lens and a light-emitting diode (LED). In order to extract the directionally scattered light, we place the retro-reflector parallel to the beam splitter. The transmitted light does not impinge on the beam splitter; only the scattered light reflects off the beam splitter and forms the aerial image of the arc 3D display.
Virtual reality for crime scene visualization
Philip Engström
The Swedish National Forensic Centre has been monitoring the development of 3D sensing technology for 10 years and has recently started using 3D laser scanning to measure Swedish crime scenes. Once a crime scene is documented in 3D it is also possible to visualize it in 3D, which opens the possibility of using Virtual Reality (VR). VR has clear advantages over other visualization methods since it enables a person to virtually visit the scene of the crime in a natural manner, i.e. by physically walking around in the scene. One key aspect of VR is that it enables the user to understand the dimensions of the scene in a natural way. The demands the Swedish Police place on VR have been investigated and summarized in five key design guidelines. We also give insight into which steps of the crime-fighting process will benefit most from VR. Real-time streaming 360° cameras can also be used as a data source, enabling a person to visit a scene without physically traveling there. We believe that this technology can deliver immersive VR experiences that can be very useful within our field.
Digital Holography in Metrology and Imaging
Digital holography under non paraxial conditions
S. Thibault, C. Pichette, M. Piché, et al.
Knowledge of the exact complex (amplitude and phase) wavefield scattered for different illuminating beams allows the computation of the 3D spatial distribution of a specimen. This work looks at how diffraction tomography behaves under high-numerical-aperture focusing conditions. Scalar theory is no longer valid in such a system, and nonparaxial vectorial field theory must be used. Numerical methods such as FDTD techniques can also be used to investigate the interaction of this field with the specimen.
Random amplitude or phase modulation for three-dimensional sensing and imaging
Three-dimensional imaging is very attractive for biomedical fields, industrial inspection, and other applications. Computational optical sensing and imaging is widely studied owing to recent developments in imaging devices, computational power, and related technologies, and it is also used for three-dimensional sensing and imaging. In this area, random amplitude or phase modulation is introduced to improve performance. In this paper, computational ghost imaging using a designed random amplitude modulation is presented: owing to the designed random amplitude modulation, the number of measurements can be reduced. A phase modulation for twin-image reduction in in-line digital holography is also presented: owing to the random phase modulation, the overlap between an object image and its conjugate image can be reduced.
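A minimal sketch of the ghost-imaging reconstruction step follows; it uses plain random binary masks rather than the paper's designed patterns, so it illustrates only the baseline correlation G = ⟨(b − ⟨b⟩)(P − ⟨P⟩)⟩ from which the designed modulation reduces the number of measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
obj = np.zeros((32, 32))
obj[8:24, 12:20] = 1.0                             # stand-in transmissive object

M = 4000                                           # number of modulation patterns
patterns = rng.integers(0, 2, size=(M, 32, 32)).astype(float)
# Bucket (single-pixel) signal: total light through the object per pattern.
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
# Correlation reconstruction: G = <(b - <b>) (P - <P>)>.
ghost = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=(0, 0)) / M
```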
Automated quantification of cardiomyocytes beating profile with time-lapse digital holographic microscopy
Inkyu Moon, Keyvan Jaferzadeh
This paper overviews time-lapse off-axis digital holographic microscopy (DHM) integrated with information-processing algorithms for automatically measuring the dynamic quantitative phase profiles of beating cardiomyocytes. The off-axis DHM provides time-lapse quantitative phase images (QPI) of cardiomyocytes at 10 Hz for one minute. Experimental results show that multiple dynamic parameters of beating cardiomyocytes can be analyzed by the presented automated procedures, which are specifically dedicated to processing the time-lapse DHM phase images. The presented method can be useful for quantitative comparison between the beating profiles of normal cardiomyocytes and abnormal activities, and these multiple beating parameters can be used to characterize the physiological state of cardiomyocytes.
Three-dimensional imaging based on common-path off-axis incoherent digital holography (Conference Presentation)
In this invited presentation, we will introduce our recent research on 3D imaging based on common-path off-axis incoherent digital holography. 3D imaging by single-shot measurement is very attractive for the biological imaging field. In the proposed technique, two focusing lenses with different diffraction gratings are implemented on a phase-mode spatial light modulator, so that two diffracted incoherent light waves can interfere at the image sensor. This system can be compact and stable. We will present several experiments, including ones with LED sources, and discuss the potential for biological imaging.
Human Factor
Monocular depth sense in a light field display
The accommodation and convergence responses in a light field display that can provide up to 8 images to each eye of the viewer are investigated. The increase in depth of field (DOF) with an increasing number of projected images is verified for both monocular and binocular viewing. Seven subjects with visual acuity greater than 1.0 show that their responses match their real-object responses as the number of images increases to 7 or more, though there are distinctive differences between objects. The matching performance of binocular viewing is more stable than that of monocular viewing when the number of images is less than 6, but the response stability of the accommodation increases as the number becomes more than 7.
Microstereopsis is good, but orthostereopsis is better: precision alignment task performance and viewer discomfort with a stereoscopic 3D display
John P. McIntire, Paul R. Havig, Lawrence K. Harrington, et al.
Two separate experiments examined user performance and viewer discomfort during virtual precision alignment tasks while viewing a stereoscopic 3D (S3D) display. In both experiments, virtual camera separation was manipulated to correspond to no stereopsis cues (zero separation), several levels of microstereopsis (20, 40, 60, and 80%), and orthostereopsis (100% of interpupillary distance). Viewer discomfort was assessed before and after each experimental session, measured subjectively via self-report on the Simulator Sickness Questionnaire (SSQ). Objective measures of binocular status (phoria and fusion ranges) and standing postural stability were additionally evaluated pre- and post-session. Overall, the results suggest binocular fusion ranges may serve as useful objective indicators of discomfort from S3D viewing, perhaps as supplemental measures to standard subjective reports. For the group as a whole, the S3D system was fairly comfortable to view, although roughly half of the participants reported some discomfort, ranging from mild to severe, and typically with the larger camera separations. Microstereopsis conferred significant performance benefits over the no-stereopsis conditions, so microstereoscopic camera separations might be of great utility for non-critical viewing applications. However, performance was best with near-orthostereoscopic or orthostereoscopic camera separations. Our results support the use of orthostereopsis for critical, high-precision manual spatial tasks performed via stereoscopic 3D display systems, including remote surgery, robotic interaction with dangerous or hazardous materials, and related teleoperative spatial tasks.
3D Image and Related Technology I
Non-line-of-sight 3D imaging (Conference Presentation)
In an optical Line-of-Sight (LOS) scenario, such as one involving a LIDAR system, the goal is to recover an image of a target in the direct path of the transmitter and receiver. In Non-Line-of-Sight (NLOS) scenarios, the target is hidden from both the transmitter and the receiver by an occluder, e.g. a wall. Recent advancements in technology, computer vision, and inverse light transport theory have shown that it is possible to recover an image of a hidden target by exploiting the temporal information encoded in multiply scattered photons. The core idea is to acquire data using an optical system composed of an ultra-fast laser that emits short pulses (on the order of femtoseconds) and a camera capable of recovering the photons' time-of-flight information (a typical resolution is on the order of picoseconds). We reconstruct 3D images from these data with the backprojection algorithm, a method typically found in computed tomography, which is parallelizable and memory-efficient, although it only provides an approximate solution. Here we present improved backprojection algorithms for application to large-scale scenes with a large number of scatterers and diameters from meters to hundreds of meters. We apply these methods to the NLOS imaging of rooms and lunar caves.
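A minimal, confocal-style sketch of the basic backprojection kernel follows (geometry, units, and data are illustrative assumptions; the optimized large-scale variants discussed above are not reproduced): every voxel accumulates the transient counts whose time bin matches its round-trip distance to each wall sample.

```python
import numpy as np

c = 3e8                                        # speed of light, m/s
bin_dt = 16e-12                                # time-bin width, s (~16 ps)

# Wall sample points on the plane z = 0 (laser and detector assumed confocal).
g = np.linspace(-1.0, 1.0, 16)
wall = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
# transients[i, t]: photon counts at wall sample i, time bin t (stand-in data).
transients = np.random.poisson(0.1, size=(len(wall), 1024))

# Voxel grid of the hidden volume behind the wall (z > 0).
xs = ys = np.linspace(-0.8, 0.8, 32)
zs = np.linspace(0.2, 1.2, 32)
vol = np.zeros((32, 32, 32))
for i, (px, py) in enumerate(wall):
    for zi, z in enumerate(zs):
        # Round-trip wall-voxel-wall distance for every voxel in this z-slice.
        d = np.sqrt((xs[None, :] - px)**2 + (ys[:, None] - py)**2 + z**2)
        t = np.clip((2.0 * d / c / bin_dt).astype(int), 0, 1023)
        vol[:, :, zi] += transients[i, t]      # accumulate matching time bins
```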
Augmented reality integration of fused LiDAR and spatial mapping
Matthew B. Selleck, David Burke, Chase Johnston, et al.
Fusing 3D scenes generated from multiple, spatially distributed sensors produces a higher-quality data product with fewer shadows or islands in the data. As an example, while airborne LiDAR systems scan the exterior of a structure, a spatial mapping system generates a high-resolution scan of the interior. Fusing the exterior and interior scanned data streams allows the construction of a fully realized 3D representation of the environment by asserting an absolute reference frame. The implementation of this fused system allows simultaneous real-time streaming of point clouds from multiple assets, tracking of personnel and assets in the fused 3D space, and visualization on a mixed-reality device. Several challenges were solved: 1) the tracking and synchronization of multiple independent assets; 2) identification of the network throughput for large data sets; 3) the coordinate transformation of collected point-cloud data to a common reference; and 4) the fused representation of all collected data. We leveraged our advancements in real-time point-cloud processing to allow a user to view the single fused 3D image on a HoloLens. The user is also able to show or hide the fused features of the image as well as alter it in six degrees of freedom and scale. This fused 3D image allows users to see a virtual representation of their immediate surroundings or allows remote users to gain knowledge of a distant location.
Characterizing three dimensional open cell structures without segmentation
Joseph H. Nurre, Thomas E. Dufresne, John H. Gideon
Foam cells, particle conglomerates, biological tissue slices, and colloidal suspensions are just a few examples of collections that create an image with multiple touching or overlapping regions. Characterizing the open cell size of such a continuous structure is tedious and computationally intensive for large 3D data sets. Typically, it is accomplished by segmenting the cells with a watershed technique and aggregating the statistics of all regions found. This paper provides the mathematical foundation for a newly discovered relationship between the average pixel value of a Euclidean Distance Map (EDM) and the radius of a conic section. This relationship allows a computationally simple and accurate characterization of the aggregate diameter of these open cell structures without segmentation.
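The paper's derivation is not reproduced here, but the flavor of the relationship is easy to check numerically: for a solid 2D disk of radius R, the continuous integral gives a mean EDM value of R/3, which the sketch below verifies with scipy's distance transform.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

for R in (50, 100, 200):
    n = 2 * R + 9                                  # leave a background border
    y, x = np.mgrid[:n, :n]
    disk = (x - n // 2) ** 2 + (y - n // 2) ** 2 <= R ** 2
    edm = distance_transform_edt(disk)             # distance to nearest background
    print(f"R={R}: mean EDM inside = {edm[disk].mean():6.2f},  R/3 = {R/3:6.2f}")
```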
3D topography of reflective samples by single-shot digital holographic microscopy (Conference Presentation)
Jorge Garcia-Sucerquia, Raul Castañeda
In this contribution, the 3D topography of a reflective sample is obtained by single-shot digital holographic microscopy. An off-axis digital holographic microscope operating in reflection mode and in the telecentric regime is utilized to reproduce the 3D topography of a fully reflective microscopic sample. The main characteristics of the proposed method that distinguish it from other strategies for the same task are: i) the possibility of producing the 3D topography in a single shot, ii) the use of the complete field of view of the microscope, iii) operation with a sensitivity of λ/100, iv) operation without phase perturbations introduced by the illuminating-imaging system, and v) no need for numerical processing beyond that regularly required to recover the phase map of the sample. A complete analysis of the illuminating-imaging system of the digital holographic microscope, through the use of ABCD diffraction theory, is presented, together with 3D topographies of a USAF resolution test target.
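For reference, the standard reflection-mode identity that converts a recovered phase map into topography (a textbook relation, not quoted from this abstract) is sketched below.

```latex
% In reflection, light traverses a height step h twice, so the unwrapped
% phase difference maps linearly to surface height:
\[
  h(x, y) = \frac{\lambda}{4\pi}\,\Delta\varphi(x, y),
\]
% which is why the topographic sensitivity of such a microscope is naturally
% quoted as a fraction of the illumination wavelength \lambda.
```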
3D Image and Related Technology II
3D reconstructions from spectral light fields
The parametrization of light rays in the form of light fields (LFs) has become the standard and probably the most common way to represent, analyze, and process rays emitted from 3D objects or from 3D displays. Essentially, LFs are 4D maps representing the spatial and angular distribution of ray intensity. Nowadays, with the increasing availability of spectral imagers, the conventional LF can be augmented with spectral information, yielding what we call spectral light fields (SLFs). SLFs refer to a 5D distribution over the spatial, angular, and spectral dimensions of the rays. Thus, an SLF can be viewed as spectral radiance over a 2D manifold, or as a 5D parameterization of the plenoptic function. In this paper we show the utility of SLFs for digital 3D reconstruction. We show that the additional spectral domain provides important information that can be utilized to overcome 3D reconstruction artefacts caused by ambiguities in commonly captured LFs. We demonstrate the utilization of SLFs for profilometry and refocusing.
Diffraction-free light sheets with arbitrary beam profiles (Conference Presentation)
Diffractive spreading is a fundamental property of light and is inversely proportional to the beam waist of a propagating beam. For instance, a Gaussian beam at a wavelength of 800 nm focused to a 7-micrometer full-width at half maximum (FWHM) at its beam waist would have a Rayleigh range—the propagation distance from the waist over which the beam doubles its cross section—of only 137 micrometers. Here, we demonstrate a diffraction-free space-time light sheet (a one-dimensional pulsed beam) with 7-micrometer FWHM propagating in free space for 25 mm while preserving its spatial features. By introducing a highly correlated spatio-temporal spectrum via a two-dimensional pulse shaper (consisting of a phase-only reflective spatial light modulator, a grating, and a few cylindrical lenses), we generate various light sheets with arbitrary beam profiles at the pulse center and a diffraction-free propagation distance of approximately 200 Rayleigh ranges for the corresponding beam-waist size at the pulse center. The arbitrary light-sheet profiles include hollow sheets (bottle beams) and even Airy light sheets that accelerate transversely only in the local time frame of the pulse and are acceleration-free as a function of propagation distance. Moreover, we obtain the spatio-temporal beam profiles of the light sheets by experimentally measuring the complex spectra and performing computational two-dimensional Fourier transformations. Light sheets with arbitrary beam profiles and controllable spectral properties may be instrumental in super-resolution light-sheet microscopy for 3D bio-imaging, nonlinear and multimodal spectroscopy, standoff detection of chemicals, and one-dimensional plasma and filamentation generation.
High-resolution spatial image display with multiple UHD projectors
Hayato Watanabe, Masahiro Kawakita, Naoto Okaichi, et al.
Light field displays can provide naturally viewable three-dimensional (3D) images without the need for special glasses. However, improving the resolution of 3D images is difficult because considerable image information is required. Therefore, we propose two new light field display methods that use multiple ultra-high-definition projectors to reproduce a high-resolution spatial image. The first method is based on integral imaging: multiple elemental images are superimposed onto a lens array using multiple projectors placed at optimal positions, and an integral 3D image with enhanced resolution and viewing angle is reproduced by projecting each elemental image as collimated light rays at different predetermined angles. We prototyped a display system having six projector units and realized a resolution of approximately 100,000 pixels and a viewing angle of approximately 30°. The second method, aimed at further resolution enhancement, is based on multi-view projection. By constructing a new display optical system that reproduces a full-parallax light field and by developing a special 3D screen with isotropic, narrow, non-Gaussian diffusion characteristics, optical 3D images could be reconstructed that were difficult to achieve with conventional methods. We prototyped a display system comprising two projector units and realized a higher resolution of approximately 330,000 pixels compared to our previous full-parallax light field display systems.
3D integral microscopy based in far-field detection
G. Scrofani, J. Sola-Pikabea, A. Llavador, et al.
Lately, integral-imaging systems have shown very promising capabilities for capturing the 3D structure of microscopic and macroscopic scenes. The aim of this work is to provide an optimal design for 3D integral microscopy with extended depth of field and enhanced lateral resolution. By placing an array of microlenses at the aperture stop of the objective, this setup provides a set of orthographic views of the 3D sample. Adopting well-known integral imaging reconstruction algorithms, it can be shown that the depth of field as well as the spatial resolution are improved with respect to conventional integral microscopy imaging. Our claims are supported by theory and by experimental images of a resolution test target and biological samples.
Matching-based depth camera and mirrors for 3D reconstruction
Trong-Nguyen Nguyen, Huu-Hung Huynh, Jean Meunier
Reconstructing 3D object models plays an important role in many computer vision applications. Instead of employing a collection of cameras and/or sensors as in many studies, this paper proposes a simple way to build a cheaper system for 3D reconstruction using only one depth camera and two or more mirrors. Each mirror is equivalent to a depth camera at another viewpoint. Since all scene data are provided by a single depth sensor, our approach can be applied to moving objects and does not require any synchronization protocol, as a set of cameras would. Experiments were performed on easy-to-evaluate objects to confirm the reconstruction accuracy of our proposed system.
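The core geometric step, treating each mirror as a virtual viewpoint, amounts to a Householder reflection of the mirrored points across the calibrated mirror plane. The sketch below illustrates this with assumed plane parameters; it is not the authors' calibration or matching pipeline.

```python
import numpy as np

def reflect_points(points, n, d):
    """Reflect Nx3 points across the plane n . x + d = 0 (Householder reflection)."""
    n = n / np.linalg.norm(n)
    return points - 2.0 * (points @ n + d)[:, None] * n[None, :]

# Points the depth camera sees *in* the mirror lie behind the glass; reflecting
# them across the (calibrated) mirror plane recovers their true positions.
virtual = np.array([[0.3, 0.1, 2.5],
                    [0.2, 0.0, 2.6]])
mirror_normal = np.array([0.0, 0.0, 1.0])   # assumed plane z = 2.0
true_points = reflect_points(virtual, mirror_normal, d=-2.0)
```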
Poster Session
Methods of voxel data rendering for visualizing on multi-layer volumetric displays
K. Osmanis, G. Valters, R. Zabels, et al.
To visualize naturally observable 3D scenes with a continuous range of observation angles on a multi-plane volumetric 3D display, specific data processing and rendering methods have to be developed and tailored to the architecture of the display device. As one of the most important requirements is the capability of providing real-time visual feedback, the data processing pipeline has to be optimized for effective execution on general consumer-grade hardware. In this work, the technological aspects and limitations of a volumetric 3D display based on a static multi-planar projection volume are analyzed in the context of developing an effective, real-time-capable volumetric data processing pipeline. A basic architecture for the data processing pipeline was developed and tested. Initial results showed very slow performance when executed on the central processing unit. Based on these results, the data processing pipeline was optimized to utilize graphics processing unit (GPU) acceleration, which resulted in a substantial decrease in execution times, reaching the goal of real-time-capable volumetric refresh rates.
Three-dimensional object visualization and detection in low light illumination using integral imaging: an overview
We overview a recently published work that utilizes three-dimensional (3D) integral imaging (InIm) to capture the 3D information of a scene under low-illumination conditions using passive imaging sensors. An object behind occlusion is imaged using 3D InIm, and a computational 3D reconstructed image is generated from the captured scene information at a particular depth plane, showing the object without occlusion. Moreover, 3D InIm substantially increases the signal-to-noise ratio of the 3D reconstructed scene compared with a single two-dimensional (2D) image, as readout noise is minimized; this occurs because the 3D InIm reconstruction algorithm is naturally optimum in the maximum-likelihood sense in the presence of additive Gaussian noise. After 3D InIm reconstruction, facial detection using the Viola-Jones object detection framework succeeds, whereas it fails on a single 2D elemental image.
Depth estimation of computational reconstruction in integral imaging by considering the pixel blink rate
In this paper, we propose a new high-resolution depth estimation algorithm for integral imaging, which can obtain three-dimensional (3D) images using a lenslet array. In conventional studies, stereo matching is used for depth estimation. However, it is not the best solution for integral imaging, since the 3D images are usually low-resolution. Therefore, we propose a pixel-blink-rate-based algorithm using the pixel of the elemental images rearrangement technique (PERT) in integral imaging. Our optical experiment shows that the depth resolution of our technique is dramatically improved compared with a conventional method.
3D resolution enhancement of integral imaging using resolution priority integral imaging and depth priority integral imaging
In this paper, we propose new passive image sensing and visualization of 3D objects using the concepts of both resolution priority integral imaging (RPII) and depth priority integral imaging (DPII) to improve the lateral and depth resolutions of 3D images simultaneously. We regard the elemental images as the most important information for the 3D performance of integral imaging, since they carry both the lateral and depth resolutions of the 3D objects; all resolutions of the reconstructed 3D images are therefore determined by these elemental images at the pickup stage. In this paper, we analyze the lateral and depth resolutions, which depend on the basic parameters of the camera or lens used for pickup, and then describe our proposed method. To support our proposed method, we carry out computer simulations. In addition, we analyze how the surface light of 3D objects placed at arbitrary positions can be expressed within the permitted range according to the camera parameter settings. Finally, to evaluate the performance of our method, the peak signal-to-noise ratio (PSNR) is calculated.
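For completeness, the evaluation metric named above is the standard PSNR; a minimal definition (images assumed normalized to [0, 1]) is sketched below.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; ref and test assumed in [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.rand(64, 64)                        # stand-in reference plane
noisy = np.clip(ref + 0.05 * np.random.randn(64, 64), 0, 1)
print(f"PSNR = {psnr(ref, noisy):.1f} dB")
```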
Depth resolution enhancement of computational reconstruction of integral imaging
In this paper, we propose a new computational reconstruction technique for integral imaging that enhances depth resolution by using integer-valued, non-uniform shifting pixels. In a general integral imaging system, a 3D object can be recorded and visualized (or displayed) using a lenslet array. In previous studies, many reconstruction techniques, such as computational volumetric reconstruction and the pixel of elemental images rearrangement technique (PERT), have been reported. However, conventional computational volumetric reconstruction has low visual quality and depth resolution because low-resolution elemental images and uniformly distributed shifting pixels are used for reconstruction. Our proposed method instead uses non-uniformly distributed shifting pixels for reconstruction, so the visual quality and depth resolution may be enhanced. Our experimental results show the improvement in depth resolution and visual quality of the reconstructed 3D images.
Digital holographic sound imaging for frequency estimation of piezoelectric vibrator
Sudheesh K. Rajput, Osamu Matoba
Digital holography (DH) has been used for sound-field imaging and visualization: the object wave, phase-retarded by sound-wave propagation in the region, is recorded, and a temporal phase profile is obtained as the sound data. Based on this characteristic of DH, we present a frequency estimation system for vibrating objects such as piezoelectric vibrators. The method employs an off-axis DH geometry in which the object wave passes near the vibrating object and is superimposed with a reference wave. The interference patterns are recorded as digital holograms with a high-speed image sensor as a function of time. Finally, to estimate the vibration frequency of the piezoelectric vibrator, Fresnel reconstruction of free-space propagation and acousto-optic data processing techniques are used. The method can detect the frequencies of a piezoelectric vibrator with high accuracy. We present experimental results for the estimation of the vibration frequency of a piezoelectric vibrator.
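A minimal sketch of the final estimation step follows: the temporal phase profile extracted from the hologram sequence is windowed, Fourier-transformed, and the spectral peak is taken as the vibration frequency. The frame rate, frequency, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100_000                                   # high-speed sensor frame rate, Hz
t = np.arange(4096) / fs
f_true = 12_340.0                              # piezo vibration frequency, Hz
# Stand-in temporal phase profile from the reconstructed holograms.
phase = 0.2 * np.sin(2 * np.pi * f_true * t) + 0.01 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(phase * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"estimated {freqs[spec[1:].argmax() + 1]:.0f} Hz")   # skip the DC bin
```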
An overview of flexible sensing integral imaging for three-dimensional profilometric reconstruction with occlusion removal
We overview a previously reported method for three-dimensional (3D) profilometric reconstruction with occlusion removal based on flexible sensing integral imaging. With flexible sensing, the field of view of the imaging system can be increased by randomly distributing a camera array on a non-planar surface. The camera matrices are estimated using the captured multi-perspective elemental images, and the estimated matrices are used for 3D reconstruction. Object recognition is then performed on the reconstructed image by nonlinear correlation to detect the 3D position of the object. Finally, an algorithm is proposed to visualize the 3D profile of the object with occlusion removal.