Proceedings Volume 7690

Three-Dimensional Imaging, Visualization, and Display 2010 and Display Technologies and Applications for Defense, Security, and Avionics IV

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 April 2010
Contents: 13 Sessions, 49 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2010
Volume Number: 7690

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Holographic Imaging I
  • 3D Displays and Related I
  • 3D Displays and Related II
  • Holographic Imaging II
  • 3D Image Acquisition
  • 3D Visualization and Processing I
  • 3D Visualization and Processing II
  • 3D Visualization and Processing III
  • Poster Session: Three-Dimensional Imaging, Visualization, and Display 2010
  • Military Display Systems and Applications I
  • Military Display Systems and Applications II
  • Stereoscopic Displays for Training and Operations
  • Head and Body-Worn Displays
Holographic Imaging I
Current research activities on holographic video displays
"True 3D" display technologies target replication of physical volume light distributions. Holography is a promising true 3D technique. Widespread utilization of holographic 3D video displays is hindered by current technological limits; research activities are targeted to overcome such difficulties. Rising interest in 3D video in general, and current developments in holographic 3D video and underlying technologies increase the momentum of research activities in this field. Prototypes and recent satisfactory laboratory results indicate that holographic displays are strong candidates for future 3D displays.
Speckle-based phase retrieval applied to 3D microscopy
Digital holographic microscopy (DHM) is widely used for three-dimensional imaging of micro-objects. However, DHM requires interference between the object beam and a known background, the reference beam. This two-beam nature makes the technique prone to external vibrations and entails tedious adjustment of beam ratios to achieve high fringe contrast. Iterative phase retrieval techniques instead reconstruct the wavefront from intensities sampled at several axial planes. This method is an attractive alternative to digital holographic methods, mainly because it is a single-beam technique. The wavefront reconstruction is achieved by using the sampled intensity matrices in an appropriate diffraction integral. The angular spectrum propagation approach of scalar diffraction theory is used to propagate the wavefront between sampling planes. Here, an overview of the phase retrieval technique from multiple intensity samplings applied to 3D microscopy is provided.
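The abstract describes the algorithm only in words; below is a minimal sketch of the angular-spectrum propagator and the multi-plane amplitude-replacement loop it refers to. The function names, square sampling grid, uniform pixel pitch, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multiplane_phase_retrieval(intensities, z_planes, wavelength, dx, iterations=50):
    """Recover the complex field from intensities sampled at several axial planes."""
    field = np.sqrt(intensities[0]).astype(complex)   # start with zero phase at the first plane
    for _ in range(iterations):
        for k in range(1, len(z_planes)):
            field = angular_spectrum_propagate(field, wavelength, dx, z_planes[k] - z_planes[k - 1])
            # Replace the amplitude with the measured one, keep the evolving phase
            field = np.sqrt(intensities[k]) * np.exp(1j * np.angle(field))
        # Propagate back to the first plane and enforce its measured amplitude
        field = angular_spectrum_propagate(field, wavelength, dx, z_planes[0] - z_planes[-1])
        field = np.sqrt(intensities[0]) * np.exp(1j * np.angle(field))
    return field
```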
Using disparity in digital holograms for three-dimensional object segmentation
Digital holography allows one to sense and reconstruct the amplitude and phase of a wavefront reflected from or transmitted through a real-world three-dimensional (3D) object. However, some combinations of hologram capture setup and 3D object pose problems for the reliable reconstruction of quantitative phase information. In particular, these are cases where the twin image or noise corrupts the reconstructed phase. In such cases it is usual that only amplitude is reconstructed and used as the basis for metrology. A focus criterion is often applied to this reconstructed amplitude to extract depth information from the sensed 3D scene. In this paper we present an alternative technique based on applying conventional stereo computer vision algorithms to amplitude reconstructions. In the technique, two perspectives are reconstructed from a single hologram, and the stereo disparity between the pair is used to infer depth information for different regions in the field of view. Such an approach has inherent simplifications in digital holography as the epipolar geometry is known a priori. We show the effectiveness of the technique using digital holograms of real-world 3D objects. We discuss extensions to multi-view algorithms, the effect of speckle, and sensitivity to the depth of field of reconstructions.
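Because the epipolar geometry of the two reconstructed perspectives is known a priori, the disparity search reduces to matching along image rows. The following rough block-matching sketch illustrates that simplification on rectified amplitude reconstructions; it is not the authors' algorithm, and the window size and disparity range are arbitrary assumptions.

```python
import numpy as np

def horizontal_disparity(left, right, block=7, max_disp=32):
    """Brute-force 1D block matching: epipolar lines are assumed to be horizontal rows."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((ref - cand) ** 2)      # SSD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp   # depth is inversely proportional to disparity for a given baseline
```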
Novel proposals in widefield 3D microscopy
E. Sanchez-Ortiga, A. Doblas, G. Saavedra, et al.
Patterned illumination is a successful set of techniques in high-resolution 3D microscopy. In particular, structured illumination microscopy is based on the projection of 1D periodic patterns onto the 3D sample under study. In this research we propose the implementation of a very simple method for the flexible production of 1D structured illumination. Specifically, we propose the insertion of a Fresnel biprism after a monochromatic point source. The biprism produces a pair of twin, fully coherent, virtual point sources. After imaging the virtual sources onto the objective aperture stop, the expected 1D periodic pattern is produced within the 3D sample. The main advantage of using the Fresnel biprism is that by simply varying the distance between the biprism and the point source one can tune the period of the fringes while keeping their contrast.
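The tunability claim follows from the textbook two-source interference relation: a biprism of index n and wedge angle α placed a distance a from the source creates twin virtual sources separated by roughly 2a(n-1)α, so the fringe period scales inversely with a. The small calculation below is only an illustration with made-up parameter values.

```python
import numpy as np

def biprism_fringe_period(wavelength, n, alpha, source_to_biprism, source_to_plane):
    """Fringe period produced by the two virtual sources of a Fresnel biprism.

    wavelength        : illumination wavelength [m]
    n                 : refractive index of the biprism
    alpha             : biprism wedge angle [rad]
    source_to_biprism : distance a from the point source to the biprism [m]
    source_to_plane   : distance D from the point source to the observation plane [m]
    """
    deviation = (n - 1.0) * alpha                        # small-angle deviation of each half-prism
    separation = 2.0 * source_to_biprism * deviation     # virtual twin-source separation
    return wavelength * source_to_plane / separation     # two-source interference period

# Moving the biprism away from the source shrinks the period (tunable pattern)
for a in (5e-3, 10e-3, 20e-3):
    p = biprism_fringe_period(532e-9, 1.5, np.deg2rad(1.0), a, 0.2)
    print(f"a = {a*1e3:4.1f} mm -> period = {p*1e6:8.1f} um")
```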
3D Displays and Related I
Three-dimensional displays suitable for human visual field characteristics
We have developed several three-dimensional display systems that are matched to the characteristics of the human visual field. In this article, we describe our display systems, which are matched to human communication in the close-range, medium-range, and distant-range categories.
LED projection architectures for stereoscopic and multiview 3D displays
LED-based projection systems have several interesting features: extended color gamut, long lifetime, robustness, and a fast turn-on time. However, the possibility of developing compact projectors remains the most important driving force for investigating LED projection. This is related to the limited light output of LED projectors, a consequence of the relatively low luminance of LEDs compared with high-intensity discharge lamps. We have investigated several LED projection architectures for the development of new 3D visualization displays. Polarization-based stereoscopic projection displays are often implemented using two identical projectors with passive polarizers at the output of their projection lenses. We have designed and built a prototype of a stereoscopic projection system that incorporates the functionality of both projectors. The system uses high-resolution liquid-crystal-on-silicon light valves and an illumination system with LEDs. The possibility of adding an extra LED illumination channel was also investigated for this optical configuration. Multiview projection displays allow the visualization of 3D images for multiple viewers without the need to wear special eyeglasses. Systems with a large number of viewing zones have already been demonstrated. Such systems often use multiple projection engines. We have investigated a projection architecture that uses only one digital micromirror device and an LED-based illumination system to create multiple viewing zones. The system is based on time-sequential modulation of the different images for each viewing zone and a special projection screen with micro-optical features. We analyze the limitations of LED-based illumination for the investigated stereoscopic and multiview projection systems and discuss the potential of laser-based illumination.
Stereoscopic display technologies for FHD 3D LCD TV
Dae-Sik Kim, Young-Ji Ko, Sang-Moo Park, et al.
Stereoscopic display technologies have been developed as one type of advanced display, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the data scanning of the panel and the LC response characteristics of LCD TV cause interference among frames (that is, crosstalk), and this degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control of an FHD 3D LCD TV.
Near-eye displays: state-of-the-art and emerging technologies
This paper will start with a brief review of the recent advancements in near-eye displays, then focus on the development and results of two emerging technologies aiming to address two critical issues related to near-eye displays: (a) a freeform optical technology promising near-eye displays with an ultimately compact form factor, close to a pair of eyeglasses rather than a traditional helmet style; and (b) a vari- and multi-focal technology promising more accurate rendering of depth cues than conventional stereoscopic displays.
3D Displays and Related II
Polarization imaging of a 3D object by use of digital holography and its application
A polarimetric imaging method of a 3D object by use of on-axis phase-shifting digital holography is presented. The polarimetric image results from a combination of two kinds of holographic imaging using orthogonal polarized reference waves. Experimental demonstration of a 3D polarimetric imaging is presented. Pattern recognition by use of polarimetric phase-shifting digital holography is also presented. Using holography, the amplitude and phase difference distributions between two orthogonal polarizations of 3D phase objects are obtained. This information contains both complex amplitude and polarimetric characteristics of the object, and it can be used for improving the discrimination capability of object recognition. Preliminary experimental results are presented to demonstrate the idea.
Depth cues in human visual perception and their realization in 3D displays
Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been technologically demonstrated and refined, among them stereoscopic, multi-view, integral-imaging, volumetric, and holographic types. Most current approaches utilize the conventional stereoscopic principle. However, they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but is only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eyes. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception relevant to both the image quality and the visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare the visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.
High-definition 3D display for enhanced visualization
In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in a variety of applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Applications include training, mission rehearsal and planning, and enhanced visualization. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Recent work involving generation of 3D terrain from aerial imagery will be demonstrated. Discussion of the use of this display technology in military and medical industries will be included.
Holographic Imaging II
Exploring cell dynamics at nanoscale with digital holographic microscopy for diagnostic purposes
P. Marquet, D. Boss, J. Kühn, et al.
Digital holographic microscopy (DHM) is a technique that allows obtaining, from a single recorded hologram, quantitative phase images of living cells with interferometric accuracy. Specifically, the optical phase shift induced by the specimen on the transmitted wavefront can be regarded as a powerful endogenous contrast agent, depending on both the thickness and the refractive index of the sample. The quantitative phase images allow the derivation of highly relevant cell parameters, including dry mass density and its spatial distribution. Thanks to a decoupling procedure, cell thickness and intracellular refractive index can be measured separately. Consequently, cell morphology and shape as well as cell membrane fluctuations can be accurately monitored. As far as red blood cells are concerned, mean corpuscular volume (MCV) and mean corpuscular hemoglobin concentration (MCHC), two highly relevant clinical parameters, have been measured non-invasively at the single-cell level. The nanometric axial and microsecond temporal sensitivities of DHM have permitted measurement of the red blood cell membrane fluctuations (CMF) over the whole cell surface.
Generation, encoding, and presentation of content on holographic displays in real time
Enrico Zschau, Robert Missbach, Alexander Schwerdtner, et al.
This paper discusses our solution for driving holographic displays with interactive or video content encoded in real time using SeeReal's Sub-Hologram technology in combination with off-the-shelf hardware. Guidelines for correctly creating complex content, including aspects of transparency in holograms from both the content side and the holography side, are presented. The conventional approaches for generating computer-generated holograms are compared with our solution using Sub-Holograms, which drastically reduces the required computation power. Finally, the computing platform and the specification of our 20-inch direct-view holographic prototype are presented.
Fourier hologram generation from multiple incoherent defocused images
Jae-Hyeung Park, Seung-Woo Seo, Ni Chen, et al.
A novel method to capture a Fourier hologram of three-dimensional objects under regular incoherent illumination is proposed. Multiple images of the three-dimensional objects are captured by a camera while moving the focal plane along the optic axis over the whole object space. The captured defocused images are processed taking the point spread function of the camera into account, and the Fourier hologram is finally synthesized. The principle is explained and verified experimentally.
Wavefront error analysis and compensation in a digital holographic microscope
Moonseok Kim, Sukjoon Hong, Kwangsup Soh, et al.
Digital holography (DH) has the major advantage of retrieving the three-dimensional (3D) information of an object from a single interference recording. In particular, the digital holographic microscope (DHM), which uses a microscope objective (MO), has been studied for 3D microscopy. In recent years, research has progressed on compensating aberrations and improving the resolution of the optical system. Most small aberrations caused by an MO can be compensated by existing techniques. However, the measured phase is distorted in optical systems whose illuminating wave carries a significant wavefront deformation, larger than a number of wavelengths. In this paper, the relation between the illuminating wave and the reconstructed phase is studied on the basis of wave optics, and the analysis is confirmed by simulations. The wavefront-compensation analysis is applied in theory to a super-resolution DHM, and the technique for retrieving the intensity and phase distributions is demonstrated in simulation.
Fresnel patterns insertion on image for data encoding and robust perceptual image hashing
T. Fournel, A. Rivoire, J. M. Becker, et al.
A stack of Fresnel patterns is inserted into an image for data encoding and image-hashing synchronization, dedicated to local authentication. The problem is to preserve both the decoding and the perception of the image content. In addition, the insertion must not excessively alter the perceptual image hash.
3D Image Acquisition
Characteristics of diverging radial type stereoscopic camera
Jung-Young Son, Seok-Won Yeom, Dong-Soo Lee, et al.
A diverging-type stereo camera arrangement is introduced for use in hand-held mobile devices such as mobile phones, hand PCs, and introscopes. By adjusting the diverging angle, the arrangement allows the inter-camera distance to be made much smaller than in conventional stereo camera arrangements such as the parallel and radial types. Computer simulation shows that it can introduce more distortion than the parallel type, but it can enhance the sense of depth.
3D imaging system for biometric applications
Kevin Harding, Gil Abramovich, Vijay Paruchura, et al.
There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens of micron range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper will describe methods to collect medium resolution 3D data, plus high-resolution 2D images, using a line of sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.
High-efficiency acquisition of ray-space using radon transform
In this paper, we propose a method for high-efficiency acquisition of Ray-Space for FTV (Free-viewpoint TV). In this research, incomplete data are captured directly by a novel device, a photodiode/lens array, and transformed into full information by the Radon transform. Conventional acquisition of Ray-Space using multiple cameras requires capturing a large amount of data. However, Ray-Space has redundancy, because it consists of a set of lines whose slopes depend on the depth of objects. We use the Radon transform to exploit this redundancy. The Radon transform is a set of projection data along different directions; thus the Ray-Space can be reconstructed from projection data over a limited range by the inverse Radon transform. Capturing part of the projection data corresponds to capturing sums of several rays with one pixel. We simulated reconstruction of the Ray-Space from projection data computed by computer simulation of the capturing device. As a result, by using fewer pixels than rays, we could reduce the information needed to reconstruct the Ray-Space.
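As a rough illustration of the reconstruction step described here, the sketch below builds a toy 2D slice of the Ray-Space (a few depth-dependent lines, standing in for an epipolar-plane image), takes a limited set of Radon projections, and recovers the slice with the inverse Radon transform. It assumes scikit-image is available; the scene and the number of projections are arbitrary and do not model the real capturing device.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy 2D slice of the Ray-Space: a few lines whose slopes encode object depths
epi = np.zeros((128, 128))
for offset, slope in ((20, 0.3), (64, -0.2), (100, 0.5)):
    for x in range(128):
        u = int(round(offset + slope * (x - 64)))
        if 0 <= u < 128:
            epi[u, x] = 1.0

# Capture only a limited set of projections (each value is a sum of several rays) ...
angles = np.linspace(0.0, 180.0, 30, endpoint=False)
projections = radon(epi, theta=angles, circle=False)

# ... and recover the full slice from them with the inverse Radon transform
recovered = iradon(projections, theta=angles, circle=False)
print(projections.shape, recovered.shape)
```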
All-around convergent view acquisition system using ellipsoidal mirrors
In this paper, we present a new image acquisition system for FTV (Free-viewpoint TV). The proposed system can capture a dynamic scene from all-around views. It consists of two ellipsoidal mirrors, a high-speed camera, and a rotating tilted mirror. The two ellipsoidal mirrors differ in size and ellipticity. The object is placed at a focus of the ellipsoidal mirrors. The system is smaller than earlier systems because the ellipsoidal mirrors can reduce the virtual images. The high-speed camera acquires multi-viewpoint images by mirror scanning. We simulated the system with ray tracing and confirmed its principle.
3D Visualization and Processing I
Optical slicing of large scenes by synthetic aperture integral imaging
Héctor Navarro, Genaro Saavedra, Ainhoa Molina, et al.
Integral imaging (InI) technology was created with the aim of providing binocular observers of monitors, or matrix display devices, with auto-stereoscopic images of 3D scenes. However, over the last few years the inventiveness of researchers has led to many other interesting applications of integral imaging. Examples are the application of InI to object recognition, the mapping of 3D polarization distributions, and the elimination of occluding signals. One of the most interesting applications of integral imaging is the production of views focused at different depths of the 3D scene. This application is the natural result of the ability of InI to create focal stacks from a single input image. In this contribution we present a new algorithm for this optical slicing application and show that 3D reconstruction with improved lateral resolution is possible.
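A common way to compute such focal slices from a synthetic-aperture set of elemental images is shift-and-sum back-projection. The sketch below illustrates that generic operation, not the authors' new algorithm, under a pinhole-camera assumption; the parameter names are hypothetical, and np.roll wraps at the borders where a real implementation would crop.

```python
import numpy as np

def refocus(elemental_images, positions, depth, focal_length, pixel_pitch):
    """Computational slicing by shift-and-sum of laterally displaced elemental images.

    elemental_images : list of 2D arrays captured from shifted camera positions
    positions        : list of (x, y) camera positions [m]
    depth            : reconstruction depth [m]
    """
    h, w = elemental_images[0].shape
    acc = np.zeros((h, w))
    magnification = focal_length / depth       # image-plane shift per unit of camera translation
    for img, (px, py) in zip(elemental_images, positions):
        # Pixel shift that registers objects at 'depth' across the synthetic aperture
        sx = int(round(px * magnification / pixel_pitch))
        sy = int(round(py * magnification / pixel_pitch))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(elemental_images)          # in-focus objects add coherently, others blur
```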
Three-dimensional (3D) visualization and recognition using truncated photon counting model and integral imaging
Inkyu Moon
In this paper, a statistical approach for three-dimensional (3D) visualization and recognition of photon-starved events based on a parametric estimator is overviewed. A truncated Poisson probability density function is considered for modeling the distribution of observations with few photon counts. For 3D visualization and recognition of photon-starved events, integral imaging, a maximum likelihood estimator (MLE), and statistical inference algorithms are employed. Experiments show that the parametric MLE using a truncated Poisson model for estimating the average number of photons in each voxel of a 3D object has a smaller estimation error than the MLE using a Poisson model, and that 3D recognition performance for photon-starved events can be enhanced by the presented method.
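One plausible reading of a truncated Poisson model, assumed here only for illustration, is that each photon-counting pixel reports whether at least one photon arrived, in which case the MLE of the mean photon number has the closed form lambda = -ln(1 - p). The snippet below shows that estimator on simulated data; it is not necessarily the exact model or estimator used in the paper.

```python
import numpy as np

def mle_mean_photons_binary(events):
    """MLE of the mean photon number per pixel when the detector reports only
    photon / no-photon: P(event) = 1 - exp(-lam), so lam_hat = -ln(1 - mean(events))."""
    p = np.clip(np.mean(events), 1e-12, 1.0 - 1e-12)
    return -np.log1p(-p)

# Toy check: photon-starved pixels with a true mean of 0.3 photons per pixel
rng = np.random.default_rng(0)
events = (rng.poisson(0.3, size=100_000) > 0).astype(float)
print(mle_mean_photons_binary(events))   # should be close to 0.3
```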
Extension of depth of field using amplitude modulation of the pupil function for bio-imaging
In this paper we present a novel approach to generate images with extended depth of field (DOF) without compromising lateral resolution, to support the realization of three-dimensional imaging systems such as integral imaging. In extending the DOF, we take advantage of the spatial frequency spectrum of the object specific to the task at hand. The pupil function is engineered so that the modulation transfer function (MTF) is maximized only at these selected spatial frequencies. We extract these high-energy spatial frequencies using the PCA method. The advantage of our approach is illustrated with an amplitude-modulation example and a phase-modulation example. In these examples, we split the pupil filter and choose the optimum transmission/phase value of each section so that the response of the system over the whole DOF range, as well as over the spatial frequencies of interest, is optimized. Consequently, we optimize the DOF extension while blocking the minimum possible area in the pupil plane. This maximizes the output image quality (e.g., 10% DOF improvement) compared with existing methods, where non-optimal blocking of the lens area may cause more degradation in output image quality. Experimental results are presented to illustrate the proposed approach.
Axially distributed 3D imaging and reconstruction
Three-dimensional (3D) imaging systems are being researched extensively for purposes of sensing and visualization in fields as diverse as defense, medical imaging, art, and entertainment. An overview of a multi-view imaging system in an axially distributed sensing architecture for three-dimensional imaging is presented. In this configuration, the sensor moves along its optical axis and collects 2D imagery, which can be computationally reconstructed at arbitrary depths in the object space. When compared to traditional 2D imaging techniques, 3D imaging offers advantages in ranging, robustness to scene occlusion, and target recognition performance. The proposed imaging system differs from conventional multi-view imaging systems, such as integral imaging, in that the collection of 3D information is not uniform across the field of view and, in many cases, the inherent linear motion of the platform can be exploited for 3D image acquisition. The system parameters are analyzed and experimental results are presented.
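Reconstruction at an arbitrary depth in this axially distributed geometry amounts to rescaling each captured image by its relative magnification for that depth and averaging. The sketch below illustrates that idea under a simple pinhole-model assumption, with the object plane beyond every camera position; the camera-position convention and the choice of the first image as reference are hypothetical, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import zoom

def fit_to(img, h, w):
    """Centre-crop or zero-pad img to shape (h, w)."""
    out = np.zeros((h, w))
    sh, sw = img.shape
    ys, yd = max((sh - h) // 2, 0), max((h - sh) // 2, 0)
    xs, xd = max((sw - w) // 2, 0), max((w - sw) // 2, 0)
    ch, cw = min(sh, h), min(sw, w)
    out[yd:yd + ch, xd:xd + cw] = img[ys:ys + ch, xs:xs + cw]
    return out

def adi_reconstruct(images, camera_positions, depth):
    """Average the images after rescaling each by its relative magnification for 'depth'."""
    h, w = images[0].shape
    acc = np.zeros((h, w))
    ref = camera_positions[0]
    for img, z_cam in zip(images, camera_positions):
        m = (depth - ref) / (depth - z_cam)    # relative magnification w.r.t. the first camera
        acc += fit_to(zoom(img, m, order=1), h, w)
    return acc / len(images)
```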
3D Visualization and Processing II
Compressive light field imaging
Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four dimensional (4D) light field scalar function onto a two dimensional sensor and therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatioangular resolution trade-off and allow high-resolution capture of the (4D) light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that, compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
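The principal-component measurement idea can be illustrated in a few lines: learn an orthonormal PC basis from training light-field patches, acquire only K inner-product measurements against the leading components, and reconstruct by projecting back. The toy example below uses random stand-in data and is only meant to show the linear algebra, not the paper's imager or its photon-efficiency analysis.

```python
import numpy as np

# Hypothetical training set: 500 light-field patches, each with 8x8 angular samples
rng = np.random.default_rng(1)
train = rng.standard_normal((500, 64))

# Principal-component measurement basis learned from the training data
train_centered = train - train.mean(axis=0)
_, _, Vt = np.linalg.svd(train_centered, full_matrices=False)
K = 16                       # only 16 programmable-mask measurements instead of 64
basis = Vt[:K]               # rows are the measurement patterns

# Acquire: each measurement is the inner product of the scene patch with one pattern
scene = rng.standard_normal(64)
measurements = basis @ scene

# Reconstruct: since the basis rows are orthonormal, the transpose gives the least-squares estimate
estimate = basis.T @ measurements
print(np.linalg.norm(scene - estimate) / np.linalg.norm(scene))
```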
Three-dimensional reconstruction of absorbed data in thin photonic data storage media
We have been investigating a new type of optical data storage media using three-dimensional diffused object. The data is stored as three-dimensional absorbers in a highly scattering medium. The scattering medium can protect the absorbers because it blurs the light distribution. To recover the absorption distribution, the scattering coefficient distribution of the medium is required. We present an algorithm to recover the 3D absorption distribution to decrease the calculation time. Numerical evaluation of the proposed algorithm and the storage capacity are discussed.
Geometric analysis on stereoscopic images captured by single high-definition television camera on lunar orbiter Kaguya (SELENE)
Masato Miura, Jun Arai, Junichi Yamazaki, et al.
We present a method for generating stereoscopic images from moving pictures captured by a single high-definition television camera mounted on the Japanese lunar orbiter Kaguya (Selenological and Engineering Explorer, SELENE). Since objects in the moving pictures appear to move vertically, vertical disparity is produced by the time offset of the sequence. This vertical disparity is converted into horizontal disparity by rotating the images by 90 degrees. We can then create stereoscopic images using the rotated images as the images for the left and right eyes. However, this causes spatial distortion resulting from the axi-asymmetrical positions of the corresponding left and right cameras. We reduced this distortion by adding a depth map obtained by assuming that the lunar surface is spherical. We confirmed that we could provide more acceptable views of the Moon by using the correction method.
3D Visualization and Processing III
3D display and image processing system for metal bellows welding
Industrial welded metal bellows take the shape of a flexible pipeline. The most common form of bellows is a pair of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc-welding operation can cause dangerous accidents and bad smells. Furthermore, during welding, workers have to observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding-rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope makes workers tired. To improve productivity and a working environment in which workers sit in an uncomfortable position, we introduced a 3D display and image processing. The main purpose of the system is not only to maximize the efficiency and accuracy of industrial production but also to maintain safety standards through full automation of the work by remote control.
Three-dimensional passive millimeter-wave imaging and depth estimation
Seokwon Yeom, Dong-Su Lee, Hyoung Lee, et al.
We address three-dimensional passive millimeter-wave (MMW) imaging and depth estimation for remote objects. MMW imaging is very useful in harsh environments such as fog, smoke, snow, sandstorms, and drizzle. Its ability to penetrate clothing provides a great advantage to security and defense systems. In this paper, a feature-based passive MMW stereo-matching process is proposed to estimate the distance of a concealed object under clothing. It will be shown that the proposed method can estimate the distance of the concealed object.
Quantum dot dispersions in aerogels: a new material for true volumetric color displays
True volumetric displays project a 3D image within a cube viewable from most of its sides, thus providing the ultimate physiological depth cues for countless applications. Ultra-light and highly transparent aerogels may provide the best optical medium for these displays, as they can easily be fabricated in the form of a large-volume, low-scattering bulk material. On the other hand, semiconductor nanocrystals (quantum dots, QDs) are a remarkable fluorescent material with optical properties superior to those of conventional materials. QDs dispersed in aerogels hold promise to become the most efficient display material for volumetric 3D displays. The true volumetric displays described in the literature are built around the concept of two beams exciting the fluorescent material at their intersection. However, the optical properties of QDs are quite different from those of the fluorescent materials proposed for intersecting-beam displays, and it may not be feasible to build such displays using QDs. Instead, we propose the use of a single focused infrared laser beam to excite a nanostructured material for volumetric color displays consisting of QDs dispersed in a transparent silica aerogel matrix. The theory and modeling results proving the feasibility of this approach are presented.
CSI Helsinki: comparing three-dimensional imaging of diagonal cutter toolmarks using confocal microscopy and SWLI
I. Kassamakov, C. Barbeau, S. Lehto, et al.
Cutting tools leave characteristic marks that can connect a set of toolmarks to an individual tool. When the depth resolution of an optical microscope is insufficient, more advanced three-dimensional (3D) imaging methods such as Scanning White Light Interferometry (SWLI) and confocal microscopy are required. We cut ten copper wires (2.1±0.1 mm diameter) maintaining a predefined blade orientation and position using diagonal cutting pliers. Images of the sample surfaces were created using equipment based on optical microscopy, SWLI and confocal microscopy. SWLI and confocal microscopy set-ups can produce consistent high-resolution 3D images that are relevant for forensic toolmark comparison.
Poster Session: Three-Dimensional Imaging, Visualization, and Display 2010
Human factor study on the crosstalk of multiview autostereoscopic displays
Jinn-Cherng Yang, Kuo-Chung Huang, Chou-Lin Wu, et al.
Stereoscopic depth perception has been analyzed in many laboratory experiments since Wheatstone's (1838) discovery that disparity is a sufficient and compelling stimulus for the perception of depth with mirror-type stereo displays. In this paper, a mirror-type stereo display was used as the instrument to simulate 3D images in a human-factors experiment. It can simulate a 9-view 3D display by image-processing methods with different multi-view crosstalk levels measured with a luminance measurement device. The disparity of the multi-view images that forms stereopsis with depth perception is determined by the 9-view autostereoscopic 3D display, so that subjects can properly fuse the images and perceive the correct visual depth. A computer-graphics method was applied for multi-view content rendering, with a shooting distance of 70 cm for each virtual camera. The distance between cameras was 5.6 cm, with parallel capture to simulate the images received by human eyes. The experiment was designed for subjective evaluation based on a questionnaire, and ANOVA methods were used for the analysis. The experimental variables of this human-factors study for a multi-view 3D display are five levels of crosstalk distribution from measured data, with or without shadow effects and perspective lines shown within the tested images. In addition, the acceptable system crosstalk level for a multi-view stereoscopic display is between Level 4.7 and Level 5.9 on average for the four tested images.
Moiré pattern reduction by using special designed parallax barrier in an autostereoscopic display
Wei-Ting Yen, Chi-Lin Wu, Chou-Lin Wu, et al.
The moiré pattern is caused by spatial interference between two regular pattern structures. In an autostereoscopic display, it is caused by the overlap of the parallax barrier and the black matrix between pixels of the FPD. To minimize the moiré effect, we simulated the relationship between the brightness distribution and various design parameters of the parallax barrier. According to the simulation results, a combination of multiple parameters was chosen to obtain a moiré-free autostereoscopic display, based on the concept of mutual compensation among the design parameters. After the detailed simulation, experiments on the final design were performed to verify the performance of the display.
Comparing numerical error and visual quality in reconstructions from compressed digital holograms
Taina M. Lehtimäki, Kirsti Sääskilahti, Tomi Pitkäaho, et al.
Digital holography is a well-known technique for both sensing and displaying real-world three-dimensional objects. Compression of digital holograms has been studied extensively, and the errors introduced by lossy compression are routinely evaluated in a reconstruction domain. Mean-square error predominates in the evaluation of reconstruction quality. However, it is not known how well this metric corresponds to what a viewer would regard as perceived error, nor how consistently it functions across different holograms and different viewers. In this study, we evaluate how each of seventeen viewers compared the visual quality of compressed and uncompressed holograms' reconstructions. Holograms from five different three-dimensional objects were used in the study, captured using a phase-shift digital holography setup. We applied two different lossy compression techniques to the complex-valued hologram pixels: uniform quantization, and removal and quantization of the Fourier coefficients, and used seven different compression levels with each.
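Of the two lossy schemes mentioned, uniform quantization of the complex-valued hologram pixels is easy to sketch, together with a mean-square error computed in a reconstruction domain rather than on the hologram itself. The example below uses a random stand-in hologram, a simple Fourier "reconstruction", and arbitrary bit depths; it only illustrates the kind of numerical-error measurement being compared against perceived quality, not the authors' exact pipeline.

```python
import numpy as np

def quantize(values, bits):
    """Uniform scalar quantization of an array to 2**bits levels over its own range."""
    levels = 2 ** bits
    lo, hi = values.min(), values.max()
    step = (hi - lo) / (levels - 1)
    return np.round((values - lo) / step) * step + lo

def uniform_quantize_complex(hologram, bits):
    """Quantize the real and imaginary parts of a complex hologram independently."""
    return quantize(hologram.real, bits) + 1j * quantize(hologram.imag, bits)

def reconstruction_mse(h_a, h_b):
    """Mean-square error between intensities in a Fourier 'reconstruction' domain."""
    rec_a = np.abs(np.fft.fft2(h_a)) ** 2
    rec_b = np.abs(np.fft.fft2(h_b)) ** 2
    return np.mean((rec_a - rec_b) ** 2)

rng = np.random.default_rng(0)
holo = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
for bits in (2, 4, 6, 8):
    print(bits, reconstruction_mse(holo, uniform_quantize_complex(holo, bits)))
```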
Novel approach to estimate fringe order in Moire profilometry
A novel approach to estimate the fringe order in moiré topography is proposed. Along with the light source used to create the shadow of the grating on the object (as in conventional moiré), the proposed method uses a second light source that illuminates the object with color bands from the side. The width of each colored band is set to match the height that leads to a 2π phase shift in the moiré fringes. This allows one to rule the object with colored bands, which can be used to estimate the fringe order with a color camera of relatively low spatial resolution without any compromise in height sensitivity. The current proposal makes it possible to extract the 3D profile of objects with surface discontinuities. It also deals with the possible use of moiré topography (when combined with the proposed method) in extracting the 3D surface profile of many objects with height discontinuities from a single 2D image. The present article deals with the theory and simulations of this novel side-illumination-based approach.
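The height decoding implied here is simply the integer fringe order (read from the colored band) plus the fractional moiré phase, scaled by the height corresponding to one 2π fringe. A one-line illustration with hypothetical numbers follows.

```python
import math

def height_from_order_and_phase(order, phase, height_per_fringe):
    """Absolute height from the integer fringe order (colored band index) plus the
    fractional moiré phase (radians), scaled by the height of one 2*pi fringe."""
    return (order + phase / (2.0 * math.pi)) * height_per_fringe

# Band index 3, measured moiré phase pi/2, 0.5 mm of height per fringe -> 1.625 mm
print(height_from_order_and_phase(3, math.pi / 2, 0.5))
```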
Military Display Systems and Applications I
An examination of OLED display application to military equipment
J. Thomas, S. Lorimer
OLED display technology has developed sufficiently to support small format commercial applications such as cell-phone main display functions. Revenues seem sufficient to finance both performance improvements and to develop new applications. The situation signifies the possibility that OLED technology is on the threshold of credibility for military applications. This paper will examine both performance and some possible applications for the military ground mobile environment, identifying the advantages and disadvantages of this promising new technology.
Panoramic cockpit displays for tactical military cockpits
The F-35 Joint Strike Fighter (JSF) incorporates the latest technology for aerial warfighting. To support this aircraft's mission and to provide the pilot with the increased situational awareness needed in today's battlespace, a panoramic AMLCD was developed and is being deployed for the first time. This 20" by 8" display is the largest fielded to date in a tactical fighter. Key system innovations had to be employed to allow this technology to function in this demanding environment. Certain older generation aircraft are now considering incorporating a panoramic display to provide their crews with this level of increased capability. Key design issues that had to be overcome dealt with sunlight readability, vibration resistance, touchscreen operation, and reliability concerns to avoid single-point failures. A completely dual redundant system design had to be employed to ensure that the pilot would always have access to critical mission and flight data.
Development of high-performance low-reflection rugged resistive touch screens for military displays
Raymond Wang, Minshine Wang, John Thomas, et al.
Just as iPhones with sophisticated touch interfaces have revolutionised the human interface for the ubiquitous cell phone, the Military is rapidly adopting touch-screens as a primary interface to their computers and vehicle systems. This paper describes the development of a true military touch interface solution from an existing industrial design. We will report on successful development of 10.4" and 15.4" high performance rugged resistive touch panels using IAD sputter coating. Low reflectance (specular < 1% and diffuse < 0.07%) was achieved with high impact, dust, and chemical resistant surface finishes. These touch panels were qualified over a wide operational temperature range, -51°C to +80°C specifically for military and rugged industrial applications.
Cost of ownership for military cargo aircraft using a common versus disparate display configuration
Daniel D. Desjardins, Marvin C. Most
A 2009 paper considered possibilities for applying a common display suite to various front-line bubble canopy fighters, whereas further research suggests the cost savings, post Milestone C production/deployment, might not be advantageous. The situation for military cargo and tanker aircraft may offer a different paradigm. The primary objective of Defense acquisition is to acquire quality products that satisfy user needs with measurable improvements to mission capability and operational support, in a timely manner, and at a fair and reasonable price. DODD 5000.01 specifies that all participants in the acquisition system shall recognize the reality of fiscal constraints, viewing cost as an independent variable. DoD Components must therefore plan programs based on realistic projections of the dollars and manpower likely to be available in future years and also identify the total costs of ownership, as well as the major drivers of total ownership costs. In theory, therefore, this has already been done for existing cargo/tanker aircraft programs accommodating independent, disparate display suites. This paper goes beyond that stage by exploring total costs of ownership for a hypothetical common approach to cargo/tanker display avionics, bounded by looking at a limited number of such aircraft, e.g., C-5, C-17, C-130H (variants), and C-130J. It is the purpose of this paper to reveal whether there are total cost of ownership advantages for a common approach over and above the existing disparate approach. Aside from cost issues, other considerations, i.e., availability and supportability, may also be analyzed.
Military Display Systems and Applications II
Dual redundant display in bubble canopy applications
Ken Mahdi, James Niemczyk
Today's cockpit integrator, whether for a state-of-the-art military fast jet or piston-powered general aviation, is striving to utilize all available panel space for AMLCD-based displays to enhance situational awareness and increase safety. The benefits of a glass cockpit have been well studied and documented. The technology used to create these glass cockpits, however, is driven by commercial AMLCD demand, which far outstrips the combined worldwide avionics requirements. In order to satisfy the wide variety of human factors and environmental requirements, large-area displays have been developed to maximize the usable display area while also providing necessary redundancy in case of failure. The AMLCD has been optimized for extremely wide viewing angles, driven by the flat-panel TV market. In some cockpit applications, wide viewing cones are desired. In bubble canopy cockpits, however, narrow viewing cones are desired to reduce canopy reflections. American Panel Corporation has developed AMLCD displays that maximize viewing area and provide redundancy while also offering a very narrow viewing cone, even though commercial AMLCD technology suitable for high-performance displays is employed. This paper investigates both the large-area display architecture, with several available options to provide redundancy, and beam-steering techniques to limit canopy reflections.
Command and control displays for space vehicle operations
This paper shall examine several command and control facility display architectures supporting space vehicle operations, to include TacSat 2, TacSat 3, STPSat 2, and Communications Navigation Outage Forecasting System (CNOFS), located within the Research Development Test & Evaluation Support Complex (RSC) Satellite Operations Center 97 (SOC-97) at Kirtland Air Force Base. A principal focus is to provide an understanding for the general design class of displays currently supporting space vehicle command and control, e.g., custom, commercial-off-the-shelf, or ruggedized commercial-off-the-shelf, and more specifically, what manner of display performance capabilities, e.g., active area, resolution, luminance, contrast ratio, frame/refresh rate, temperature range, shock/vibration, etc., are needed for particular aspects of space vehicle command and control. Another focus shall be to address the types of command and control functions performed for each of these systems, to include how operators interact with the displays, e.g., joystick, trackball, keyboard/mouse, as well as the kinds of information needed or displayed for each function. [Comparison with other known command and control facilities, such as Cheyenne Mountain and NORAD Operations Center, shall be made.] Future, anticipated display systems shall be discussed.
Stereoscopic Displays for Training and Operations
High-definition 3D display for training applications
In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.
3D display for enhanced tele-operation and other applications
In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
On-demand stereoscopic 3D displays for avionic and military applications
Kalluri Sarma, Kanghua Lu, Brent Larson, et al.
High speed AM LCD flat panels are evaluated for use in Field Sequential Stereoscopic (FSS) 3D displays for military and avionic applications. A 120 Hz AM LCD is used in field-sequential mode for constructing eyewear-based as well as autostereoscopic 3D display demonstrators for test and evaluation. The COTS eyewear-based system uses shutter glasses to control left-eye/right-eye images. The autostereoscopic system uses a custom backlight to generate illuminating pupils for left and right eyes. It is driven in synchronization with the images on the LCD. Both displays provide 3D effect in full-color and full-resolution in the AM LCD flat panel. We have realized luminance greater than 200 fL in 3D mode with the autostereoscopic system for sunlight readability. The characterization results and performance attributes of both systems are described.
Head and Body-Worn Displays
Wearable computer technology for dismounted applications
Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.
Color tunable photonic textiles for wearable display applications
I. Sayed, J. Berzowska, M. Skorobogatiy
The integration of optical functionalities such as light emission, processing, and collection into flexible woven fabric matrices has attracted a lot of attention in the last few years. Photonic textiles frequently involve optical fibers, as they can easily be processed together with the supporting fabric fibers. This technology finds uses in various fields of application such as interactive clothing, signage, wearable health-monitoring sensors, and mechanical strain and deformation detectors. Recent developments in the field of photonic band gap (PBG) optical fibers could potentially lead to novel photonic textile applications and techniques. In particular, the plastic PBG Bragg fibers fabricated in our group have strong potential in the field of photonic textiles, as they offer many advantages over standard silica fibers at the same low cost. Among the many unusual properties of PBG textiles, we mention that they are highly reflective, that they are colored without using any colorants, that they can change their color by controlling the relative intensities of guided and reflected light, and finally, that they can change their colors when stretched. Some of the many experimental realizations of photonic bandgap fiber textiles and their potential applications in wearable displays are discussed.
Visor projected HMD for fast jets using a holographic video projector
Jonathan P. Freeman, Timothy D. Wilkinson, Paul Wisely
With the advent of faster computers, higher-resolution LC displays, and cheap lasers, there has been a surge of interest in building video projection systems in which a computer-generated hologram (CGH) is calculated from the video image and displayed on an LC display (used as a phase device). A laser then reconstructs the video image and projects it. A major advantage of this type of projection system is that the LC display can have a substantial number of dead pixels without causing a misinterpretation of the information in the displayed symbology or video. In this work we not only developed an HMD using this technique but also incorporated aberration correction into the hologram to reduce lens complexity and weight. The system was designed to fit onto a conventional HGU53P helmet and project off the slightly forward visor (based on the BAE Systems Viper 1 HMD configuration). The optics, laser, and LC display all fit between the area swept by the raised visor and the helmet shell. The end result was two methods of producing a 22-degree FOV display, both capable of easily achieving 4000 fL symbology at the eye in red or green with a 75% transmissive visor. Symbology and video could be mixed, with the symbology an order of magnitude brighter than the video.
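The hologram-calculation step referred to here is commonly done with iterative Fourier algorithms such as Gerchberg-Saxton. The sketch below shows that generic approach for a phase-only device; it is not necessarily the method used in this work, and it omits the aberration-correction term folded into the hologram.

```python
import numpy as np

def gerchberg_saxton_cgh(target_image, iterations=30):
    """Phase-only CGH whose Fourier-plane reconstruction approximates target_image."""
    target_amp = np.sqrt(target_image / target_image.max())
    rng = np.random.default_rng(0)
    far_phase = rng.uniform(0.0, 2.0 * np.pi, target_image.shape)
    hologram_phase = np.zeros_like(far_phase)
    for _ in range(iterations):
        far = target_amp * np.exp(1j * far_phase)            # impose the target amplitude in the far field
        near = np.fft.ifft2(np.fft.ifftshift(far))           # back-propagate to the hologram plane
        hologram_phase = np.angle(near)                       # keep only the phase (phase-only LC device)
        far = np.fft.fftshift(np.fft.fft2(np.exp(1j * hologram_phase)))
        far_phase = np.angle(far)                             # updated far-field phase estimate
    return hologram_phase

# Toy usage: a bright square target (small offset avoids division issues for all-zero rows)
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
cgh = gerchberg_saxton_cgh(target + 1e-6)
```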
OLED microdisplay design and materials
Ihor Wacyk, Olivier Prache, Tariq Ali, et al.
AMOLED microdisplays from eMagin Corporation are finding growing acceptance within the military display market as a result of their excellent power efficiency, wide operating temperature range, small size and weight, good system flexibility, and ease of use. The latest designs have also demonstrated improved optical performance, including better uniformity, contrast, MTF, and color gamut. eMagin's largest format display is currently the SXGA design, which includes features such as a 30-bit wide RGB digital interface, automatic luminance regulation from -45 to +70°C, variable gamma control, and a dynamic range exceeding 50,000:1. This paper will highlight the benefits of eMagin's latest microdisplay designs and review the roadmap for next-generation devices. The ongoing development of reduced-size pixels and larger format displays (up to WUXGA) as well as a new OLED device architecture (e.g., high-brightness yellow) will be discussed. Approaches being explored for improved performance in next-generation designs, such as low-power serial interfaces, high frame rate operation, and new operational modes for the reduction of motion artifacts, will also be described. These developments should continue to enhance the appeal of AMOLED microdisplays for a broad spectrum of near-to-the-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming.
Near-eye displays for rugged body-worn applications
Near-eye displays are finding applications in Warfighter's fielded body-worn navigation, information and display systems. In these rugged environments, requirements differ from those displays found in aviation or in simulation and training applications. This paper will discuss the application-specific requirements for these body-worn devices as well as lessons-learned from the field.