Homogenization research of waveform sampling LiDAR point cloud data
Author(s):
Zhiwei Dong;
Liyu Sun;
Shen Tan;
Tao Xu;
Runsu Gao;
Deying Chen
The streak tube imaging LiDAR has promising application prospects owing to its full-waveform sampling capability and high sensitivity. This kind of LiDAR generates massive point cloud data with high efficiency. However, the distribution of laser foot points is usually irregular because of the scanning mode of this kind of LiDAR. This paper focuses on interpolating ideal points to achieve a uniform distribution of the point cloud, using interpolation techniques including nearest neighbor, arithmetic mean and inverse distance weighted interpolation. Specifically, we propose a new homogenization method that improves on inverse distance weighted interpolation. The suitability of these homogenization methods for point clouds generated by streak tube imaging LiDAR is tested. The results show that the nearest neighbor method better restores buildings with abrupt elevation changes, while inverse distance weighted interpolation outperforms the other selected methods on flatland. The proposed method is shown to combine the advantages of both the nearest neighbor and inverse distance weighted techniques.
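The inverse distance weighted interpolation compared above can be sketched as follows. This is a minimal textbook illustration, not the authors' improved method; the sample coordinates, the power parameter p and the neighborhood size k are assumptions for the example.

```python
import numpy as np

def idw_interpolate(points, values, query, p=2, k=4):
    """Estimate the elevation at an ideal grid node `query` from the k
    nearest laser foot points using inverse distance weighting."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]          # k nearest foot points
    d_k, v_k = d[idx], values[idx]
    if d_k[0] < 1e-12:               # query coincides with a sample
        return v_k[0]
    w = 1.0 / d_k**p                 # inverse-distance weights
    return np.sum(w * v_k) / np.sum(w)

# Interpolate an ideal grid node from four irregular foot points
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([10.0, 12.0, 11.0, 13.0])
print(idw_interpolate(pts, elev, np.array([0.5, 0.5])))  # equidistant → mean 11.5
```

Because the weights fall off with distance, IDW smooths flat terrain well but blurs abrupt elevation jumps, which matches the comparison with nearest neighbor reported above.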
The advantages and applications of three-dimensional integral imaging technology
Author(s):
Wenge Zhang;
Junfu Wang;
Xiaoyu Jiang;
Yifei Wang
Traditional visual images serve only as flat displays conveying two-dimensional (2D) information, which can hardly meet people's needs. Integral imaging (InI) is a three-dimensional (3D) display technology that combines computer graphics, image processing and other display technologies. With the continuous optimization of algorithms and the increasing precision of display devices, InI has shown unique advantages in equipment manufacturing, military science, medical care, commercial advertising and more. This article focuses on the advantages and applications of 3D InI technology.
Research progress of computational integral imaging
Author(s):
Yifei Wang;
Xiaoyu Jiang;
Hui Gao;
Junfu Wang
Integral imaging technology is one of the most promising technologies for battlefield visualization in future warfare. The reconstruction of integral imaging can be classified into optical reconstruction and computational reconstruction. Computational reconstruction is accomplished by computer simulation of the optical integral imaging reconstruction process, which can overcome many problems caused by device limitations in optical integral imaging systems and improve the image quality through digital processing. Since the reconstructed image is in digital format, it can provide data support for 3D depth extraction, 3D target recognition, 3D image processing and so on. Based on the principle of integral imaging, this paper focuses on the principle and development history of computational reconstruction, and compares the performance of two computational reconstruction methods.
3D reconstructions in coregistered photoacoustic and ultrasonic imaging using clinical ultrasonic system
Author(s):
Yongping Lin;
Yanghuan Luo;
Zhifang Li;
Jianyong Cai;
Hui Li
Photoacoustic imaging (PAI) has potential for real-time clinical application after only a minor modification of a current ultrasound (US) scanner. The shared detector platform enables a natural integration of PA and US imaging, creating a hybrid technique that combines functional and structural information. In this work, a two-blood-vessel phantom experiment was conducted with coregistered photoacoustic and ultrasonic imaging on a clinical ultrasonic system. The vessels were placed about 6 cm away from the transducer. Under conventional irradiation, real-time PA and US images could be obtained during the experiment. By transducer scanning, 450 2D PA and US images were acquired and 3D images were reconstructed. The results indicate that the system is able to acquire PA signals at large tissue depths. The 3D PA image clearly depicts the tissue structure and benefits detection in clinical applications.
Evaluation of a novel reconstruction method for synthetic aperture in-line digital holograms with seams
Author(s):
Meng Ding;
Qi Fan;
Yin Su;
Yun-fei Wang
In this paper, we propose and thoroughly evaluate a new reconstruction method for synthetic aperture on-axis digital holograms with seams. The method combines the principles of synthetic aperture and phase retrieval, and is applied to a particle field detection experiment, in which the reconstructed particles are clearly visible not only in the normal regions but also at the seams. Guided by an error analysis of the reconstruction, a method for correcting stitching errors further improves the accuracy. The proposed method can therefore effectively suppress the influence of information loss at the seams on the reconstructed image and achieve high-quality reconstruction from a seamed, stitched synthetic aperture on-axis digital hologram. It can be widely used in diagnostic domains requiring high resolution and a large field of view.
Rapid algorithm of multi-plane holographic display
Author(s):
Zhe Han;
Yan Qi;
Boxia Yan;
Yanwei Wang;
Yu Wang
A rapid algorithm for multi-plane holographic display is given. In this algorithm, a suitable thin-lens phase factor is combined with the fast Fourier transform (FFT) to move the reconstructed image of a Fourier transform hologram from infinity to a specified depth, with an imaging effect identical to Fresnel diffraction. By replacing the complicated Fresnel diffraction integral in the ping-pong iteration algorithm with this simple operation, the computational load of the multi-plane hologram is reduced and the computational speed is significantly increased. Moreover, the hologram of a 3D object consisting of two pictures at different depths was computed by this modified ping-pong algorithm, and the reconstruction of the obtained hologram was simulated in MATLAB. The results show that the new algorithm is feasible and effective.
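The core operation described above, multiplying by a thin-lens phase factor before an FFT so that the Fourier reconstruction lands at a chosen depth, can be sketched as below. This is a generic simulation in Python rather than the paper's MATLAB code; the wavelength, pixel pitch and focal distances are assumed example values.

```python
import numpy as np

def propagate_to_depth(hologram, wavelength, pitch, f):
    """Move the Fourier-transform reconstruction from infinity to the
    focal distance f by multiplying a thin-lens phase factor, then FFT."""
    n = hologram.shape[0]
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    lens = np.exp(-1j * np.pi * (X**2 + Y**2) / (wavelength * f))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * lens)))

# One random phase-only hologram reconstructed at two different depths
holo = np.exp(1j * 2 * np.pi * np.random.rand(256, 256))
img1 = propagate_to_depth(holo, 532e-9, 8e-6, 0.20)
img2 = propagate_to_depth(holo, 532e-9, 8e-6, 0.25)
```

Because each depth costs only one element-wise multiplication plus one FFT, iterating over several planes (as in the ping-pong loop) avoids evaluating a Fresnel diffraction integral per plane.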
Research on dispersion of light in integrated imaging LED display technology
Author(s):
Lijin Deng;
Yan Piao;
Yusan Yang;
Taotao Wang
Integrated imaging stereoscopic display is an imaging technology that uses a microlens array to record and display 3D spatial scene information. Research on integrated imaging LED display technology provides a new development path for the LED industry. In this paper, we use a microlens array to perform stereoscopic display experiments on LED display screens with a dot pitch of 1.25 mm. We find a light dispersion phenomenon (rainbow effect) caused by crosstalk, which is most serious when a white light source is displayed. To solve this problem, based on the relationship between the emission characteristics of the LED point light sources and the degree of crosstalk of the LED display pixels behind the lens, a barrier lens is designed that effectively eliminates the light dispersion.
Design of an autostereoscopic 3D shooting system with adjustable spacing between camera arrays
Author(s):
Hui Zhao;
Junguo Xie
Autostereoscopic 3D display technology requires 3D image acquisition, and most real-scene 3D images are captured by camera arrays. At present, however, multi-lens stereoscopic cameras on the market have fixed lens spacing, and the lens spacing of camera arrays assembled by research institutions is inconvenient to adjust, which severely limits shooting. A single-lens camera can also be mounted on a guide rail and slid while shooting, but this is only suitable for still life. In this paper, we study and present an autostereoscopic 3D shooting system based on a camera array consisting of 8 to 12 high-quality micro digital single-lens reflex cameras. Through L-type mounting plates and X-type hinged components, the position of each camera can be adjusted according to actual needs, and the spacing between cameras can be quickly adjusted to be equal. The system is easy to assemble, disassemble and operate, and can be widely used in autostereoscopic 3D image acquisition.
Effect of tube setting on image quality in industrial x-ray computed tomography
Author(s):
Zhaoying Sun;
Lei Li;
Xiaoqi Xi;
Yu Han;
Bin Yan;
Lihui Rong;
Sangsang Zhou
In X-ray computed tomography (CT), variability in tube voltage and current settings may affect image quality. Based on an industrial X-ray micro-CT scanner, this paper investigates the impact of X-ray tube settings on the image quality of the projection images as well as the reconstruction results for various voltage and current choices. Fresh corn is selected as the experimental sample in 6 different series of measurements. We set the tube current to 130 μA, 200 μA and 270 μA while keeping the tube voltage and other acquisition parameters constant, and then keep the tube current constant while setting the tube voltage to 70 kV and 100 kV, respectively. For evaluation, both the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) are calculated as image quality criteria for each set of projection and reconstructed images. The results indicate that increasing either the tube current or the tube voltage improves the SNR and CNR, with the tube voltage having the greater impact. Meanwhile, the image quality of the reconstructed images varies in step with that of the projection images. The reliability of this conclusion will be further explored experimentally using aircraft blades in CT nondestructive testing.
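The SNR and CNR criteria used above can be computed from regions of interest (ROIs) in the image. The exact definitions vary between studies; the formulas below (mean over standard deviation for SNR, mean difference over pooled noise for CNR) are one common convention and an assumption here, as are the synthetic ROI values.

```python
import numpy as np

def snr(roi):
    """SNR of a uniform region: mean intensity over its standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    """CNR between two regions, normalised by their pooled noise."""
    noise = np.sqrt(0.5 * (roi_a.var() + roi_b.var()))
    return abs(roi_a.mean() - roi_b.mean()) / noise

# Synthetic sample and background ROIs with equal noise level
rng = np.random.default_rng(0)
corn = 100 + 5 * rng.standard_normal((64, 64))   # sample region
air = 20 + 5 * rng.standard_normal((64, 64))     # background region
print(snr(corn), cnr(corn, air))
```

Raising tube current or voltage increases detected photon counts, which lowers the relative noise term in both ratios, consistent with the trend reported above.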
Three-dimensional display using multi-layer translucencies
Author(s):
Huarong Gu;
Tiancheng Sun
Three-dimensional (3D) display technology, which aims at presenting almost-realistic 3D images to the observer without any auxiliary devices, has drawn great attention from both academia and industry in recent years. The 3D display based on a multi-layer translucent structure is a new parallax-based 3D display model. Compared with the conventional parallax barrier and integral imaging, it can effectively preserve the light-energy utilization and image resolution of the system, expand the screen depth, and display a realistic virtual 3D scene. In this paper, we implement a flat 3D display using multi-layer translucencies. Based on this prototype, we further extend it to a spatial 3D display using the mirroring of a square pyramid, which allows observers to see virtual 3D objects from different directions in the air.
Calibration of 3D imaging system based on multi-line structured-light
Author(s):
Yi Zou;
Lihong Niu;
Binghua Su;
Yushun Sun;
Jinyuan Liu
This paper presents a new and efficient calibration method for a line structured-light plane and extends it to the multi-line structured-light case. The method uses only a checkerboard as the planar target, which can move freely as long as the line structured light is projected onto it. Combined with Zhang's camera calibration method, plenty of points lying in the light plane can be obtained to fit the plane. For greater efficiency, this paper proposes projecting multiple light planes with a projector. After two of the light planes are calibrated, a frame is established at their intersection, from which the poses of all the light planes are deduced. Each stripe is encoded by photographing several different stripe images to distinguish which plane it belongs to. With the pose of the plane, the 3D points in the camera coordinate system are calculated. The experimental results show that the calibration error of one light plane is reduced to less than 1%, and the calibration time is reduced by using multiple light planes.
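Once 3D points on a laser stripe have been recovered via the camera calibration, the light plane can be fitted to them by least squares. The sketch below uses an SVD-based fit, a standard choice though not necessarily the authors' exact estimator; the synthetic plane z = 2x + 3y + 1 and the noise level are assumptions for the example.

```python
import numpy as np

def fit_light_plane(points):
    """Fit the plane a*x + b*y + c*z + d = 0 (unit normal) to 3D points
    on the laser stripe: SVD of the centred cloud gives the direction
    of least variance as the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # least-variance direction
    d = -normal @ centroid
    return normal, d

# Noisy points sampled from the plane z = 2x + 3y + 1
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
z = 2 * xy[:, 0] + 3 * xy[:, 1] + 1 + 0.001 * rng.standard_normal(200)
n, d = fit_light_plane(np.column_stack([xy, z]))
```

With the plane known, a stripe pixel's 3D position follows by intersecting its camera ray with the plane, which is how the per-pixel depths are triangulated.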
Configurable cameras with MMS architecture
Author(s):
Wubin Pang;
David J. Brady
Monocentric multiscale (MMS) lens architecture provides a versatile, compact, information-efficient and low-cost way of building panoramic imagers that can be easily tailored to various application scenarios. An MMS lens consists of two parts: a concentric spherical objective with a large aperture in front, and an array of small-aperture microcameras in the rear. The front objective collects incoming light and forms a curved focal surface, which is then segmented and relayed by the secondary array optics. Since the front objective is a spherically symmetric element, the configuration of the secondary array optics determines the overall imaging space. Each microcamera can be used as a building block, offering the flexibility to compose a myriad of FoV coverages and easy re-configuration. Another merit of this modular design is that the design of the secondary optics can vary from channel to channel, so an imaging system with multiple focal lengths, multiple aperture sizes and other multi-specifications can be constructed. This varied-channel property enables sub-region adaptive imaging. Finally, if multiple MMS lenses are co-designed and used jointly, combinational functions can be accomplished. To verify these virtues of the MMS architecture, we present several design examples: a rectangular configuration and a 360-degree ring configuration demonstrating different packing choices, and a multi-focal design in which the secondary optics of different channels are modified to achieve a relatively uniform sampling rate over the targeted area.
Design and analysis of an analog signal readout circuit for SPAD
Author(s):
Xiang-Liang Jin;
Duo-Duo Zeng;
Hong-Jiao Yang;
Yang Wang;
Jun Luo
In this paper, a high-speed analog readout circuit for amplifying and processing the ultralow photocurrent output of SPADs (single-photon avalanche diodes) is presented. The four main parts of the ASIC are a low-temperature-coefficient (7.85 ppm/°C) bandgap reference circuit, a high-linearity operational amplifier with a high common-mode rejection ratio (120.6 dB), a filter circuit, and a low-delay (63 ns at 1 MHz) comparator circuit. The SPAD readout chip is fabricated in a standard 0.5 μm CMOS process with a die size of 672 μm × 780 μm. Simulation results indicate that the chip successfully amplifies and processes an 80 nA, 1 MHz photocurrent analog signal. The circuit is well suited to processing rapidly changing, faint signals in CMOS image sensor acquisition.
Liquid crystal phase modulator based on deep sub-wavelength grating structure
Author(s):
Hu Heteng;
GuiZhu Wang;
Sui Wei;
Chuan Shen;
Rong RunQin;
Bu Min
The challenge for dynamic holographic video display based on a spatial light modulator is that it requires a large space-bandwidth product. A simple approach is to reduce the size of a single pixel in a conventional liquid crystal on silicon (LCOS) device. However, as the pixel size shrinks, the thickness of the liquid crystal cell must be reduced correspondingly; otherwise the fringing field effect between pixels will disturb the modulation of normal pixels. In this paper, a deep sub-wavelength metal grating with a Fabry-Perot resonance is used in place of the top electrode of the LCOS to form a liquid crystal phase modulator. Unlike traditional LCOS, which realizes phase modulation through the birefringence of the liquid crystal in the cell, our device uses the birefringence of the liquid crystal to modulate the reflective boundary conditions of the deep sub-wavelength metal grating, which in turn controls the amount of phase modulation of the light reflected in the grating slits. TechWiz and CST Microwave Studio are used to observe the distribution of the liquid crystal directors and the electric field, to record the reflected intensity of visible light, and to check whether the device can achieve 0 to 2π phase modulation as the pixel pitch and grating structure parameters are varied. The simulation results show that the liquid crystal directors and the electric field distribution change little with pixel pitch, the phase modulation of the device is close to 2π, and it has a high reflectivity.
3D laser imaging method based on low cost 2D laser radar
Author(s):
Xiaobin Xu;
Ronghao Pei;
Minzhou Luo;
Zhiying Tan
In this paper, we propose a three-dimensional (3D) laser imaging method. The working principle of laser radar is introduced, and three scanning strategies based on a low-cost two-dimensional (2D) laser radar are proposed. The 3D point cloud images produced under the different strategies are simulated and analyzed. Combining the advantages of the three strategies, a 3D laser radar scheme with pitch and rotation functions is designed, and the coordinate transformation for displaying the 3D laser radar point cloud is established. Three-dimensional imaging experiments on real environment scenes show that the designed 3D laser radar can acquire 3D point cloud data in real time, supporting low-cost 3D laser radar for 3D reconstruction and image fusion.
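The coordinate transformation mentioned above maps each 2D scan return into 3D as the platform tilts. A minimal sketch under a simplified model is shown below: the scan plane is rotated by a pitch angle about the sensor's y axis, and mount translation offsets are ignored (the real mechanism would add them).

```python
import numpy as np

def scan_to_xyz(r, theta, pitch):
    """Convert a 2D laser radar return (range r, in-plane angle theta)
    to Cartesian coordinates when the scan plane is tilted by `pitch`
    about the sensor's y axis. A simplified model without mount offsets."""
    # point in the untilted scan plane (z = 0)
    p = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])     # rotation about the y axis
    return R @ p

pt = scan_to_xyz(2.0, np.pi / 6, np.pi / 12)
```

Sweeping `pitch` (or a rotation angle) across the scan sequence and stacking the transformed points yields the 3D point cloud that each scanning strategy produces.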
Horizontal parallax light field display using pixel mapping algorithm with high resolution printed EIA (elemental image array)
Author(s):
Yi Yang;
LiCao Li;
HanTsung Hsueh
We designed a horizontal-parallax-only light field display system that satisfies the super multi-view (SMV) condition. The pseudoscopic problem is solved by a pixel mapping algorithm. The vergence-accommodation conflict (VAC) is relieved because the reconstructed 3D objects are not located at the display screen, so the eye can accommodate on the reconstructed 3D objects rather than on the screen. The system has a field of view (FOV) of about 30 degrees and good motion parallax. Computational integral imaging (CII) simulations and experimental results confirmed our design. The system parameters are very close to those of a commercial 4K mobile phone panel, so the design can be applied to such a panel directly with only minor adjustments.
Fast 3D digital holography tomography based on dynamic compressive sensing
Author(s):
Senlin Jin;
Yuan Xu;
Chongxia Zhong;
Wei Liang;
Yan Huang;
Hejun Yao
As a high-resolution, non-destructive 3D imaging technology for internal structure, digital holographic microscopy tomography can provide advanced and safe detection technologies and research tools for the development of high-tech fields such as life sciences, clinical medicine and new materials. To reduce reconstruction time and improve reconstruction quality, compressive sensing theory is applied to holographic imaging. Compressive holography not only achieves tomographic reconstruction of objects from a small amount of holographic data, but also addresses inter-layer crosstalk and noise elimination in the tomographic reconstruction process, with particularly obvious effect. In this paper, dynamic compressive sensing theory is applied to 3D digital holographic microscopy, in contrast to the fixed sampling method used in general compressive holographic imaging. This achieves fast 3D digital holography with improved axial resolution: we obtained holographic tomography images at a sampling rate of 6.25%, doubling the axial resolution without loss of reconstructed image resolution.
RGB-D dense SLAM with keyframe-based method
Author(s):
Xingyin Fu;
Feng Zhu;
Qingxiao Wu;
Yunlei Sun
Currently, feature-based visual Simultaneous Localization and Mapping (SLAM) has reached a mature stage. Feature-based visual SLAM systems usually calculate camera poses without producing a dense surface, even when a depth camera is provided. In contrast, dense SLAM systems simultaneously output camera poses and a dense surface of the reconstructed region. In this paper, we propose a new RGB-D dense SLAM system. First, the camera pose is calculated by minimizing a combination of the reprojection error and the dense geometric error. We construct a new type of edge in g2o, adding the extra constraints built from the dense geometric error to the graph optimization. The cost function is minimized in a coarse-to-fine strategy on the GPU, which raises the system frame rate and helps convergence under large camera motion. Second, to generate dense surfaces and give users feedback on the scanned regions, we use the surfel model to fuse the RGB-D stream and generate dense surface models in real time. The surfels in the dense model are updated with an embedded deformation graph to keep them consistent with the optimized camera poses after the system performs essential graph optimization and full Bundle Adjustment (BA). Third, a better 3D model is achieved by re-merging the stream with the optimized camera poses when the user ends the reconstruction. We compare the accuracy of the generated camera trajectories and reconstructed surfaces with state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that the accuracy of the dense surfaces produced online is very close to that of the later re-fusion, and that our system produces more accurate camera trajectories than the state-of-the-art systems.
RGB-D dense mapping with feature-based method
Author(s):
Xingyin Fu;
Feng Zhu;
Qingxiao Wu;
Rongrong Lu
Simultaneous Localization and Mapping (SLAM) plays an important role in navigation and augmented reality (AR) systems. While feature-based visual SLAM has reached a mature stage, RGB-D-based dense SLAM has become popular since the birth of consumer RGB-D cameras. Unlike feature-based visual SLAM systems, RGB-D-based dense SLAM systems such as KinectFusion calculate camera poses by registering the current frame with images raycasted from the global model, and produce a dense surface by fusing the RGB-D stream. In this paper, we propose a novel reconstruction system built on ORB-SLAM2. To generate the dense surface in real time, we first propose using a truncated signed distance function (TSDF) to fuse the RGB-D frames. Because camera tracking drift is inevitable, it is unwise to represent the entire reconstruction space with a single TSDF model or to use voxel hashing to represent the entire measured surface; instead, we use the moving volume proposed in Kintinuous to represent the reconstruction region around the current frame frustum. Unlike Kintinuous, which corrects the points with an embedded deformation graph after pose graph optimization, we re-fuse the images with the optimized camera poses and produce the dense surface again after the user ends the scanning. Second, we use the reconstructed dense map to filter out outliers among the features in the sparse feature map. The depth maps of the keyframes are raycasted from the TSDF volume according to the camera pose, and the feature points in the local map are projected into the nearest keyframe. If the discrepancy between the depth value of a feature and that of the corresponding point in the depth map exceeds a threshold, the feature is considered an outlier and removed from the feature map. The discrepancy value is also combined with the feature pyramid layer to calculate the information matrix when minimizing the reprojection error. The features in the sparse map reconstructed near the produced dense surface have a large influence on camera tracking. We compare the accuracy of the produced camera trajectories as well as the 3D models with state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that our system achieves state-of-the-art results.
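The depth-discrepancy outlier test described above can be sketched in a few lines. This is a simplified illustration of the idea, not the system's implementation: the feature representation, the 5 cm threshold and the synthetic depth map are all assumptions.

```python
import numpy as np

def filter_feature_outliers(features, depth_map, thresh=0.05):
    """Keep only map features whose depth agrees with the depth map
    raycasted from the TSDF volume. `features` holds rows (u, v, z):
    the pixel the feature projects to in the keyframe and its predicted
    depth in metres. Features off by more than `thresh` are dropped."""
    u = features[:, 0].astype(int)
    v = features[:, 1].astype(int)
    rendered = depth_map[v, u]                  # raycasted depth per pixel
    keep = np.abs(features[:, 2] - rendered) <= thresh
    return features[keep]

depth = np.full((480, 640), 2.0)                # synthetic flat depth map
feats = np.array([[100.0, 50.0, 2.01],          # 1 cm off → consistent, kept
                  [200.0, 80.0, 2.50]])         # 50 cm off → outlier, removed
print(filter_feature_outliers(feats, depth))
```

In the full system the surviving discrepancy values would additionally weight the information matrix during reprojection-error minimization, so features that agree closely with the dense surface count more in tracking.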
Fusion and display processing for 2D and 3D heterogeneous data of enhanced synthetic vision system
Author(s):
Pengxiang Gao;
Guixi Liu;
Chao Zhang
The airborne enhanced synthetic vision system generates a virtual scene that reflects the real scene by comprehensively utilizing multiple types of data, including sensor imaging data, obstacle data, navigation and attitude data, scene database data, and two-dimensional and three-dimensional symbol data. Only if these data are effectively integrated and displayed can the entire system be effective and reliable. The fusion display processing in this article is based on the OpenGL rendering pipeline. The highlights include the data types and processing ideas involved in the enhanced synthetic vision system, the processing methods for some common symbols, and the processing flow for superimposing sensor images on database data.
Depth estimation based on binocular disparity and color-coded aperture fusion
Author(s):
Zhiwei Zhong;
Dianle Zhou;
Xiaoshen Wang;
Xiaotian Pan;
Xilu Shun
In this paper we present a novel binocular passive depth sensor requiring only minor hardware modifications: a color-coded aperture. It is based on an algorithm that fuses binocular disparity with color-coded aperture depth from defocus. In contrast to prior-art approaches, the sensor measures depth in detail under occlusion and blur conditions. The contributions of this paper are: (1) the introduction of a color-coded aperture binocular design, and (2) a corresponding fusion algorithm. Red and green optical filters are placed in front of the irises of the left and right cameras, respectively. To correct the blur kernel, we use the red channel of the left image and the green channel of the right image as references to perform binocular stereo matching at the same blur level. Furthermore, we introduce a cost function that fuses binocular stereo matching with color-coded aperture depth from defocus. The proposed methods are tested under occlusion and blur conditions. The results show that the matching accuracy is improved by 20%, and that the proposed algorithm outperforms binocular stereo matching under mismatch and occlusion conditions.
A new method for violence detection based on the three dimensional scene flow
Author(s):
Wu Wang;
Yunfei Cheng;
Yuexia Liu
Violence detection from surveillance video is a challenging and attractive task. This paper introduces a new violence detection method using binocular stereo vision. We use sparse stereo matching to extract feature points from both rectified images and obtain the disparity of each point, then calculate the 3D coordinates of the points using standard 3D measurement theory. To describe the spatio-temporal properties, we extract features aligned with the trajectories that characterize depth information (three-dimensional motion vectors), appearance (histograms of oriented gradients) and motion (histograms of optical flow). To obtain discriminative features, this paper adopts a sparse coding scheme and a support vector machine (SVM) to classify each feature vector as normal or abnormal.
Phase retrieval via incremental reweighted gradient descent
Author(s):
Shichao Cheng;
Quanbing Zhang;
Feihang Hu;
Yufan Yuan
In this paper, a phase retrieval algorithm based on the Incremental Truncated Wirtinger Flow (ITWF) and reweighted gradient descent, called Incremental Reweighted Gradient Descent (IRGD), is proposed. Like most optimization algorithms, IRGD proceeds in two steps: an initial estimation and an iterative refinement. In the iterative process, we refine the initial estimate by combining incremental updates with reweighted gradient descent. Compared with WF and other algorithms that must pass through the entire data set at each iteration, it has obvious advantages when dealing with large-scale signals. To speed up convergence and increase robustness, the reweighting attaches large weights to reliable gradients and small weights to spurious ones, and a smoothing function and a relaxation parameter are integrated into the gradient descent formula. Simulation results show that the algorithm accurately recovers the unknown signal from random Gaussian measurements with noise, and is superior to most existing algorithms in convergence speed and success rate under the same conditions.
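The refinement stage above builds on the plain Wirtinger-flow gradient for intensity measurements y_i = |⟨a_i, x⟩|². The sketch below shows that baseline update only; the incremental sampling, truncation and reweighting that define IRGD are omitted, and the measurement sizes are assumed example values.

```python
import numpy as np

def wf_gradient_step(x, A, y, mu):
    """One plain Wirtinger-flow gradient step for the intensity
    measurements y_i = |<a_i, x>|^2. This is the baseline update that
    IRGD refines; truncation and reweighting terms are not included."""
    z = A @ x
    grad = A.conj().T @ ((np.abs(z) ** 2 - y) * z) / len(y)
    return x - mu * grad

# At the true signal the residuals vanish, so the step is a fixed point
rng = np.random.default_rng(2)
A = rng.standard_normal((64, 8))          # random Gaussian measurements
x_true = rng.standard_normal(8)
y = np.abs(A @ x_true) ** 2
print(np.allclose(wf_gradient_step(x_true, A, y, 0.1), x_true))
```

Reweighting would multiply each residual term by a per-measurement weight before the sum, so unreliable (spurious) gradients contribute less to the step.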
Design of LED array light collecting system based on circular aperture mirrors
Author(s):
Ye Wang;
ChunShui Wang;
Fang Li
Currently, LEDs are more and more widely applied as light sources. This paper presents an optical system design for an LED source array based on a Fresnel lens array on a circular disc and a pair of circular aperture mirrors that combine the LED light, concentrating it into a small, high-luminance area and thus forming a high-power source system. The design also alleviates the heat dissipation problem of LEDs, produces a uniform beam array, and generates no color aberration. Such a light source optical system can be used in LED projectors, high-power shooting lamps, photography lighting, and many optical instruments requiring a super-power point light source. Optical design software is used to optimize the design and calculate the optical parameters of the circular aperture and spherical mirrors; based on beam analysis, an ideal general optical system structure for an LED light source array is realized, which can also be used to design other array light sources.
Full parallax synthetic hologram based on SRTM elevation terrain data
Author(s):
Rui Hou;
Jia Yu;
Buyu Guo;
Huiping Liu;
Wenbin Xu
Three-dimensional terrain data have a wide range of applications in urban planning, environmental monitoring, disaster prediction, game entertainment and many other fields. With the development of science and technology and progress in cognitive psychology, display technology has improved to meet human demands and has promoted the rapid development of three-dimensional display technology. Nowadays, the visualization of three-dimensional terrain is a research hotspot in related fields. Through terrain data modeling, simulation and three-dimensional display, a variety of techniques have been used to display three-dimensional terrain. Three-dimensional display presents three-dimensional images through a two-dimensional medium, visually conveying different viewing angles and the depth information of an object and bringing a realism close to that of the objective world. The effect closely matches people's visual and cognitive habits, making it accepted as an ideal true three-dimensional display technology. How to visualize 3D terrain data without special glasses or reliance on electronic display devices is the research topic of this paper. We present a method combining Fourier holography with a full-parallax holographic stereogram to display three-dimensional terrain as a full-parallax hologram. First, the SRTM terrain (elevation) data are processed and a three-dimensional digital model of the terrain is established, which is then sampled according to the principle of the holographic stereogram to obtain a two-dimensional image array containing parallax information. Then, according to the holographic diffraction formula, the two-dimensional image array is transformed to meet the requirements of the Fourier holographic transform.
Finally, the holographic recording optical setup is designed and the following procedure is carried out: the laser beam is divided into an object beam and a reference beam; a Fourier transform is applied in the processing of the object beam; the interference between the reference beam and the Fourier-transformed image is recorded at the focal point of the objective lens; a microscope objective with large NA is used to achieve a sufficient viewing angle; the images to be recorded are displayed one at a time on an LCOS light modulator; and, positioned by the precision platform, every unit of the hologram is recorded automatically, precisely and efficiently. After post-processing, a three-dimensional topographic hologram with a large view field and full parallax is obtained. The method presented in this paper realizes the efficient automatic production of three-dimensional topographic holograms based on SRTM terrain data. Under white-light illumination, the large-view-field, full-parallax three-dimensional hologram displays the elevation data of the terrain intuitively, accurately, clearly and delicately, which is of great significance for research on high-quality, large-field holographic three-dimensional display and has practical application value in landform surveying, commodity exhibition, anti-counterfeiting, advertising and so on.
A large fisheye conversion lens for projectors
Author(s):
Ye Wang;
ChunShui Wang;
Fang Li
Show Abstract
This paper presents the design of a large fisheye conversion lens for digital projectors. The lens serves many applications where users simply want to project a very wide angle onto the screen, such as simulation, immersive environments, and amusement parks. It can be applied to most current commercial projectors, whose throw ratios reach 0.6 to 0.8:1, and its optical aperture is larger than 220 mm in diameter.
Imaging through turbid medium using a new iterative phase retrieval algorithm
Author(s):
Jiahuan Li;
Chenfei Jin;
Siqi Zhang;
Mingwei Huang;
Yuan Zhao
Show Abstract
Direct detection imaging in complex, inhomogeneous media is a difficult challenge due to the existence of multiple scattering. One way to extract the information of the object from the speckle pattern is speckle correlation based on the memory effect, in which the object is recovered by an iterative phase retrieval algorithm. Here we report a new iterative phase retrieval algorithm, referred to as the absolute output (AO) Gerchberg-Saxton algorithm, which can reconstruct the object image from a single shot at ultra-fast speed. Unlike the error reduction (ER) algorithm and the hybrid input-output (HIO) algorithm, this algorithm does not need to satisfy non-negativity constraints. We experimentally demonstrate that reconstruction with our algorithm is faster, more reliable, and more consistent. Our method has strong anti-interference ability and has great potential for imaging through turbid media such as fog and biological tissue.
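The abstract does not give the update rule, but one plausible reading of an "absolute output" Gerchberg-Saxton variant can be sketched as follows: in the Fourier domain the measured magnitude is enforced each iteration, and in the object domain the modulus of the inverse transform is taken as the next estimate, so no separate non-negativity projection (as in ER or HIO) is applied. The update rule, iteration count, and test object below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ao_gs(fourier_mag, n_iter=300, seed=0):
    """'Absolute output' GS sketch: magnitude constraint in Fourier space,
    modulus of the inverse transform as the object-domain update."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape)                  # random real start
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))     # keep measured magnitude
        g = np.abs(np.fft.ifft2(G))                    # "absolute output" step
    return g

# Synthetic test: a rectangular object and its noise-free Fourier magnitude
obj = np.zeros((32, 32))
obj[12:20, 10:22] = 1.0
rec = ao_gs(np.abs(np.fft.fft2(obj)))
```

Because the modulus is taken after every inverse transform, the iterate is always a valid (non-negative, real) image, which is one way to read the claim that explicit non-negativity constraints are unnecessary.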
Cylindrical sample space imaging and material BRDF performance measurement
Author(s):
Houping Wu;
Haiyong Gan;
Guojin Feng;
Yingwei He;
Zilong Liu;
Sen Liu
Show Abstract
In the field of space exploration, it is often necessary to detect and identify a target under specific light-source illumination and at different observation angles. Because the appearance of a sample changes with the illumination wavelength and the measurement geometry, the obtained sample images are very different, and there are great differences between the apparent scale of the image and the inherent dimensions of the actual object. By using a standard reflector to calibrate the imaging system, the image or data can be corrected. In this paper, a cylindrical sample is selected as the research object; a beam of 680 nm parallel monochromatic light is used for illumination; a digital camera photographs the sample at fixed angular intervals over the observation-angle range (0-75°); and the geometric dimension of the image pixels is calibrated. Through the calibration of image pixel geometry and the calculation of image gray values, the relationship between the gray-value changes and the geometrical dimensions of the images captured under different imaging geometry conditions is studied. Under the same illumination wavelength, a bi-directional reflectance distribution function (BRDF) calibration device was used to measure the BRDF of a flat sample of the same material under multiple geometric conditions. The measured BRDFs were used to study the appearance of the target sample, and the relationship between the image of the target sample and the corresponding BRDF of the material under different geometric conditions was analyzed and compared. A series of image-processing data and methods for cylindrical samples are given in this paper, which has practical significance for the study of material properties and target imaging.
Effects of number of elemental images of lens-array based systems on angular resolution
Author(s):
Bu Min;
GuiZhu Wang;
Chuan Shen;
Sui Wei;
QuanBing Zhang;
Rong RunQin;
Hu Heteng
Show Abstract
The lens-array based imaging system is a kind of direct-view system that is currently a hot research topic. In fact, the viewing parameters of this system have been one of the focuses of research since its invention. However, most current literature discussions are based on depth of field, spatial resolution, and field of view; only a few documents or products use angular resolution, spatial resolution, and field of view as criteria. Angular resolution is one of the parameters of the human visual system. This article discusses the hardware conditions for realizing angular resolution in an LCD discrete-pixel system and its influence on 3D perception. The results show that the angular resolution is determined by the number of pixels in each sub-image, is proportional to the number of elemental images, and is limited by the LCD pixel pitch.
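The stated relationship can be illustrated with a back-of-the-envelope calculation. The geometry below (lens pitch, pixel pitch, lens-to-display gap) and the specific formulas are illustrative assumptions, not parameters from the paper: the number of distinct view directions behind one lens is set by how many LCD pixels fit under it, and the angular step is that lens's viewing cone divided among those views.

```python
import math

def angular_resolution_deg(lens_pitch_mm, pixel_pitch_mm, gap_mm):
    # Number of distinct view directions per elemental image: how many
    # display pixels sit behind one lens (limited by the LCD pixel pitch)
    views = lens_pitch_mm / pixel_pitch_mm
    # Angular extent covered by one elemental image through its lens
    fov = math.degrees(2 * math.atan(lens_pitch_mm / (2 * gap_mm)))
    # Each view occupies fov/views degrees of the viewing cone
    return fov / views

# Assumed example: 1 mm lenses, 0.1 mm pixels, 3 mm gap
step = angular_resolution_deg(1.0, 0.1, 3.0)
```

Halving the pixel pitch doubles the number of views under each lens and therefore halves the angular step, which is the sense in which the pixel pitch limits the achievable angular resolution.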
Research on 16K video stream coding and system architecture for 3D display
Author(s):
Runqin Rong;
Guizhu Wang;
Chuan Shen;
Sui Wei;
Bu Min;
Heteng Hu
Show Abstract
3D display requires a high-resolution, high-pixel-count playback system. This paper studies video playback with a single-frame resolution of 16K × 8K, whose bandwidth exceeds the current video display capability of 8K × 4K resolution. On the one hand, an appropriate system architecture needs to be built on the existing hardware. Sixteen liquid crystal displays (LCDs) are used in this paper, each with a resolution of 4K × 2K and a size of 15.6 inches, constituting a 4 × 4 video display terminal array with a total resolution of 16K × 8K. A single-layer architecture with full decoding is used: 16 channels of 4K signals are decoded on one host with an i7-6800K CPU and two NVS 810 video cards (both with 25 GB/s memory bandwidth), and the signals are delivered in parallel through 16 DP interfaces. On the other hand, under this architecture, the running load of the CPU and GPU, the bus bandwidth, and the scheduling of dynamic storage capacity impose higher requirements on the encoding and decoding of the video data. A comparative study of the MPEG series and H.26X series coding standards has been carried out in this paper, and an inter-frame-based forward prediction (BFP) method is proposed. Finally, on a Windows 7 system, using the MPEG-2 encoding standard and decoding with ffplay, smooth playback of 16K video at 15 fps is achieved, verifying the effectiveness of the proposed method. The proposed BFP method further reduces the decoding complexity on the CPU.
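The 4 × 4 panel array implies that each decoded 16K × 8K frame must be split into 16 tiles, one per LCD. A minimal sketch of that tiling step follows; the per-tile geometry (4K × 2K per panel) comes from the abstract, while the scaled-down frame size and the routing of tiles to the 16 DP outputs are stand-ins for illustration.

```python
import numpy as np

def split_into_tiles(frame, rows=4, cols=4):
    """Split a frame into a rows x cols grid of equal tiles (one per LCD)."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    return [frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)]

# A scaled-down stand-in for one 16K x 8K frame (real tiles are 4K x 2K each)
frame = np.arange(8 * 16).reshape(8, 16)
tiles = split_into_tiles(frame)
```

Because the tiles are disjoint views of the frame, the 16 decode/deliver paths can run in parallel without copying the whole frame per output.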
A new stereo matching method for RAW image data based on improved SGBM
Author(s):
Yan Liu;
Wei Huang;
Yuheng Wang
Show Abstract
Traditionally, stereo vision algorithms are performed after the color image processing pipeline, which includes demosaicing, color correction, white balance, etc. The color image processing pipeline can cause a loss of key information and introduce artifacts into the final output image, which may decrease stereo matching accuracy. Hence, we implement stereo matching on RAW data, before the color image processing pipeline, to improve the matching accuracy of a binocular stereo vision system. We propose that using RAW data in stereo matching enhances the robustness and accuracy of the binocular stereo vision algorithm. Our approach focuses on the first stage of many stereo algorithms: stereo matching. We approach the problem with an improved SGBM (Semi-Global Block Matching) algorithm. The proposed algorithm is tested on RAW image pairs captured by a stereo camera system, and the experiments indicate that the algorithm is effective.
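As a toy illustration of matching directly on RAW data (plain SAD block matching, not the paper's improved SGBM), the sketch below pulls the green plane out of an RGGB Bayer mosaic and matches on it before any demosaicing or color processing. The mosaic layout, block size, and disparity range are all assumptions for the example.

```python
import numpy as np

def bayer_green(raw):
    """Average the two green sites of an RGGB mosaic into one half-res plane."""
    return (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2].astype(float)) / 2.0

def block_match(left, right, max_disp=8, block=5):
    """Plain SAD block matching: for each left pixel, find the horizontal
    shift into the right image with the smallest absolute-difference cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic RAW pair: the right mosaic is the left shifted by 6 raw columns,
# i.e. 3 columns in the half-resolution green plane
rng = np.random.default_rng(1)
left_raw = rng.random((80, 120))
right_raw = np.zeros_like(left_raw)
right_raw[:, :-6] = left_raw[:, 6:]
disp = block_match(bayer_green(left_raw), bayer_green(right_raw))
```

Matching on a single Bayer color plane avoids the demosaicing artifacts the abstract warns about; SGBM additionally aggregates these per-pixel costs along multiple scanline directions, which this sketch omits.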