Innovative microchannel plate with reformulation of composition and modification of microstructure
Author(s):
Jingsheng Pan;
Jingwen Lv;
S. A. Kesaev;
Shulin Liu;
Zhanying Liu;
Junguo Li;
Xiaoqin Chong;
Detan Shu
Abstract:
The signal-to-noise ratio (SNR) and mean time to failure (MTTF) are two important attributes describing the performance and operating life of an image intensifier. The ion barrier film (IBF) in Gen III image intensifiers, used to suppress ion feedback from the MCP, dramatically improves the MTTF but significantly reduces the SNR. More complete elimination of the ion poisoning sources within the MCP channels is therefore crucial for an improved Gen III image intensifier, allowing the IBF to be thinned and these two conflicting attributes to be improved simultaneously. This research was originally initiated to develop an MCP with a glass composition redesigned specifically for GaAs photocathode image intensifiers; that MCP proved able to withstand exceedingly intense electron-bombardment degassing without suffering fatal gain degradation, and it significantly improved the SNR of the Gen III image intensifier while falling only slightly short of the lifetime target. Our work therefore moved on to restricting the formation of ion poisoning sources within the MCP substrate itself: we reformulated the MCP glass composition and modified the microstructure of the glass substrate through a glass-crystal phase transition during the heating steps of MCP fabrication. We present an innovative MCP based on a glass-ceramic substrate, with a reformulated composition and a closely linked network microstructure containing many nanometer-sized crystal grains, which gives the MCP sustainably high gain, lower ion feedback, and less outgassing. These glass-ceramic MCPs were assembled into Gen III image intensifiers, and the results show improvements in both the MTTF and the SNR.
Research of new-style ultraviolet push-broom imaging technology
Author(s):
Da-yi Yin;
Xin Feng;
Yan Zhang;
Xiang-yang Li;
Xiao-xian Huang;
Bao-li Liu;
Qi Feng
Abstract:
Using ultraviolet (UV) radiation to image objects has become a promising direction for remote sensing. On Earth, the atmospheric window for UV radiation is the band from 280 nm to 400 nm, and UV imaging must be performed within this window. Previously, UV radiation was usually detected with silicon-based devices or photomultiplier tubes as the key detectors, but these devices are also sensitive to other bands, such as the visible or short-wave infrared, so the overall optical efficiency of the system was low. At the same time, it was difficult to balance the Signal-to-Noise Ratio (SNR), spatial resolution, and spectral resolution with such devices. Hence, a novel UV push-broom imaging approach for remote sensing was developed in this project. First, a new UV linear array detector was designed, based on GaN material sensitive to UV radiation from 300 nm to 370 nm, with 512 pixels and domestic intellectual property rights in China; it was the first device to successfully apply this technology to manufacture a GaN-based 512-pixel linear array detector. Its advantages include the ability to tune the detected UV band through the composition of the GaN-based material, so special UV film filters are not required, which makes this new linear array detector flexible and highly efficient for imaging real objects in UV remote sensing. Second, a prototype UV camera was completed using the GaN-based 512-pixel UV linear array detector for push-broom imaging, with an IFOV of 500 μrad, nadir and limb viewing angles of 14.67°, and an SNR better than 1000 under a standard solar constant; the structure of the camera is described, including the system characteristics, optics, and electronic modules. Third, UV images of actual outdoor objects were acquired for the first time. The quality of the UV push-broom images was good, and all camera parameters were met, so the new UV push-broom imaging technology based on a GaN linear array detector was successfully validated. In the future, this technology will be applied to the detection of marine oil spill pollution, preparing for UV imaging remote sensing from airborne or space platforms at medium to high spatial resolution, and it can also be applied to deep space probes, ozone opacity detection, and so on. In conclusion, it is significant for the development of UV remote sensing.
Extended dynamic range of ultra-high speed gated microchannel plate for x-ray framing camera
Author(s):
Jingsheng Pan;
Jingwen Lv;
Zhurong Cao;
Shenye Liu;
Shulin Liu;
Yanhong Li
Abstract:
X-ray framing cameras (XFC) based on an ultra-high-speed gated microchannel plate (MCP) have been deployed as a routine diagnostic in laser-driven Inertial Confinement Fusion (ICF) experiments on domestic facilities for several years. Typically, these XFC devices use a normal MCP of 500 μm thickness and 12 μm pore size and achieve an optical temporal gate of less than 100 picoseconds, but they suffer a broadened temporal response under heavy exposure because of the limited dynamic range of the normal MCP. We developed a 56 mm format MCP with 250 μm thickness and 6 μm pore diameter, whose objective is to improve the optical temporal gate and the dynamic range of the upgraded XFC. This MCP is fabricated from a specially designed low-resistance glass; its reduced thickness, small pore size, and increased gain linearity give it an ultra-fast temporal response and an extended dynamic range. In this paper, we review the mechanisms that limit the temporal response and gain linearity of the ultra-high-speed gated MCP applied to the XFC, and we describe the design principles and development of this ultra-fast, extended-dynamic-range, large-format MCP. The MCP will be assembled into the upgraded XFC, which is designed by CAEP and is currently in the final design stage.
Wide baseline stereo matching based on double topological relationship consistency
Author(s):
Xiaohong Zou;
Bin Liu;
Xiaoxue Song;
Yang Liu
Abstract:
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching. A novel scheme called double topological relationship consistency (DCTR) is presented. The double topological configuration combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on powerful invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are placed in very different orientations. The epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision in wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on image pairs.
The research on island change detection techniques of multiple-band oriented high resolution remote sensing image
Author(s):
HanSong Zhang;
Difeng Wang;
Delu Pan
Abstract:
Digital change detection is the computerized process of identifying changes in the state of an object, or other earth-surface features, between different dates. In recent years, a large number of change detection methods have evolved that differ widely in refinement, robustness, and complexity. Some traditional change detection methods can no longer be adapted to high-resolution remote sensing images, and the main trend of remote sensing change detection is moving from the pixel level to the object level. In this paper, from the perspective of object-oriented change detection in remote sensing images, an unsupervised technique for change detection (CD) in very high geometrical resolution images is proposed, based on the use of morphological filters. This technique integrates the nonlinear and adaptive properties of morphological filters with a change vector analysis (CVA) procedure. Different morphological operators are analyzed and compared with respect to the CD problem. Alternating sequential filters by reconstruction proved to be the most effective, permitting the preservation of the geometrical information of the structures in the scene while filtering the homogeneous areas. Two multi-temporal SPOT5 remote sensing images were collected to analyze change detection for YangSan Island with the procedure described above. Experimental results confirm the effectiveness of the proposed technique: it increases the accuracy of CD in high-resolution remote sensing change detection compared with the standard CVA approach.
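As a rough illustration of the processing chain described above, the following sketch (Python, assuming numpy and scikit-image are available and that the two dates are already co-registered single-band arrays) applies an opening/closing by reconstruction to each date, computes the CVA change magnitude, and thresholds it with Otsu's method; it is a simplified stand-in for the alternating sequential filters by reconstruction used in the paper, not the authors' implementation.

import numpy as np
from skimage.morphology import reconstruction, disk, erosion, dilation
from skimage.filters import threshold_otsu

def open_close_by_reconstruction(img, radius=3):
    """Morphological opening then closing by reconstruction (one ASF stage)."""
    se = disk(radius)
    opened = reconstruction(erosion(img, se), img, method='dilation')
    closed = reconstruction(dilation(opened, se), opened, method='erosion')
    return closed

def cva_change_map(img_t1, img_t2, radius=3):
    """Filter both dates, then threshold the CVA magnitude with Otsu's method."""
    f1 = open_close_by_reconstruction(img_t1.astype(float), radius)
    f2 = open_close_by_reconstruction(img_t2.astype(float), radius)
    magnitude = np.abs(f2 - f1)   # single-band CVA reduces to a difference magnitude
    return magnitude > threshold_otsu(magnitude)

# Synthetic example standing in for the two SPOT5 acquisitions
t1 = np.random.rand(128, 128)
t2 = t1.copy()
t2[40:80, 40:80] += 0.5          # simulated change
print(cva_change_map(t1, t2).sum(), "changed pixels")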
The calibration of faint simulation star magnitude based on single photon count technique
Author(s):
Xin-ji Gan;
Jin Guo;
Shu-yan Xu
Abstract:
A calibration method for the faint star magnitudes of a star scene simulation device is proposed in this paper. In studies of simulated star magnitude, luminometers and CCD devices are the usual calibration instruments used to measure the illumination intensity and calibrate the magnitude. However, if the simulated magnitude is only the sixth magnitude, the illumination intensity is only 1.0×10^-8 lux, which is about the lowest level a commercial luminometer can detect, so simulated star magnitudes fainter than the sixth magnitude cannot be calibrated with a luminometer. Likewise, CCD devices require an additional cooler in this case. When the single-photon character of the light appears because of the low luminosity of the simulated light source, the simulated star magnitude can be calibrated by measuring its radiated photon flux with the single photon counting method. In this paper, a single-photon detection scheme based on a compactly designed PMT for measuring the radiation level of the simulated star magnitude is presented. In particular, a spectrum-matching method is shown theoretically to be an effective means of selecting the PMT photocathode type. For a simulated star in the visible waveband, the analysis indicates that a tri-alkali cathode is the best choice, after comparing the signal-to-noise ratios of photon detectors with several PMT photocathode materials on the basis of the spectrum-matching ratios between different source spectra and different cathode materials. Experiments show the relationship between the PMT control voltage and its dark count, and between the ambient temperature and the dark count, demonstrating that the dark count is only a few tens of counts per second at room temperature. Such a low dark count avoids the need for a bulky cooler and makes it convenient to install the detector on the star scene simulation equipment. Finally, in the calibration experiment on simulated star magnitudes, the calibration capability is confirmed to reach magnitude 12, with a calibration error within ±0.2 magnitude.
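The magnitude scale underlying this calibration is logarithmic, so converting between a measured photon count rate and a simulated magnitude is a one-line calculation. The sketch below (Python; the zero-magnitude reference rate N0 is an arbitrary placeholder, not a value from the paper) shows the conversion in both directions.

import math

N0 = 1.0e7  # assumed count rate (counts/s) for a magnitude-0 source at the detector

def count_rate_from_magnitude(m, n0=N0):
    """Expected count rate for a source of magnitude m (Pogson's relation)."""
    return n0 * 10 ** (-0.4 * m)

def magnitude_from_count_rate(rate, n0=N0):
    """Magnitude inferred from a measured count rate."""
    return -2.5 * math.log10(rate / n0)

rate_m6 = count_rate_from_magnitude(6.0)   # a sixth-magnitude simulated star
print(f"m = 6 -> {rate_m6:.0f} counts/s")
print(f"back-converted magnitude: {magnitude_from_count_rate(rate_m6):.2f}")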
High-frame-rate intensified shuttered EMCCD camera and performance measurement
Author(s):
Ming-an Guo;
Qun-shu Wang;
Bin-kang Li;
Shao-hua Yang;
Jing-tao Xia
Abstract:
For low-light-level applications, a compact and fully integrated, high-frame-rate, intensified, shuttered electron multiplying charge-coupled device (EMCCD) digital image acquisition and analysis system has been developed. The system integrates high-speed data acquisition, image playback, and image processing features. Based on a backside-illuminated, 128×128-pixel, frame-transfer electron multiplying CCD imager with high quantum efficiency and a single video output, a camera operating at up to 800 frames per second has been manufactured. The camera performs low-light-level imaging by means of electron multiplication. It is lens-coupled to a second-generation (Gen II) image intensifier, which makes it an IEMCCD camera. The system design is described, including the clock sequence generation for the image sensor, the clock drivers, the video signal processing, high-speed optical fiber data transmission, and high-speed data acquisition. The dynamic range and sensitivity of the EMCCD camera are discussed, and the measured results are given.
Study on CCD size detecting technology based on imaging
Author(s):
Donglin Yang;
Peng Zhao;
Lei Gu
Abstract:
The portable CCD size detecting system is characterized by non-contact measurement and convenience of use, and it can be operated on site where the surroundings are complex and difficult to access. Based on an introduction to the operating principle of the portable CCD size detecting system and an analysis of the various factors that affect measuring accuracy, this paper presents several measures for improving the detection accuracy, such as imaging with a narrow field of view, data acquisition by laser ranging, minimal-value acquisition by manual scanning, and automatic adjustment of the CCD exposure time. The detection accuracy of the system is thereby improved.
Influence analysis of the scroll on the image quality of the satellite camera
Author(s):
Chao Fan;
Hong-wei Yi;
Yi-tao Liang
Abstract:
The object distance of a high-resolution satellite camera changes when the camera performs scroll imaging, which causes not only a change of the image scale but also a variation of the velocity-to-height ratio (V/H) of the satellite. The change of the V/H induces asynchronization between the image motion and the charge packet transfer on the focal plane, which seriously degrades the image quality of the camera. Therefore, the variation of the relative velocity and the height during scroll imaging of the satellite was studied, and the expression for the V/H was derived. On this basis, the influence of the V/H on the image quality was studied for two variables: the latitude and the scroll angle. To illustrate this effect quantitatively, for a given circular polar orbit, the deterioration of the image quality caused by scroll imaging was calculated for different integration numbers of the camera, and the adjustment interval of the row integration time and the range of the scroll angle were computed. The results show that, when the integration number of the camera is 32 or 64, the permitted scroll angles are 29.5° and 16°, respectively, for an image-motion MTF greater than 0.95, which provides a helpful engineering reference for how the image quality changes during scroll imaging of the satellite camera.
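A commonly used model for the degradation quantified here treats the mismatch between the image motion and the TDI charge transfer as a linear smear. The abstract does not give the authors' exact expression, so the following standard form is stated only as an assumption for orientation:

% Image-motion MTF of a TDI camera when the image velocity and the
% charge-transfer velocity are mismatched by delta per integration stage:
\[
  \mathrm{MTF}_{\text{motion}}(f)
  = \left|\frac{\sin(\pi N \delta f)}{\pi N \delta f}\right|,
\]
% N     : integration (TDI stage) number
% delta : residual image displacement per stage caused by the V/H error
% f     : spatial frequency

Larger N or a larger scroll angle increases the total smear N·delta and lowers this MTF, which is consistent with the permitted scroll angle shrinking from 29.5° at N = 32 to 16° at N = 64.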
A defect detection scheme for high-end CMOS image sensor
Author(s):
Li Liu
Abstract:
Defect detection is a critical process in image sensor production. Many systems have been designed for low-end CMOS sensors in applications such as mobile phones or webcams. As the industry steps into high-end application fields such as motion pictures, higher-performance sensors are produced with improved technologies, and these are held to different quality standards from their low-end counterparts. In this paper, a new blemish detection scheme for high-end CMOS image sensors is proposed. Defective pixels, columns/rows, and clusters on the sensors are detected using different image processing algorithms, and the criteria and methods are adjusted according to the different regions of the image sensor. The tested sensors are then classified according to the test results. The detection data are also stored for future video processing purposes. The efficiency of the scheme is demonstrated by experiments conducted on a high-speed, high-resolution CMOS sensor.
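The abstract does not spell out the individual algorithms, so the sketch below (Python with numpy/scipy; thresholds and the flat-field test frame are illustrative assumptions) shows one common way to flag defective pixels and columns: pixels are compared against a local median, and column means against a smoothed column profile.

import numpy as np
from scipy.ndimage import median_filter

def find_defective_pixels(flat, k=5, sigma=6.0):
    """Flag pixels deviating from their local median by more than sigma times the noise."""
    local_med = median_filter(flat.astype(float), size=k)
    residual = flat - local_med
    noise = 1.4826 * np.median(np.abs(residual))   # robust (MAD) noise estimate
    return np.abs(residual) > sigma * noise

def find_defective_columns(flat, sigma=6.0):
    """Flag columns whose mean deviates strongly from the neighbouring columns."""
    col_mean = flat.astype(float).mean(axis=0)
    resid = col_mean - median_filter(col_mean, size=9)
    noise = 1.4826 * np.median(np.abs(resid))
    return np.abs(resid) > sigma * noise           # one boolean per column

# Synthetic flat-field with one hot pixel and one dark column
flat = np.full((256, 256), 1000.0) + np.random.normal(0, 3, (256, 256))
flat[100, 50] += 500.0
flat[:, 200] -= 80.0
print(find_defective_pixels(flat).sum(), "defective pixels,",
      find_defective_columns(flat).sum(), "defective columns")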
Research and development of a stabilizing holographic interference fringe system based on linear CCD
Author(s):
Chaoming Li;
Xinrong Chen;
Jianhong Wu;
Jianzhi Ju;
Yayi Zhu;
Zuyuan Hu
Abstract:
A method to stabilize the holographic interference fringes during the holographic recording process is put forward in this paper. As the kernel of the method, a negative feedback system based on a linear CCD and piezoelectric ceramics (PZT), which compensates the random fringe drift caused by various external vibrations during long recording processes, is introduced in detail. A proportional-integral-derivative (PID) controller is adopted to drive the PZT so that the drift of the interference fringes is compensated accurately; thus the interference fringes can be frozen. Experimental results show that, by controlling the optical path difference, this negative feedback system can effectively compensate the random fringe drift caused by various external vibrations during long recording processes. With this system, the mean squared error of the fringe drift can be kept below λ/60, and the quality of the holographic grating is greatly improved.
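A minimal sketch of the control loop described above (Python; the gains, sampling period, and the conversion from the CCD fringe position to a PZT correction are illustrative assumptions, and read_linear_ccd()/set_pzt_voltage() are hypothetical hardware interfaces):

import numpy as np

class PID:
    """PID controller driving the PZT to hold the fringe position."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def fringe_drift(ccd_line, reference_position):
    """Fringe drift in pixels: intensity centroid of the linear-CCD line vs. its reference."""
    idx = np.arange(len(ccd_line))
    return (idx * ccd_line).sum() / ccd_line.sum() - reference_position

pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.01)
# One control step (hardware calls are hypothetical and therefore commented out):
# line = read_linear_ccd()
# correction = pid.update(fringe_drift(line, reference_position=512.0))
# set_pzt_voltage(correction)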
Research of real-time wide field image merging based on multi-cameras
Author(s):
Tao Xu;
Zhao-feng Cen;
Xiao-tong Li
Abstract:
Wide-field images are widely used in virtual reality, video compression and transmission, and medical apparatus. Such an image is usually obtained either with a single wide-angle lens such as a fisheye lens or by merging images scanned with conventional cameras. This paper proposes a system in which two cameras are placed at a fixed distance from each other, so that a large field of view is split into smaller parts and the images are captured synchronously. Because the adjacent cameras are fixed in position, the field range corresponding to each image is easily known, and a stitching algorithm based on the correlation coefficient of corresponding pixel lists is used to merge the images. As shown by the experiments, the system proposed in this paper is simple and effective for obtaining wide-field images with both good real-time performance and high image resolution.
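A minimal sketch of the correlation-based stitching step (Python/numpy; the search range and the assumption of a purely horizontal overlap are simplifications, not details taken from the paper):

import numpy as np

def best_overlap(left, right, min_overlap=20, max_overlap=200):
    """Overlap width that maximises the correlation coefficient between the
    right edge of `left` and the left edge of `right`."""
    best_w, best_r = min_overlap, -1.0
    for w in range(min_overlap, max_overlap + 1):
        a = left[:, -w:].ravel().astype(float)
        b = right[:, :w].ravel().astype(float)
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r

def stitch(left, right):
    """Merge the two images, averaging the overlapping columns."""
    w, _ = best_overlap(left, right)
    blend = (left[:, -w:].astype(float) + right[:, :w].astype(float)) / 2.0
    return np.hstack([left[:, :-w], blend.astype(left.dtype), right[:, w:]])

# Synthetic test: split one image into two overlapping halves and re-merge them
img = (np.random.rand(240, 640) * 255).astype(np.uint8)
left, right = img[:, :400], img[:, 320:]
print("recovered overlap:", best_overlap(left, right)[0], "(true value: 80)")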
Optimized design of the inside surface of supersonic missile's elliptical dome
Author(s):
Qun Wei;
Yang Bai;
Hui Liu;
Hongguang Jia;
Ming Xuan
Abstract:
The dome is the head of a missile and has a strong effect on the missile's drag. When a missile attacks at high speed, the drag caused by a spherical dome accounts for 50%-60% of the whole missile's drag [1]. To reduce the dome's drag, the idea of "conformal optics" has been studied in several papers; the state of the art of conformal optics is described in the paper by James P. Mills [2]. However, most of this work focuses on the design of the dome's outer shape. This paper presents a way to design the dome's inner surface.
This paper is composed of three main parts. The first part develops the calculation of the flow field around the dome and the shock wave. The second section describes how the optical optimization function is constructed. Finally, the last section presents the results.
Spatial periodicity for coding design in structured light system
Author(s):
Li Xu;
Zhihua Dong;
Zhijiang Zhang
Abstract:
The principle of using spatial periodicity for coding is proposed in this paper, and the maximum stripe deformation (due to depth changes on the surface) and the measuring resolution limit are analyzed. When spatial periodicity is used for coding, the resolution is greatly improved, or the number of patterns is greatly reduced for real-time structured light systems. When spatial periodicity is exploited in the coding design, the number of code words is limited according to the principle of spatial periodicity, which determines the maximum measuring resolution; equivalently, a lower limit on the number of patterns is defined, which determines the maximum response speed of a real-time structured light system. A novel coded pattern based on spatial periodicity for real-time structured light systems is presented. These coding patterns allow range scanning of moving objects with high measurement resolution.
A three-dimensional measurement method based on mesh candidates assisted with structured light
Author(s):
Gang Xu;
Wenming Zhang;
Haibin Li;
Bin Liu
Abstract:
Rendering three-dimensional information of a scene from optical measurement is very important for a
wide variety of applications such as robot navigation, rapid prototyping, medical imaging, industrial
inspection, etc. In this paper, a new 3D measurement method based on mesh candidates with structured light illumination is proposed. The vision sensor consists of two CCD cameras and a DLP projector.
The measurement system combines the technology of binocular stereo vision and structured light, so as
to simplify the process of acquiring depth information using mesh candidates. The measurement
method is based on mesh candidates which represent the potential depth in the three dimensional scene.
First, the mesh grid is created along the axes of the world coordinate system, and the nodes are considered as depth candidates on the object surface. Then each group of mesh nodes varying along the z axis is mapped onto the captured image planes of both cameras. At last, according to
the similarity measure of the corresponding pixel pairs, the depth of the object surface can be obtained.
The matching process is between the pixels in both camera planes corresponding to the spatial mesh
candidates. Aided by the structured light pattern, the accuracy of the measurement system is improved. Projecting a periodic sawtooth pattern onto the scene with structured light makes the measurement easier, while the computational cost does not increase since the projector does not need to be calibrated. The 3DS MAX and Matlab software were used to simulate the measurement system and reconstruct the surface of the object. After the positioned cameras have been calibrated with the Matlab calibration toolbox, the projector projects the structured light pattern onto the scene. As indicated by the experimental results, the mesh-candidate-based method is clearly superior in computational cost and accuracy. Compared with
traditional methods based on image matching, our method has several advantages: (1) the complex
feature extraction process is no longer needed; (2) the epipolar constraint is replaced by mesh
candidates so as to simplify the stereo matching process; (3) the candidate selection strategy makes the transformation from two-dimensional to three-dimensional coordinates unnecessary.
Performance characteristics of solar blind UV image intensifier tube
Author(s):
Hongchang Cheng;
Feng Shi;
Liu Feng;
Hui Liu;
Bing Ren;
Lian-dong Zhang
Abstract:
The UV radiation in the spectral range of 200-320 nm is almost zero at the earth's surface because it is strongly absorbed by ozone in the atmosphere, so this spectral range is called the "Solar Blind Range". Because detection in the solar blind band is not affected by the solar radiation transmitted through the atmosphere, solar blind UV (SBUV) radiating objects are easy to detect as soon as they appear near the earth's surface. If UV photoelectric imaging devices are used to observe them, a high-contrast picture is acquired in which the bright object image lies on an almost completely black background, and the picture is easy to identify by the human eye or by another optical sensor such as a CCD. A solar blind UV (ultraviolet) image intensifier tube (SBUV-IIT) is a special image intensifier tube developed from the double-proximity-focused generation of low-light-level image intensifier tubes, and it responds only in the spectral range of 200-320 nm. The SBUV-IIT can be used to observe faint UV-radiating objects because its UV sensitivity is high, its response time is fast, and its radiation gain is high. A low-altitude-flying missile can be observed by detecting its exhaust plume with an SBUV-IIT, because the plume emits plenty of SBUV; in this way a high-contrast UV picture can be acquired for missile warning, and this approach has been widely used in foreign ordnance equipment.
The SBUV-IIT is described in this paper. It is a double-proximity-focused MCP (microchannel plate) image intensifier tube with an 18 mm active diameter of the photocathode and phosphor screen. The input and output windows are quartz glass and a fiber-optic faceplate, respectively, and the photocathode material and phosphor screen are a tellurium-cesium compound and P20. The tube has been developed with a limiting UV resolution of 39 line pairs per millimeter, a spectral response of 200-320 nm, a maximum photocathode sensitivity of 29.5 milliamperes per watt at a wavelength of 254 nm, and a mass of 35 g. It can easily be coupled to a CCD. It is well suited for fingerprint identification and camera systems, and it can also be used for UV hail testing, UV earthquake forecasting, and so on.
Camera calibration method for dimensional measurement of heavy forging in large scale
Author(s):
Bin Liu;
Chunhai Hu;
Xiaoxue Song
Abstract:
Camera calibration plays an important role in a stereovision system for the dimensional measurement of heavy forgings. Because of the intense vibration, the camera parameters must be recalibrated after every action of the hydraulic press. This paper presents a method that uses the scene geometry to calibrate the cameras. In the context of heavy machinery environments, the constraints that can be used are parallelism and orthogonality, and they lead to geometrically intuitive calibration methods. Large forging equipment such as the hydraulic press is a geometrically constrained object and is insusceptible to the vibration, which provides natural prior knowledge and constraint conditions for 3-D reconstruction. The method focuses on the calibration of the extrinsic parameters, which are subject to change because of workspace factors; the intrinsic parameters were calibrated in advance by an off-line method and are assumed to be invariant. The results of simulation experiments demonstrate that the camera parameters can be calibrated effectively and that the real-time requirement is met.
Wavelet edge detection based on self-adjusted directional derivative
Author(s):
Jun-fang Wu;
Gui-xiong Liu
Abstract:
A multi-scale wavelet edge detection algorithm based on a directional derivative that can adjust itself is proposed; it achieves high precision and excellent immunity to noise. Standard methods apply the wavelet transform to images along the horizontal and vertical directions and are suited to the detection of horizontal or vertical edges; if they are used to detect slanted edges, the precision declines. Other existing wavelet algorithms that take direction information into account can only process images along certain specified directions, and the difficulty these methods face is the trade-off between computational complexity and orientation accuracy. In this paper, a wavelet edge detection approach based on a directional derivative that adapts its orientation to the edge direction is investigated. The wavelet transforms are carried out on three scales. At each point of the image, the directional derivative is designed locally based on the computational results of the neighboring scale so as to acquire the self-adjusting characteristic, which improves the precision while hardly increasing the complexity. In addition, the relationship between the Lipschitz exponent and the magnitudes of the wavelet transform is used to suppress noise. Finally, edge detection experiments on noise-corrupted images were carried out. The results show that our method achieves both good visual quality and high PSNR, which is enhanced by 3.6 and 6.6 percent respectively compared with two other wavelet algorithms.
Application of image processing on analyzing the structure of TiO2 nanocrystals
Author(s):
Shu-hua Liu;
Yan-shuang Kang;
Yan-xia Gu
Abstract:
In this work, we present some practical methods for analyzing and processing TEM (transmission electron microscope) images with Matlab, including adjusting the images, outlining the units, filtering the noise in the images, and so on. To improve the resolution of the TEM pictures, we use templates (form boards) to process the elements of the input pictures; the templates are set up according to the characteristics of the input TEM images. To measure the dimensions of the nanocrystals more precisely, we identify the points with large changes in grey level using the function "edge" to obtain the structural information of the images. To make the morphology of the crystals clearer, we adjust the images by mapping the brightness of the original patterns to a new range of values, which can be realized with the function "imadjust". Obtaining the brightness distribution in the images helps us analyze the dispersion of the nanocrystals; the brightness distribution can be obtained with the function "improfile", which computes the intensity values in the image by interpolation.
Satellite high resolution imaging simulation in space field
Author(s):
Xiaomei Chen;
Ting Li;
Bo Xue;
Xuan Zhang;
Gang Chen;
Guoqiang Ni
Abstract:
In this paper, a new satellite image simulation method in the space field (spatial domain) is proposed. Following the path of satellite imaging transmission, the simulation is divided into three parts: atmospheric transmission simulation; optical system imaging simulation; and CCD sampling, integration, and quantization simulation. The experimental results show that the simulation method in the space field produces images closer to reality than MTF-based simulation, provides the detailed effects along the imaging path from the ground to the CCD, and can be an effective tool for estimating image quality before the satellite is in orbit.
Effects of land use and land cover change on ecosystem service values in oasis region of northwest China
Author(s):
Qing Huang;
Dan-dan Li;
Hong-bin Zhang
Abstract:
Ecosystem services are "the benefits of nature to households, communities, and economies", which can be obtained directly or indirectly from ecosystem structure, functions, or processes. Land use and land cover change (LUCC) directly affects ecosystem functions and ecosystem service values, with far-reaching consequences. Located in a typical mountain-oasis-desert system of an arid region, the Qiemo oasis is one of the most important oases in the Tarim River Basin, China. Taking the Qiemo oasis as a case study, this paper analyzes land use and land cover change from 1989 to 2004 using remote sensing images acquired in 1989 and 2004. Furthermore, based on the Chinese ecosystem service values per unit area of different ecosystem types, the dynamic changes of ecosystem service value caused by LUCC were evaluated. The results indicate that the areas of forest, Gobi desert, and saline land decreased from 1989 to 2004, while the areas of farmland, water bodies, and built-up land increased; the forest land and farmland changed the most, with forest land decreasing by 41.73% and farmland increasing by 58.04%. The total ecosystem service value decreased from 22,265×10^4 US$ to 21,373×10^4 US$ over the past 15 years, and the reduction of forest land was responsible for the reduction of ecosystem service value. Population growth and economic development were the main driving forces of these changes. Finally, some measures are put forward for sustainable development in this region.
An overview of crop growing condition monitoring in China agriculture remote sensing monitoring system
Author(s):
Qing Huang;
Qing-bo Zhou;
Li Zhang
Abstract:
China is a large agricultural country, and understanding agricultural production conditions in a timely and accurate manner matters for government decision-making, agricultural production management, and the general public. The China Agriculture Remote Sensing Monitoring System (CHARMS) can quickly monitor changes in crop acreage, crop growing condition, and agricultural disasters (drought, flood, frost damage, pests, etc.), and can predict crop yield. The basic principles, methods, and routine operation of crop growing condition monitoring in CHARMS are introduced in detail in this paper. CHARMS monitors the growing condition of wheat, corn, cotton, soybean, and paddy rice with MODIS data, using an improved NDVI difference model. First, MODIS data are received and processed every day, and the maximum NDVI values of the main crops over each fifteen-day period are generated. Then, to assess the growing condition of a given crop in a given period (usually every fifteen days), the system compares the remote sensing index (NDVI) of that period with the data for the same period in previous years (usually the last five years); the NDVI difference indicates the spatial variation of crop growing condition in that period. Moreover, meteorological data on temperature, precipitation, and sunshine, as well as field investigation data from 200 network counties, are used to adjust the model parameters. Finally, crop growing condition is assessed at four scales (county, province, main producing area, and nation), and spatial distribution maps of crop growing condition are produced.
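As a minimal illustration of the NDVI difference model described above (Python/numpy; the compositing and comparison logic is a simplification based only on what the abstract states, and the array names are chosen for the example):

import numpy as np

def max_composite(daily_ndvi):
    """15-day maximum-value NDVI composite from a stack of daily images
    (shape: days x rows x cols)."""
    return np.nanmax(daily_ndvi, axis=0)

def growing_condition_index(current_composite, historical_composites):
    """Difference between the current 15-day composite and the mean of the
    same period over previous years (e.g. the last five years)."""
    historical_mean = np.nanmean(historical_composites, axis=0)
    return current_composite - historical_mean   # > 0: better than usual, < 0: worse

# Synthetic example: 15 daily images for the current period, 5 historical composites
current = max_composite(np.random.uniform(0.2, 0.8, size=(15, 100, 100)))
history = np.random.uniform(0.2, 0.8, size=(5, 100, 100))
diff = growing_condition_index(current, history)
print("share of pixels better than the historical mean:", (diff > 0).mean())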
Study of the square grid pattern in dielectric barrier discharge by a CCD digital camera
Author(s):
Lifang Dong;
Shuai Wang;
Han Yue;
Hong Xiao;
Yujie Yang;
Weili Fan
Abstract:
A square grid pattern generated in a dielectric barrier discharge device is recorded with a CCD digital camera. Two-dimensional Fourier spectral analysis shows that the dot-line pattern is selected by two modes and the hexagonal pattern by a single mode, while the square grid pattern is formed by three-wave resonance. For further investigation, the intensity distribution of the square grid pattern is analyzed. The results show that the structure of the square grid pattern is regular and that the intensity distribution along a line contains two types of intensity peaks, indicating that each cell of the square grid pattern is composed of eight spots.
Intelligent real-time CCD data processing system based on variable frame rate
Author(s):
Su-ting Chen
Abstract:
To meet the need for image capture with a CCD on unmanned aerial vehicles, a real-time high-resolution CCD data processing system based on a variable frame rate is designed. The system consists of three modules: a CCD control module, a data processing module, and a data display module. In the CCD control module, real-time flight parameters (e.g., flight height, velocity, and longitude) are received from GPS through a UART (Universal Asynchronous Receiver Transmitter), and the variable frame rate is calculated from the corresponding flight parameters. Based on the calculated frame rate, the CCD external synchronization pulse is generated under FPGA control, and the CCD data are then read out. In the data processing module, data segmentation is designed to extract a ROI (region of interest) whose resolution equals the valid data resolution of the HDTV standard conforming to SMPTE (1080i). On one hand, a ping-pong SRAM storage controller is designed in the FPGA to store the ROI data in real time; on the other hand, according to the needs of intelligent observation, the window position can be changed so that a flexible region of interest is obtained. In the data display module, a dedicated video encoder performs the data format conversion: the stored data are packed into the HDTV format by creating the corresponding format information in the FPGA, and a high-definition analog video signal is produced through internal register configuration. The entire system has been implemented in an FPGA and validated, and it has been used in various real-time CCD data processing situations.
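The abstract does not give the frame-rate formula, so the sketch below (Python) shows one plausible way such a rate could be derived from the flight parameters: the frame (line) rate is tied to the ground speed and the ground sample distance so that successive acquisitions stay contiguous. All symbols and numerical values are illustrative assumptions.

def ground_sample_distance(height_m, focal_length_m, pixel_pitch_m):
    """Ground footprint of one pixel for a nadir-looking camera."""
    return height_m * pixel_pitch_m / focal_length_m

def required_frame_rate(ground_speed_mps, height_m, focal_length_m,
                        pixel_pitch_m, lines_per_frame=1):
    """Frame (or line) rate needed so that successive acquisitions are contiguous."""
    gsd = ground_sample_distance(height_m, focal_length_m, pixel_pitch_m)
    return ground_speed_mps / (gsd * lines_per_frame)

# Example: 3000 m altitude, 50 m/s ground speed, 150 mm lens, 9 um pixels
rate = required_frame_rate(50.0, 3000.0, 0.150, 9e-6)
print(f"required rate: {rate:.1f} Hz")   # about 278 Hz for these assumed values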
Optical investigation on one dimensional dielectric barrier discharge by photomultiplier tubes
Author(s):
Xue-chen Li;
Na Zhao
Abstract:
In this paper, a simple optical system, composed of an imaging system and two photomultiplier tubes, is used to study the spatio-temporal evolution of pattern formation. The patterns are produced in a one-dimensional discharge device controlled by a dielectric barrier. The results indicate that the discharge is filamentary when the applied voltage is low, and becomes uniform when the applied voltage is high enough. The discharge current of the former is quite weak because the discharge area is very small, so the discharge current signal can hardly be discerned from the displacement current; the discharge current signal of the latter is completely submerged in the displacement current. A photomultiplier tube is therefore used to detect the light emission from the discharge: the light emission signal can be obtained because the photomultiplier tube amplifies it by several tens or hundreds of times, and under this circumstance the discharge dynamics can be investigated. The photomultiplier tube is clearly the crucial piece of equipment in this optical investigation. Furthermore, the light emission signals from the total discharge and from a chosen part of the discharge are amplified simultaneously by using the two photomultiplier tubes, and the discharge characteristics and mechanism are analyzed.
A rapid 3D shape reconstruction method from silhouette images
Author(s):
Shuai Liu;
Gang Han;
Lingli Zhao
Abstract:
Three-dimensional (3D) shape reconstruction remains an unresolved and active research topic at the intersection of computer vision and digital photogrammetry. In this paper, we focus on 3D object shape reconstruction from uncalibrated images and put forward a hybrid method. Recovering the 3D shape comprises two steps: first, the homography transformation is calculated to obtain the outlines; second, the height of the reconstructed object is calculated from a reference height using the vanishing point and vanishing line. This hybrid method requires no camera calibration or estimation of the fundamental matrix; hence, it reduces the computational complexity by eliminating the need for abundant conjugate points. The experiments show that the method is valid and yields useful results.
Investigation of a novel light source by fast opto-electronic device
Author(s):
Xuechen Li;
Pengying Jia;
Na Zhao;
Zhijui Liu;
Xiadong Tian
Abstract:
In this paper, a fast opto-electronic device together with an optical system is used to investigate a novel ultraviolet light source. The ultraviolet light source is generated by a dielectric barrier discharge in argon at low pressure. Experimental results indicate that the light source is uniform when the gas pressure is below 0.1 atm, whereas localized discharge (discharge filaments) can be observed when the gas pressure is 0.4 atm. The light emission signals from the discharge are detected by the fast opto-electronic device (Hamamatsu H7826-01) as the amplitude of the applied voltage is increased. The results show that the discharge at low voltage (slightly above the breakdown voltage) has two discharge pulses per half cycle of the applied voltage, the duration of each pulse being more than 1 μs, and that the number of discharge pulses increases with the applied voltage. An intensified charge coupled device (ICCD) is usually used to investigate the mechanism of the uniform discharge at low pressure; in our experiment, however, an optical system consisting of an image-forming block and a fast opto-electronic device is used, and spatially resolved measurements of the discharge can be made selectively. The results indicate that the uniform light source is composed of many micro-discharges distributed randomly over the electrode, and the duration of a micro-discharge is about several tens of nanoseconds. These results are of great importance for the generation and application of this ultraviolet light source.
Robust materials classification based on multispectral polarimetric BRDF imagery
Author(s):
Chao Chen;
Yong-qiang Zhao;
Li Luo;
Dan Liu;
Quan Pan
Abstract:
When light is reflected from an object surface, its spectral characteristics are affected by the surface's elemental composition, while its polarimetric characteristics are determined by the surface's orientation, roughness, and conductance. Multispectral polarimetric imaging records both the spectral and polarimetric characteristics of the light, adding dimensions to the spatial intensity typically acquired, and it can provide unique and discriminative information that may augment material classification techniques. However, because object surfaces are non-Lambertian, the spectral and polarimetric characteristics change with the illumination and observation angles; if the BRDF is ignored during material classification, misclassification is inevitable. To obtain a feature that makes material classification robust to non-Lambertian surfaces, a new classification method based on multispectral polarimetric BRDF characteristics is proposed in this paper. A support vector machine is adopted to classify targets in cluttered grass environments. The training sets were obtained under sunny conditions, while the test sets were acquired under three different weather and observation conditions. Finally, the classification results based on multispectral polarimetric BRDF features are compared with two other results, based on spectral information and on multispectral polarimetric information, under sunny, cloudy, and dark conditions respectively. The experimental results show that the method based on multispectral polarimetric BRDF features is the most robust, and its classification precision also surpasses that of the other two. When imaging objects in dark weather, it is difficult to distinguish different materials using spectral features alone, because the grey levels of the background and targets at each wavelength are very close; the method proposed in this paper efficiently solves this problem.
Applied research of the maximum classification square error method using linear CCD
Author(s):
Chunting Ma;
Ning Liu;
Chao Xiong;
Liqing Fang
Abstract:
The average threshold method and the maximum classification square error method are important statistical methods in digital signal processing, and their calculation results are compared. The constitution of the linear CCD measuring system is introduced. A comparison experiment was carried out with an instrument whose precision is better than 0.1. Under conditions of outside light interference, the experimental results meet the expectations of the measurement.
Iterative closed point algorithm used in 3D objects surface model
Author(s):
Lingli Zhao;
Shuai Liu;
Junsheng Li
Abstract:
When constructing a 3D model of an object lacking texture, the main difficulties for registration are the lack of feature points and the determination of point coordinates. The proposed method handles the registration problem with the Iterative Closest Point (ICP) algorithm, which only requires a procedure for finding the closest point on a geometric entity to a given point; ICP is a popular registration method when feature points are lacking. To compute the point coordinates, a projector is used to cast a clear and stable texture onto the surface of the textureless object, and a camera takes photographs as the image data for subsequent processing. Using curve detection and space intersection, spatial points on the surface of the sheet metal parts are obtained. The sub-models overlapping each other are registered by ICP, completing the 3D reconstruction. The feasibility of ICP is verified by the experimental results.
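A minimal sketch of one point-to-point ICP loop for rigid registration (Python with numpy/scipy; this is the textbook SVD-based formulation, not necessarily the exact variant used in the paper):

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Rigidly align `source` (N x 3) to `target` (M x 3) by iterating
    closest-point correspondences and the SVD (Kabsch) transform estimate."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # 1. closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)    # 2. best rigid transform via SVD
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                      # 3. apply and accumulate
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a small known rotation and translation
rng = np.random.default_rng(0)
target = rng.random((500, 3))
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
source = (target - 0.1) @ R_true.T               # rotated, shifted copy of the target
R_est, t_est = icp(source, target)
print("residual of R_est @ R_true vs. identity:", np.max(np.abs(R_est @ R_true - np.eye(3))))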
Research on CCD visual sensor-based embedded level measuring system for oil tankers
Author(s):
Le Song;
Yu-chi Lin;
Mei-rong Zhao;
Ying Wu
Abstract:
A new level measuring system for oil tankers based on machine vision is designed to enable closed-cabin operation and remote monitoring. The system adopts an ARM9 S3C2240 microchip as the central processing unit. With a high-precision macro-focusing CCD sensor and an image capture module, the system acquires images of the level ruler and processes them with a series of algorithms. Pre-processing of the captured ruler images, including binarization and denoising, is implemented to improve image quality. A grey-level projection procedure is used to extract the rectangular area containing the digit characters and to segment the digits into individual parts, and subsequent judgment strategies are executed to separate the exact digits. Each character is scanned with vertical and horizontal lines at various positions, and the numbers of pixel transition points are counted to distinguish the different digit characters in the recognition procedure. The scale in the viewing field can be accurately localized, so that an automatic reading is obtained. The experimental results for different oil levels indicate that the measuring accuracy of the system reaches ±0.1 mm and the automatic reading time is less than 0.5 s, demonstrating its high precision and high speed.
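A minimal sketch of the scan-line feature used for digit recognition above (Python/numpy; the scan positions and the toy glyph are illustrative assumptions): each character is scanned along a few fixed rows and columns, and the number of black/white transitions per line gives a small feature vector that separates the digits.

import numpy as np

def transition_count(line):
    """Number of 0/1 transitions along one scan line of a binary image."""
    return int(np.count_nonzero(np.diff(line.astype(np.int8))))

def scanline_features(binary_digit, rows=(0.25, 0.5, 0.75), cols=(0.25, 0.5, 0.75)):
    """Transition counts along a few horizontal and vertical scan lines."""
    h, w = binary_digit.shape
    feats = [transition_count(binary_digit[int(r * h), :]) for r in rows]
    feats += [transition_count(binary_digit[:, int(c * w)]) for c in cols]
    return feats

# Toy example: a crude 7x5 "0" glyph (1 = ink)
zero = np.array([[0, 1, 1, 1, 0],
                 [1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [0, 1, 1, 1, 0]])
print(scanline_features(zero))   # [2, 2, 2, 2, 2, 2] for this toy "0"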
Wide field multi-objects position detection through digital close-range photogrammetry
Author(s):
Yi Jin;
Chao Zhai;
Yonggang Gu
Abstract:
The digital close-range measurement technique developed from geodetic surveying and photogrammetry. At present it is mainly applied to wide-field detection with high relative accuracy, but its absolute accuracy is limited, typically from one micron to one millimeter. In this paper we apply this technique to the detection of the spatial coordinates of multiple optical fibers over a wide field. A non-contact on-line detection system was designed whose position detection accuracy is better than 0.03 mm over a 600 mm × 600 mm field, meaning that the system has both high relative accuracy and high absolute accuracy. To build this system, two aspects were mainly researched. First, CCD measurement error is the basis of photogrammetry; to control it, the influence of the speckle recognition algorithm, the light source, and the camera position on the measurement error was studied experimentally and theoretically. Some significant conclusions were obtained: the detection error of the gravity (centroid) method is about 0.03 pixel; the uniformity of the light source is important; and the position detection of static targets by conventional photogrammetry can achieve high accuracy, of the order of several microns, but when the objects are moving, the F-number, light source, speckle status, and imaging size can introduce additional measurement error of about a dozen to tens of microns. Second, how to detect objects over a wide field of view is a critical problem in photogrammetry, because a normal single-frame photographic field is only hundreds of millimeters while the measuring field is several meters. In this paper, surface measurement with a laser ranging device and the triangle intersection method are used to obtain the fibers' positions over the wide field of view, and several key technologies are adopted, such as precise calibration of the CCD camera, light-ray adjustment, sub-pixel image processing, and matching of corresponding image points. Multiple objects can be detected simultaneously in the wide field of view with these technologies, and the detection accuracy is increased. The experimental results show that the system is stable and reliable, with potential in precision measurement, industrial measurement, and other application areas.
Research on image separation and reconstruction method of single channel double spectrum low light level system
Author(s):
Chuang Zhang;
Lian-fa Bai;
Yi Zhang
Abstract:
Based on an analysis of dual-channel systems, a single-channel dual-band false-color night imaging principle based on inter-frame compensation is proposed, which realizes on a single-channel system, with a striped (raster) filter placed in front, what originally required a dual-channel dual-band system. Through the striped filter, the single-channel dual-spectrum low-light-level system receives, in a single channel, a striped image that contains low-light-level information from two spectral bands. For the separation and compensation of the dual-band low-light-level images, spectral separation and compensation reconstruction techniques were studied: striped low-light-level images were obtained from actual imaging of the scene, carrying the 'long'-wave and 'short'-wave information in turn, and a sample-block compensation method based on the correlation of the grey space, together with inter-frame compensation methods, was designed to compensate the separated dual-band images. Simulation experiments on dual-band low-light-level image separation and compensation were carried out, and the results indicate that the above methods are effective in the single-channel dual-spectrum color low-light-level system and achieve the goal of dual-band low-light-level image separation and compensation.
Laser linewidth measurement based on image processing and non-air gap F-P etalon
Author(s):
He-yong Zhang;
Wei-jiang Zhao;
De-ming Ren;
Yan-chen Qu
Abstract:
Laser linewidth measurement based on image processing and a non-air-gap (solid) F-P etalon is realized in this paper. First, the expression for linewidth measurement with a non-air-gap F-P etalon is derived from the theory of multi-beam interference. Second, the actual linewidth of a pulsed Nd:YAG laser is measured with this method. An interference pattern recorded by a CCD is used for digital image processing: the Canny operator is applied to extract the edges of the interference rings, so the inner and outer radii of each ring are acquired in pixels, and the actual physical sizes are then calculated through the corresponding transformation. Finally, an Nd:YAG laser with a 28 ns pulse width is used as the source; the experimental result obtained with the above method is 36.8 MHz, which agrees with the 34.2 MHz obtained by the Discrete Fourier Transform (DFT). As is well known, the frequency resolution of the DFT depends on the number of effective acquisition points, so cubic spline interpolation is introduced after the DFT, and a better result is achieved.
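The abstract does not reproduce the linewidth expression it derives, but for a solid (non-air-gap) F-P etalon the standard relation between the ring geometry and the linewidth has the following form, quoted here as the textbook result rather than the authors' exact expression:

% Free spectral range of a solid etalon of thickness d and refractive index n:
\[
  \Delta\nu_{\mathrm{FSR}} = \frac{c}{2 n d},
\]
% and the linewidth estimated from the measured ring diameters, where D_a and
% D_b are the outer and inner diameters of one broadened ring and D_k, D_{k+1}
% the diameters of two adjacent rings of the pattern:
\[
  \delta\nu = \Delta\nu_{\mathrm{FSR}}\,
  \frac{D_a^{2} - D_b^{2}}{D_{k+1}^{2} - D_k^{2}}.
\]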
Research and development of infrared object detection system based on FPGA
Author(s):
Jianhui Zhao;
Jianwei He;
Pengpeng Wang;
Fan Li
Abstract:
Infrared object detection is an important technique of digital image processing. It is widely used in automatic navigation,
intelligent video surveillance systems, traffic detection, medical image processing, etc. An infrared object detection system requires large storage and high-speed processing technology. The current development trend is toward systems implemented in hardware that run in real time with fewer operations and higher performance. As a major type of large-scale programmable logic device, the field programmable gate array (FPGA) can meet all the requirements of high-speed image processing, with the advantages of simple algorithm realization, easy programming, good portability, and reusability. Better results can therefore be obtained by applying FPGAs to infrared object detection systems.
According to the requirements, the infrared object detection system is designed on FPGA. By analyzing some of the
main algorithms of object detection, two new object detection algorithms called integral compare algorithm (ICA) and
gradual approach centroid algorithm (GACA) are presented. Implementing the system design in FPGA hardware enables high-speed processing and brings the advantages of both performance and flexibility. ICA is a new type of denoising algorithm with the advantages of lower computational complexity and shorter execution time; more importantly, it can be implemented in an FPGA conveniently. Based on the image preprocessing of ICA, GACA provides high positioning precision, with the advantages of insensitivity to the initial value and fewer convergence iterations. The experiments indicate that the infrared object detection system can perform high-speed infrared object detection in real time, with strong anti-jamming ability and high precision.
Verilog-HDL and its architecture are also introduced in this paper. With the engineering application in mind, the paper gives the specific design ideas and the flow of the method's realization in the FPGA device, and discusses how to describe the hardware system in Verilog-HDL. Based on the hardware architecture of the infrared object detection system, the component units of the system, such as the image data acquisition unit, the data pre-processing unit, and the logic control unit, are discussed in detail. The FPGA functions are designed and implemented in Verilog-HDL with a top-down method. The paper ends with the prospects of the project.
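The abstract names but does not detail the gradual approach centroid algorithm (GACA); a plausible reading is an iterative centroid refinement over a gradually shrinking window, sketched below in Python/numpy purely as an illustration (the window sizes, iteration schedule, and the name gaca_centroid are assumptions, not the authors' specification):

import numpy as np

def gaca_centroid(img, start, window_sizes=(31, 21, 11, 7)):
    """Iteratively refine a target centroid, shrinking the window each pass."""
    y, x = start
    for w in window_sizes:
        half = w // 2
        y0, y1 = max(0, int(y) - half), min(img.shape[0], int(y) + half + 1)
        x0, x1 = max(0, int(x) - half), min(img.shape[1], int(x) + half + 1)
        patch = img[y0:y1, x0:x1].astype(float)
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        y, x = (ys * patch).sum() / total, (xs * patch).sum() / total
    return y, x

# Synthetic hot spot at (60.3, 40.7) on a noisy background
img = np.random.rand(128, 128) * 5
yy, xx = np.mgrid[0:128, 0:128]
img += 100 * np.exp(-((yy - 60.3) ** 2 + (xx - 40.7) ** 2) / 8.0)
print(gaca_centroid(img, start=(64, 48)))   # converges near (60.3, 40.7)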
Pose estimation based on the constraints of inner angles and areas of triangles
Author(s):
Rujin Zhao;
Qiheng Zhang;
Mingjun Wu;
Haorui Zuo
Abstract:
This paper presents an iterative pose estimation method on the basis of point correspondences, which are composed of
3D coordinates of feature points under object reference frame and their 2D projective coordinates under image reference
frame. The proposed method decomposes the pose estimation into two steps. Firstly, the 3D coordinates of the feature
points under camera reference frame are estimated iteratively by Gauss-Newton method. In this process, the variables are
defined as the lengths of the vectors from the focal point of the camera to the feature points; meanwhile, several novel constraints are constructed from a set of error functions built out of the inner angles and areas of the triangles formed by three arbitrary non-collinear feature points, because these quantities describe the shape of the object uniquely and completely.
Secondly, by using Gauss-Newton method again, the rotation angles (i.e., pitch, yaw, and roll) and 3D translation of the
object are estimated from the 3D coordinates of the feature points under camera reference frame obtained in the first
step. Experiments involving synthetic data as well as real data indicate that the proposed method is more accurate and no
less fast than the previous method.
The image pretreatment based on the FPGA inside digital CCD camera
Author(s):
Rui Tian;
Yan-ying Liu
Abstract:
For a space project, a digital CCD camera that can image clearly in a 1 lux light environment was required. The CCD sensor ICX285AL produced by SONY Co. Ltd is used in the camera, and the FPGA (Field Programmable Gate Array) chip XQR2V1000 is used as the timing generator and signal processor inside the camera. In the low-light environment, however, two kinds of random noise become apparent as the camera's variable gain is increased: dark current noise in the image background, and vertical transfer noise. A real-time method for eliminating this noise, based on the FPGA inside the CCD camera, is introduced, and the causes and characteristics of the random noise are analyzed. First, several schemes for eliminating the dark current noise were considered and then simulated in VC++ to compare their speed and effect; a Gaussian filter was chosen for its filtering effect. The vertical transfer noise has the characteristics that the noise points have fixed ordinates in the two-dimensional image coordinates and that their behavior is fixed, with grey values 16-20 lower than those of the surrounding pixels. According to these characteristics, a local median filter is used to remove the vertical noise. Finally, these algorithms were ported into the FPGA chip inside the CCD camera. A large number of experiments have proved that the pretreatment has good real-time performance and improves the signal-to-noise ratio of the digital CCD camera by 3-5 dB in the low-light environment.
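A software sketch of the two-step pretreatment described above (Python with numpy/scipy; the kernel sizes, sigma, and the column-offset detection threshold are illustrative assumptions standing in for the FPGA implementation):

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def suppress_dark_current(frame, sigma=1.0):
    """Gaussian smoothing to suppress the dark-current background noise."""
    return gaussian_filter(frame.astype(float), sigma=sigma)

def suppress_vertical_noise(frame, kernel_width=5):
    """Replace pixels in affected columns by a horizontal local median, exploiting
    the fact that the vertical noise stays in fixed columns and sits 16-20 grey
    levels below its neighbours."""
    med = median_filter(frame, size=(1, kernel_width))
    dark = (med - frame) > 10            # assumed detection threshold
    cleaned = frame.copy()
    cleaned[dark] = med[dark]
    return cleaned

# Synthetic frame with one dark column and mild background noise
frame = np.full((480, 640), 120.0) + np.random.normal(0, 2, (480, 640))
frame[:, 300] -= 18.0                    # simulated vertical transfer noise
out = suppress_dark_current(suppress_vertical_noise(frame))
print("column 300 restored to about", round(out[:, 300].mean(), 1))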
Study on measuring the motion parameters of a space motion component with two CCD cameras
Author(s):
Shiming Yang;
Yechu Hu
Abstract:
Sometimes it is needed to measure the motion parameters of a space motion component, including displacement, velocity
and acceleration of certain points or angular displacement, angular velocity and angular acceleration of a component.
Aiming at this problem, we proposed a new method using two color CCD cameras to measure the motion parameters of
a space motion component. A specific method was studied to measure the space coordinates of two points in a cylindrical
component with two color CCD cameras. Straight lines in different color were marked uniformly on the surface of a
cylindrical component. The lengths of the straight lines were the same, and their end points lay on two circles of the
cylindrical surface. Two color CCD cameras were placed according to a certain angle and calibrated. Time varying
images of the cylindrical component in space motion were taken by the two color CCD cameras simultaneously. The
pixel coordinates of the straight line end points were extracted from the images by a computer program for image
processing. Then their space coordinates were calculated. The space coordinates of the two circle centers were obtained
from the space coordinates of the straight line end points. According to the space coordinates of the two circle centers
and the time intervals between successive photographs, the motion parameters of the cylindrical component were calculated using numerical methods. The results from this system are basically consistent with the actual motion.
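The step of recovering a point's space coordinates from its pixel coordinates in the two calibrated cameras can be illustrated by standard linear (DLT) triangulation, sketched below; the projection matrices and pixel coordinates are assumed inputs, and the paper's calibration and image-processing details are not reproduced.

    # Given the calibrated 3x4 projection matrices P1, P2 of the two colour CCD
    # cameras and the pixel coordinates of the same line end point in both views,
    # recover the space coordinates by the linear (DLT) method.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]          # inhomogeneous 3D coordinates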
A method of camera calibration with adaptive thresholding
Author(s):
Lei Gao;
Shu-hua Yan;
Guo-chao Wang;
Chun-lei Zhou
Show Abstract
In order to calculate the parameters of the camera correctly, we must figure out the accurate coordinates of the certain
points in the image plane. Corners are the important features in the 2D images. Generally speaking, they are the points
that have high curvature and lie at the junction of image regions of different brightness, so corner detection has already been widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When applying the SUSAN corner detection algorithm, we propose an approach that retrieves the gray-difference threshold adaptively. This makes it possible to pick up the correct inner chessboard corners under all kinds of gray-level contrast. Experimental results show the method to be feasible.
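One plausible form of an adaptively thresholded SUSAN response is sketched below. The specific adaptive rule (a fraction of the local gray-level standard deviation) and the parameter values are illustrative assumptions, not necessarily the authors' formula.

    # SUSAN corner response with an adaptive brightness-difference threshold t
    # derived from the gray-level spread inside each circular window.
    import numpy as np

    def susan_response(img, radius=3, geometric_frac=0.5, k=0.6):
        img = img.astype(float)
        h, w = img.shape
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        mask = ys**2 + xs**2 <= radius**2          # circular USAN mask
        g = geometric_frac * mask.sum()            # geometric threshold
        resp = np.zeros_like(img)
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                win = img[y - radius:y + radius + 1, x - radius:x + radius + 1][mask]
                t = max(k * win.std(), 1.0)        # adaptive gray-difference threshold
                usan = np.sum(np.exp(-((win - img[y, x]) / t) ** 6))
                resp[y, x] = g - usan if usan < g else 0.0
        return resp                                # corners = local maxima of resp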
Real-time multi-core parallel image sharpness evaluation algorithm for high resolution CCD/CMOS based digital microscope autofocus imaging system
Author(s):
Lei Zhang;
Peng Liu;
Yu-ling Liu;
Fei-hong Yu
Show Abstract
Multi-core parallel computing is spreading in most industries and the imaging and machine vision industry is also taking
the advantage of this technology. The utilization of parallel computing will increase the throughputs and reduce response
times of the imaging system, especially for the high resolution CCD/CMOS based imaging system. Multi-core image
processing fully utilizes the CPU's parallel computing capability, as multiple cores share the processing task of an imaging system. The parallel computing automatically detects the number of CPUs or CPU cores and then automatically splits the image into the corresponding number of logical blocks, which are then passed to the processing threads separately. After all the processing threads finish, the results are combined. For high resolution
CCD/CMOS based digital microscope autofocus imaging system, the speed of measuring the sharpness of the current
collected image greatly affects the speed of the autofocus process. The real-time requirement of the system demands a low time cost for image sharpness evaluation, and multi-core parallel computing is applied in the algorithm to meet this requirement. The proposed algorithm is as follows: first, the currently collected image is divided into several logical blocks; second, a worker thread computes the sharpness of each block; finally, after all the worker threads finish, the block sharpness values are summed for comparison with those of the next collected image. In order to test the
efficiency of the algorithm, a dedicated high resolution CCD/CMOS based digital microscope autofocus imaging system
is designed and implemented and several image sharpness evaluation algorithms are used, as well as the self-adaptive
mountain-climbing search (SAMCS) method for the searching method. The numeric simulation and the experimental
results show that the proposed algorithm greatly improves the speed of the autofocus process.
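A minimal sketch of the block-parallel sharpness evaluation is given below; the frame is split into as many strips as there are cores, each worker computes a sharpness value, and the values are summed. The squared-gradient (Tenengrad-like) measure is only an example metric and is not claimed to be the one used in the paper.

    import numpy as np
    from multiprocessing import Pool, cpu_count

    def block_sharpness(block):
        gy, gx = np.gradient(block.astype(float))
        return float(np.sum(gx * gx + gy * gy))

    def frame_sharpness(img):
        n = cpu_count()
        strips = np.array_split(img, n, axis=0)    # one logical block per core
        with Pool(n) as pool:
            return sum(pool.map(block_sharpness, strips))

    # Autofocus compares frame_sharpness() of successive frames while the search
    # method (e.g. hill climbing) moves the focus motor. (On Windows, call
    # frame_sharpness() under an `if __name__ == "__main__":` guard.)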
Simulation study on angle measurement accuracy of star sensor
Author(s):
Hong-tao Wang;
Chang-zhou Luo;
Yu Wang;
Shu-fang Zhao;
Hui Cheng
Show Abstract
A theoretical and simulation study on the angle measurement accuracy of a star sensor has been carried out. According to the measurement model of the star sensor, the mathematical model of the pixel gray-level distribution of a star point, the method to determine the size of the star point, and the method to locate the centroid of the star point are discussed in detail. Simulation experiments on the angle measurement accuracy of the star sensor are carried out subsequently, and some useful conclusions are drawn from the simulations.
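The centroid-location step mentioned above is commonly implemented as a gray-weighted centre of gravity; the sketch below shows that estimator under the assumption of a background-subtracted star window, and is an illustration rather than the paper's exact method.

    import numpy as np

    def star_centroid(window, background=0.0):
        """Sub-pixel (x, y) centre of a star spot inside a small window."""
        w = np.clip(window.astype(float) - background, 0.0, None)
        total = w.sum()
        if total == 0:
            return None
        ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
        return (float((xs * w).sum() / total), float((ys * w).sum() / total))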
The research on automatic white balance for digital microscope
Author(s):
Xin-xin Yan;
Lei Zhang;
Ting-yu Zhao;
Fei-hong Yu
Show Abstract
Automatic white balance plays a key role in digital color imaging system based on CCD/CMOS sensors. The gray world
method and its variants are widely used for their simplicity. However, they will fail if the image is dominated by only
one or two colors. The iterative method, which extracts gray color points from the image for color temperature
estimation, performs well if there are enough gray color points. But it does not work in the case of serious color casts or
lack of gray color points. Thus, a new method is proposed combining the iterative method and the gray world method.
The iterative method is for fine adjustment, while the gray world method is for coarse adjustment. The characteristics of
the digital microscope are taken into account as well. There are three major contributions in the paper. First, brightness
constraint is considered during the gray color points detection. The detecting procedure is more precise as a result.
Second, each frame of the video stream is divided into n-by-n blocks so as to increase the immunity to the noise. Last,
the fine adjustment and the coarse adjustment are combined together. The 'Fine-Coarse-Fine' routine adjusts the image
properly even though there are not sufficient gray color points. Experiments on digital microscope indicate that the
proposed automatic white balance method is robust, effective and efficient.
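The coarse/fine combination can be illustrated as below: a gray-world step gives coarse channel gains, and a fine step re-estimates the gains from near-gray pixels selected with a brightness constraint. The selection thresholds and the single coarse-fine pass are illustrative assumptions (the paper uses a 'Fine-Coarse-Fine' routine on n-by-n blocks).

    import numpy as np

    def gray_world_gains(img):                       # img: HxWx3 float RGB in [0, 1]
        means = img.reshape(-1, 3).mean(axis=0)
        return means.mean() / means                  # per-channel gains

    def fine_gains(img, chroma_thr=0.08, y_lo=0.2, y_hi=0.9):
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        chroma = np.abs(r - g) + np.abs(g - b)
        gray = (chroma < chroma_thr) & (y > y_lo) & (y < y_hi)  # near-gray, mid-bright
        if gray.sum() < 100:                         # too few gray points: skip fine step
            return np.ones(3)
        means = img[gray].mean(axis=0)
        return means.mean() / means

    def white_balance(img):
        coarse = np.clip(img * gray_world_gains(img), 0, 1)
        return np.clip(coarse * fine_gains(coarse), 0, 1)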
Reliability improvement of low-cost camera for microsatellite
Author(s):
Jiankang Zhou;
Xinhua Chen;
Yuheng Chen;
Wang Zhou;
Weimin Shen
Show Abstract
Remote sensing is one of the most effective means for environmental monitoring, resource management, national security
and so on, but existing conventional satellites are too expensive for common users to afford. Microsatellites can reduce
their cost and optimize their image products for specific applications. Space camera is one of their important payloads.
The trade-off faced in a cost-driven camera design is how to reduce cost while still achieving the required reliability. This
paper introduces our path to develop reliable and low-cost space camera. The space camera has two main parts: optical
system and camera circuits. Commercial off-the-shelf (COTS) lenses have difficulty maintaining their imaging performance in the space environment. Our optical system adopts a catadioptric layout, so its temperature sensitivity is low. The material and structure of the camera lens can bear the vibration and shock of launch, and its mechanical reliability is verified through mechanical testing. A window made of synthetic fused silica is used to protect the lens and CCD sensor from space radiation. The completed optical system has a compact structure, wide temperature range, large relative aperture, and high imaging quality, and it passed the mechanical, thermal cycling, and vacuum thermal tests.
A modular concept is adopted for the space camera circuit, which is composed of seven modules: a power supply unit, microcontroller unit, waveform generator unit, CCD unit, CCD signal processor unit, LVDS unit, and current surge restraint unit. The modular concept and the use of plastic-encapsulated microcircuit (PEM) components simplify the design, improve maintainability, and minimize size, mass, and power consumption. Destructive physical analysis (DPA), screening, and board-level burn-in are used to select the PEMs that can replace hermetically sealed microcircuits (HSMs). Derating, redundancy, thermal dissipation, software error detection, and so on are adopted in the camera design phase. The reliability of the circuits reaches 0.98 over 0.5 year. Environmental
tests, including vacuum thermal test, thermal cycle test and radiation test, verify the component reliability in the space
environment.
A research on general assessment and analysis of high-speed photoelectronic imaging systems
Author(s):
Shiming Xiang
Show Abstract
On the basis of the theory of photon-limited signal-to-noise ratio and the Fourier spectrum, a new method is proposed for general assessment and analysis of high-speed photoelectronic imaging systems. Several expressions are given to
relate system's temporal-spatial MTF and resolutions with its characteristic image element size, exposure time,
target illumination, and main parameters of the optical objective and imaging device. Some problems are discussed
in the paper about system's image quality and its limiting factors, and explained visually by several theoretical
figures. It is shown that the predictions of these expressions are partially supported by experimental results. The theoretical results are very useful in the analysis and assessment of high-speed photoelectronic imaging systems.
Research on microfluidic chip and imaging system used to measure Ca2+ in cell
Author(s):
Wei Zhou;
Sixiang Zhang;
Dugang Ran;
Bao Liu
Show Abstract
A microfluidic fluorescence detection system for measuring the concentration of Ca2+ has been designed. On the microfluidic chip we designed, cell dyeing, cell culturing, reagent injection, and other operations can be completed. Monochromatic light from an optical monochromator, which can emit a continuous spectrum, was used to excite the fluorescent probe in the cell; the fluorescence signal and image were then sampled by a PMT and a CCD, and finally the data were processed and the Ca2+ content of the cell was determined using the fluorescence ratio method. Meanwhile, using the system, the dynamic curve of [Ca2+]i in the cell after stimulation by high K+ was obtained. The precise results verify that the system is stable and credible and meets the requirement of detecting [Ca2+]i in live cells in the field of physiology.
CMOS readout circuit design for infrared image sensors
Author(s):
Libin Yao
Show Abstract
Infrared imaging systems have been developed for more than 50 years, from early scanned imaging systems using a single-unit detector to imaging systems using focal plane detector arrays. For focal plane array detectors, the readout circuit is used to read out the photon detector signal. Charge-coupled devices were previously used for the readout of focal plane array detectors, and currently CMOS technology is used. In this paper, readout circuit design using CMOS
technology for infrared focal plane array detectors is reviewed. As an interface between the detector and the image signal
processing circuits, readout circuit is a critical component in the infrared imaging system. With the development of the
CMOS technology, the readout circuit is now moving into the CMOS technology. With the feature size scaling down,
the readout cell size is reduced, which enables us to integrate more complex circuits into the readout cell. From the system
point of view, different requirements and specifications for the CMOS readout circuit are analyzed and discussed.
Different readout circuit parameters such as injection efficiency, dynamic range, noise, detector biasing control, power
consumption, unit cell area, etc., are discussed in detail. Performance specifications of different readout cell structures
are summarized and compared. Based on the current mirroring integration readout cell, a fully differential readout cell is
proposed. The injection efficiency of this proposed readout cell is very close to unity and the detector biasing voltage is
close to zero. Moreover, the dynamic range of the proposed readout cell is increased and the rejection on interference is
improved because of the fully differential structure. All of these are achieved without a significant increase in power consumption.
Finally, a full digital readout circuit concept is introduced. By employing a current controlled oscillator, the photocurrent
is converted to frequency and integrated in digital domain and the final output is digital signal.
A wide dynamic range CMOS image sensor with operation mode change in security surveillance field
Author(s):
Xiao-chen Li;
Su-ying Yao;
Bin-qiao Li
Show Abstract
A high dynamic range is important for CMOS image sensors, especially in security surveillance applications. While real
scenes produce a wide range of brightness variations, vision systems use low dynamic range image detectors that
typically provide poor quality images, which greatly limit what vision can acquire today. Due to the various environment
of surveillance, dynamic range of image sensor is more important than other conditions. This paper demonstrates a new
method for significantly improving the dynamic range. Compared with other dynamic-range enhancement methods, such as multiple sampling or multiple exposure, the approach put forward in this paper preserves image quality while greatly improving the dynamic range under low-light conditions, and it is also easy to operate and implement.
The sensor is implemented in 0.18-μm CMOS technology and achieves 5mm×5mm chip size with 6μm×6μm pixels. The
results of the chip test are presented in the paper. The measured dynamic range of a scene exceeds 86 dB while the power consumption is only about 70 mW, which makes the sensor especially suitable for security surveillance applications.
Research on the new performance model for human eye
Author(s):
Kecong Ai;
Chen Wang;
Xudong Li
Show Abstract
Based on the photon noise fluctuation theory and the linear filter theory, the new performance model for human eye will
be established in this paper, which is denominated as "the photon detector and linear filter synthesis performance model"
or "wave-particle duality performance model". The threshold resolution angle and universal apparent distance detecting
equation for human eye will be studied and derived in the large-scale luminance level. The traditional limiting resolution
angle and apparent distance detecting equation for human eye will be improved and discussed in detail. The relationship
between the threshold detecting theory for human eye and the improved Johnson criteria will be set up and the new
number of the resolvable circles across the target and background for detection, recognition and identification will be put
forward. All of these are consistent with visual theory and the threshold characteristics of the human eye, as well as with much measured data.
The research on projective visual system of night vision goggles
Author(s):
Shun-long Zhao
Show Abstract
Driven by the need for lightweight night vision goggles with good performance, we apply the projective lens into night
vision goggles to act as visual system. A 40-deg FOV projection lens is provided. The useful diameter of the image
intensifier is 16 mm, and the resolutions at the center and edge are both 60 lp/mm. The projection lens has a 28 mm diameter and 20 g weight. The maximum distortion of the system is less than 0.15%. The MTF remains above 0.6 at 60 lp/mm across the FOV, so the lens meets the requirements of the visual system. In addition, two types of projective visual system for night vision goggles are presented: the direct-view projective visual system and the see-through projective visual system. The see-through projective visual system enables the user to observe the object directly with the eyes, without any further action, when the environment suddenly becomes bright. Finally we reach a conclusion: the projective system has advantages over the traditional eyepiece in night vision goggles. It is very useful for reducing the volume, lightening the load on the neck, and improving the imaging quality. It provides a new idea and concept for
visual system design in night vision goggles.
Color night vision method based on the correlation between natural color and dual band night image
Author(s):
Yi Zhang;
Lian-fa Bai;
Chuang Zhang;
Qian Chen;
Guo-hua Gu
Show Abstract
Color night vision technology can effectively improve the detection and identification probability. Current color night vision methods, based on gray-scale modulation fusion, spectral-domain fusion, special component fusion, and the well-known NRL and TNO methods, bring about serious color distortion, and observers become visually fatigued after long observation. Alexander Toet of TNO Human Factors presented a method that gives fused multiband night imagery a natural daytime color appearance, but it needs a true-color image of the scene being observed. In this paper
we put forward a color night vision method based on the correlation between natural color image and dual band
night image. Color display is attained through dual-band low-light-level images and their fusion image. An actual color image of a similar scene is needed to obtain the color night vision image: the actual color image is decomposed into three gray-scale images in the RGB color model, and the short-wave LLL image, long-wave LLL image, and their fusion image are compared with them through a gray-scale spatial correlation method, and the color space mapping scheme is confirmed by the correlation. The gray-scale LLL images and their fusion image are adjusted through the
variation of HSI color space coefficient, and the coefficient matrix is built. Color display coefficient matrix of LLL
night vision system is obtained by multiplying the above coefficient matrix and RGB color space mapping matrix.
Simulation experiments on dual-band color night vision of general scenes indicate that the color display effect is satisfactory. This method was tested on a dual-channel dual-spectrum LLL color night vision experimental
apparatus based on Texas Instruments digital video processing device DM642.
Wavelength calibration and spectral line bending determination of an imaging spectrometer
Author(s):
Yuheng Chen;
Yiqun Ji;
Jiankang Zhou;
Xinhua Chen;
XiaoXiao Wei;
Weimin Shen
Show Abstract
After alignment of an imaging spectrometer, the image of a specific wavelength should in theory strictly match the design value and be focused onto a certain column of the CCD focal plane. Since the imaging spectrometer is usually used in a space or airborne environment, the optical components and the detector can depart from their nominal positions, which causes the image to be focused onto a shifted position on the focal plane in the spectral direction.
Since the onboard readjustment of an inaccurate imaging spectrometer is usually unavailable, the
equivalent task should be performed by certain post processing method. In this paper, we present a
wavelength calibration method based on a fitting algorithm. Because of the linear diffraction characteristic of a grating, a first-order fit is adopted for the calibration. Using a standard mercury lamp as the light source during the calibration, the experimental imaging data collected from the whole CCD focal plane are used for the wavelength calibration to construct the actual wavelength distribution curve.
Because of spectral line bending (smiling) of the imaging spectrometer, the wavelength calibration
result of each row of the CCD plane differs so that a row-by-row calibration work should be carried out.
The total row-by-row calibration result not only provides a full-scale and high-precision calibration
effort, but also brings forward a smiling evaluation method for the whole imaging spectrometer.
Using a standard Hg-Cd lamp as both the illuminating light source and the object, the
spectroscopic image of the slit focusing onto the CCD focal plane of a calibrated imaging spectrometer
is collected. In certain rows of the image, the center position of every spectral line is recorded. Through
the comparison of the recorded positions in different rows, the smiling of the calibrated imaging spectrometer is worked out, and it agrees with the design value.
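The row-by-row first-order fit and the smile evaluation can be sketched as follows; the line-centre extraction is assumed done already, and the array layout is an illustrative convention rather than the paper's implementation.

    import numpy as np

    def calibrate_rows(line_centres, line_wavelengths):
        """line_centres: array (n_rows, n_lines) of Hg line centre columns per row.
        Returns per-row (slope, intercept) of the first-order fit lambda = a*col + b."""
        return np.array([np.polyfit(row, line_wavelengths, deg=1)
                         for row in line_centres])

    def smile_curve(line_centres, line_index, ref_row=0):
        """Shift of one spectral line's centre column relative to a reference row,
        i.e. the spectral line bending (smiling) along the slit direction."""
        return line_centres[:, line_index] - line_centres[ref_row, line_index]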
Autonomous navigation algorithm for precision landing based on computer vision
Author(s):
Yang Tian;
PingYuan Cui;
HuTao Cui
Show Abstract
In this paper we propose a vision algorithm for use on board a deep space exploration spacecraft to estimate the relative position and attitude during the descent phase. The algorithm is composed of a relative-motion recovery stage, which provides partial motion-state estimates based on feature tracking through the monocular image sequence, and a landmark-recognition stage, which provides the scale of the relative motion and the absolute position of the spacecraft. Results on synthetic images show that the proposed algorithm can estimate the state with satisfactory accuracy.
Single and few photon avalanche photodiode detection process study
Author(s):
Josef Blazej;
Ivan Prochazka
Show Abstract
We are presenting the results of the study of the Single Photon Avalanche Diode (SPAD) pulse response risetime and its
dependence on several key parameters. We were investigating the unique properties of K14 type SPAD with its high
delay uniformity of 200 μm active area and the correlation between the avalanche buildup time and the photon number
involved in the avalanche trigger. The detection chip was operated in a passive quenching circuit with active gating. This
setup enabled us to monitor the diode reverse current using an electrometer, a fast digitizing oscilloscope, and a custom-designed comparator circuit. The electrometer reading enabled us to estimate the photon number per detection event, independently of the avalanche process. The avalanche build-up was recorded on the oscilloscope and processed by a custom-designed waveform analysis package. The correlation of the avalanche build-up with the photon number, bias above breakdown,
photon absorption location, optical pulse length and photon energy was investigated in detail. The experimental results
are presented. The existing solid state photon counting detectors have been dedicated for picosecond resolution and
timing stability of single-photon events. However, the high timing stability is maintained only for the detection of individual single photons. If more than one photon is absorbed within the detector time resolution, the detection delay will be significantly affected. This fact restricts the application of solid-state photon counters to cases where single-photon signals can be guaranteed. For laser ranging purposes it is highly desirable to have a detector that detects both single-photon and multi-photon signals with picosecond stability. The SPAD based photon counter works in a purely
digital mode: a uniform output signal is generated once the photon is detected. If the input signal consists of several
photons, the first absorbed one triggers the avalanche. Obviously, for multiple photon signals, the detection delay will be
shorter in comparison to the single photon events. The detection delay dependence on the optical input signal strength is
called the "detector time walk". To enable the detector operation in both the single and multi photon signal regime with a
minimal time walk, a time-walk compensation technique was developed in the nineties.
Improved spectral radiance responsivity calibration of charge-coupled-device (CCD) imaging spectrometer with an internally illuminated integrating sphere
Author(s):
Shu-rong Wang;
Zhen-duo Zhang;
Fu-tian Li;
Xiao-hu Yang
Show Abstract
A technique for calibrating the spectral radiance responsivity of the CCD imaging spectrometer with an internally
illuminated integrating sphere is described. The spectral radiance of the integrating sphere is obtained in two steps.
Firstly, a Spectralon panel diffuser and an ultraviolet spectrometer are combined into a new spectral radiometer which
transfers the spectral irradiance of a NIST standard of spectral irradiance to that of the receiving aperture of the
integrating sphere. Subsequently, the spectral radiance of the integrating sphere is derived from heat transfer theory for
Lambertian radiators. The overall uncertainty of determining the spectral radiance of the integrating sphere is ±2.3%. On
the basis of known spectral radiance, the radiance calibration of an available Czerny-Turner imaging spectrometer in our
laboratory has been completed in 200-400nm with an uncertainty of about ±2.7%.
CSSAR airglow gravity wave imager and its preliminary observation
Author(s):
Cui Tu;
Xiong Hu;
Shangyong Guo;
Zhaoai Yan;
Yongqiang Cheng
Show Abstract
The CSSAR airglow imager is developed to investigate the atmospheric gravity waves near the
mesopause region. The CSSAR imager consists of a fisheye with a focal length of 10.5 mm and an F
number of 2.8, three collimating lenses, two filters, an imaging lens, and a scientific CCD camera with a 1024×1024 array and 24×24 μm pixels. Two filters for measuring the OH Meinel band
(750-850nm) airglow layer peaking at 87 km and the O2 (865nm) airglow layer peaking at 90 km are
used in the imager. A preliminary observation of the all-sky OH Meinel band airglow was carried out from 02:00 to 06:00 on Jan. 5th, 2009, at Hancun (39.4°N, 116.6°E), Langfang, Hebei, which was the first time gravity waves were imaged in China. A case study shows that one quasi-monochromatic gravity
wave has the horizontal wavelength of ~19 km, observed horizontal phase velocity of ~18 m/s,
horizontal propagating azimuth direction of ~269° and observed period of ~18 min.
Activation experiment of exponential-doping NEA GaAs photocathodes
Author(s):
Jijun Zou;
Gangyong Lin;
Xiong Wei;
Lin Feng;
Zhi Yang;
Benkang Chang
Show Abstract
An exponential-doping GaAs photocathode was designed and activated; the achieved integral sensitivity of the exponential-doping cathode is 1956 μA/lm, which is much higher than that of a gradient-doping cathode with an identical epitaxial-layer thickness. According to the quantum-efficiency theory of the exponential-doping cathode, we analyzed the reason for the increase in integral sensitivity, which is mainly attributed to the constant induced electric field: photoelectrons driven by this field move toward the cathode surface by both diffusion and drift. This increases the average distance that photoelectrons can travel and reduces the influence of the back-interface recombination velocity on photoemission.
The performance test and the application of CCD
Author(s):
Shu-rong Wang;
Xiao-hu Yang
Show Abstract
The e2v technologies' CCD sensor CCD57-10, which will be used as a detector in the Limb UV Imaging
Spectrometer, is tested first. Then the spectral responsivity calibration of the whole Imaging Spectrometer is described briefly. The error analysis and correction of the CCD parameter tests, carried out under good repeatability and stability, indicate that the properties of the CCD are suitable for the Limb UV
Imaging Spectrometer. The spectral responsivity calibration and the theoretical calculation of the whole
Imaging Spectrometer are within an acceptable error level.
The implementation of CMOS sensors within a real time digital mammography intelligent imaging system: The I-ImaS System
Author(s):
C. Esbrand;
G. Royle;
J. Griffiths;
R. Speller
Show Abstract
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty
first century. The concept of digital imaging introduced during the 1970s has since paved the way for established
imaging techniques where digital mammography, phase contrast imaging and CT imaging are just a few examples. This
paper presents a prototype intelligent digital mammography system designed and developed by a European consortium.
The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip
data processing, enabling the acts of data processing and image acquisition to be achieved simultaneously;
consequently, statistical analysis of tissue is achievable in real-time for the purpose of x-ray beam modulation via a
feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel ×
40 pixel CMOS MAPS sensing devices with a 32μm pixel size, each individually coupled to a 100μm thick thallium
doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information
extracted from the breast tissue itself. Conventional images were experimentally acquired where the statistical analysis of
the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using the statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
Study on real-time registration in dual spectrum low level light night vision technique
Author(s):
Lian-fa Bai;
Yi Zhang;
Chuang Zhang;
Qian Chen;
Guo-hua Gu
Show Abstract
In low-light-level (LLL) color night vision technology, dual-spectrum images carrying their respective spectral information are acquired, and the target identification probability can be effectively improved through dual-spectrum image fusion. Image registration is one of the key technologies in this process. Current dual-spectrum image registration methods mainly include the dual-imaging-channel common-optical-axis scheme and the image characteristic pixel searching scheme. In the common-optical-axis scheme, additional prismatic optical components must be used, and a large amount of radiant energy is wasted. In the characteristic pixel searching scheme, the complicated algorithm makes real-time implementation difficult. In this paper, the structural features of the dual-channel dual-spectrum LLL color night vision system and the characteristics of the dual-spectrum images are studied, the two-dimensional histogram of the dual-spectrum gray-level co-occurrence matrix is analysed, and a real-time image registration method including electronic digital shifting, pixel extension and pixel extraction is put forward. By analysing the spatial gray-scale correlation of the fusion image, the registration precision is quantitatively expressed. Simulation experiments indicate that this algorithm is fast and accurate for our dual-channel dual-spectrum image registration. The method was implemented on a dual-spectrum LLL color night vision experimental apparatus based on the Texas Instruments digital video processing device DM642.
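The digital-shifting idea can be illustrated with the sketch below, where the integer row/column offset between the two channels is estimated by FFT cross-correlation and then removed by shifting one image; the correlation-based estimator is only an illustrative stand-in, and the paper's co-occurrence-histogram analysis and pixel extension/extraction steps are not reproduced.

    import numpy as np

    def estimate_shift(ref, mov):
        """Offset (dy, dx) that, applied to `mov` with digital_shift, aligns it to `ref`."""
        F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
        corr = np.abs(np.fft.ifft2(F))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = ref.shape
        if dy > h // 2: dy -= h           # wrap to signed offsets
        if dx > w // 2: dx -= w
        return dy, dx

    def digital_shift(img, dy, dx):
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)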
Improved entrance optics design for ground-based solar spectral ultraviolet irradiance measurements and system absolute calibration
Author(s):
Caihong Dai;
Jialin Yu;
Bo Huang;
Yan Tian
Show Abstract
The angular response of entrance optics is an important parameter for solar spectral UV measurements, and ideal cosine
entrance optics is required to measure ground-based global solar spectral UV irradiance including direct and diffuse
radiation over a solid angle of 2π sr. Early international comparisons have shown that deviations from the ideal cosine
response lead to uncertainties in solar measurements of more than 10%.
A special spectroradiometer used for solar spectral UV measurements was developed at National Institute of Metrology
(NIM). Based on a polytetrafluoroethylene (PTFE) integrating sphere, seven kinds of cosine entrance systems were
designed and compared. A special cosine measurement apparatus was developed to measure the angular response of the
entrance optics. Experimental results show that, the integral cosine error <f2> is 1.41% for a novel combination entrance
optics, which is composed of a PTFE integrating sphere, a spherical ground quartz diffuser and a special correction ring,
and the cosine error is 0.08% for an incidence angle of θ=±30°, 0.84% at θ=±45°, -0.47% at θ=±60°, -0.74% at θ=±70°,
and 5.47% at θ=±80°.
With the new non-plane entrance optics, the angular response of the solar UV spectroradiometer is improved evidently,
but on the other side, the system's absolute calibration becomes more difficult owing to the curved geometry of the new
diffuser. The calibration source is a 1000 W tungsten halogen lamp, but the measurement object is the global solar radiation, so a small error in the calibration distance will lead to a large measurement error in the solar spectral UV irradiance. When the calibration distance is 500 mm, for an actual diffuser with a spherical radius of 32.5 mm and a spherical height of 20 mm, the calibration error can be up to 3%~10% if the starting point is taken at the apex or the bottom of the hemispherical diffuser. This paper investigates which point inside the three-dimensional entrance optics should be used as the starting point of the calibration distance. According to
the information of the geometrical shape of the diffuser, the different irradiance value on the spherical surface and the
angular response of the receiver, mathematical methods are adopted to calculate the optical reference plane of the
spherical entrance system. Furthermore, an experimental method was used to verify the feasibility of the theoretic
formula.
Relative state parameters from images: testing system, algorithms, and experiment results
Author(s):
Xiaoping Du;
Jiguang Zhao;
Dexian Zeng
Show Abstract
Taking the measurement of relative state parameters between spacecraft as its research object, this paper puts forward an optical method based on monocular computer vision and target features, designs the main structure of the measurement system, and builds a ground experiment system. Several key technical problems are also solved effectively, such as the design of the feature targets, the probe, the interface circuit, and the measurement method. Three kinds of high-precision relative state measurement methods between spacecraft are brought forward, based on single-frame images and target features, Iterated Extended Kalman Filtering (IEKF) technology, and the reconstruction of high-resolution images. Through simulations and ground experiments, we found that the combination of the three methods not only ensures high-precision relative state measurement over different interaction distances, but also greatly increases the stability and reliability of the measurement; at a distance of 50 meters, the stable measurement accuracy reaches the centimeter order of magnitude.
An accurate method for alignment of polarization-maintaining fiber with CCD micro-imaging system
Author(s):
Yan-jie Li;
Rui Wang;
Chun-xi Zhang;
Yuan-hong Yang;
De-wei Yang
Show Abstract
The polarization-maintaining (PM) optical fiber connector is widely used in various kinds of optical fiber sensors and communication equipment. The alignment of the PM fiber polarization axis with the orientation key axis is one of the most
important factors determining the extinction ratio of the connector. In order to ensure highly accurate alignment of these two axes, a CCD micro-imaging system is employed to capture the cross-section image of a Panda PM fiber, and the edge points of the stress rods are extracted by a sub-pixel edge detection algorithm based on the Hessian matrix. Consequently, the automatic detection of the polarization axis and the accurate calculation of the angle θ between the two axes are realized.
Experimental results indicate that the method, combining the CCD micro-imaging system and an accurate calculation of the angle θ, effectively improves the alignment precision, which can reach ±0.5°. The work lays the foundation for
realizing the auto-manufacture of PM optical fiber connector.
Ghost-free reconstruction of multi-layer scenes using light-field method
Author(s):
Zhihua Dong;
Dan Zeng;
Xi Han;
Zhijiang Zhang
Show Abstract
To avoid the ghosting effect in light field rendering caused by an insufficient sampling rate, we extend the application of the criterion for ghost-free reconstruction from single constant-depth scenes to more complex scenes that contain multiple depth layers. It is shown that the optimal constant depth and the maximum camera interval can be
determined by the scene geometry and the camera resolution. The relationship between them is formulated in this paper.
Also, we use an experiment to verify the criterion presented here. The mean square difference (MSD) between the reconstructed views and the standard views is calculated to show the reconstruction quality at different camera intervals.
The quantitative data are basically in accord with the subjective observation in this experiment and the results
sufficiently support the theoretical analysis.
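The reconstruction-quality metric quoted above is simply the mean square difference between a reconstructed view and the corresponding standard view, e.g.:

    import numpy as np

    def msd(reconstructed, standard):
        d = reconstructed.astype(float) - standard.astype(float)
        return float(np.mean(d * d))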
A sun tracking and back-sunlight target detecting system
Author(s):
Fan-sheng Chen;
Sheng-li Sun
Show Abstract
A back-sunlight target is a target which comes out in the direction of the sun or which flies across the sun. When
the sun enters the field of view (FOV) of a conventional detecting system, the sun's intense radiation will saturate the
focal plane detector or may even damage it, so there exist solar-blind regions that conventional systems cannot observe. Therefore, tracking the sun and detecting back-sunlight targets is very important in military defense, because the possibility of back-sunlight military targets, such as battle planes or missiles "coming out of the sun", cannot be ruled out. A solution for sun tracking and back-sunlight target detection is proposed here, and its experimental system is designed
and tested. This system can be a useful supplement for conventional detecting systems.
The system consists of a sun capturer, a back-sunlight target detector, a two-dimension tracker, and a data
acquisition module. In order to detect and identify back-sunlight targets, the back-sunlight target detector has to stare at
the sun. This process comes in two steps. First the sun capturer searches and coarsely tracks the sun and then the
back-sunlight target detector finely tracks the sun and images it, while two dimensional motor gimbals are used as
mechanical tracker of the system. When any back-sunlight target is found, the data acquisition module will provide the
external processors with the information of the sun and the back-sunlight target for further processing.
The system's tracking accuracy of the sun is 0.006°, and the detecting resolution of back-sunlight targets is
0.0004°.
Research and simulation of star capture based on star sensor
Author(s):
Jing Hu;
Bo Yang;
Chenhao Wu
Show Abstract
Starlight refraction navigation is considered to be one of the most promising methods for satellite autonomous navigation. This paper mainly studies the capture of navigation stars in starlight refraction navigation. By studying the geometric relation between the measurement star and the satellite, a measurement star orientation method is deduced that can be used to simulate actual orbital navigation. With this method the measurement star can be obtained at any satellite position. The measurement information can then be modeled, through which both the laboratory digital simulation of starlight refraction navigation and that of integrated navigation can be performed. Meanwhile, the orientation of the navigation measurement starlight is confirmed in combination with the star catalog. Starlight refraction navigation is then used to calculate the satellite positions and velocities based on the Unscented Kalman Filter. Finally, integrated navigation combining starlight refraction and starlight elevation, based on an information fusion method, is used to handle the case in which the refraction measurement star cannot be captured. Compared with merely using starlight refraction navigation, the precision of integrated navigation is effectively improved.
Design of monolithic visible light / IR CCD focal plane array
Author(s):
Li Li;
Ping Xiong
Show Abstract
Recently, more and more attention has been paid on the multispectral imaging for its excellent resolution capability in
the complex environment. For this reason, various sensors for multispectral imaging have been developed. Generally,
these sensors contain a visible light Focal Plane Array for visible light imaging and an IR Focal Plane Array for IR
imaging with the CCD or CMOS readout circuit to output the signal. In this paper, a novel monolithic visible light
(400nm-800nm) / IR (1μm-5μm) charge coupled device (CCD) focal plane array sensor was designed. This sensor was
fabricated using a 2-micron design rule and a double-level polysilicon, four-phase transfer structure. The special feature of the device is that it integrates the visible-light-sensitive cells and IR-sensitive cells on a single chip, with PN-junction photodiodes for visible light signal detection and PtSi Schottky-barrier diodes for IR signal detection. The numbers of PN-junction photodiodes and of PtSi Schottky-barrier diodes, arrayed in an offset interleaved pattern, were each 512(H)×256(V), so the whole number of pixels was 512(H)×512(V). The device was operated in interlaced scanning mode, with the visible light signal and the IR signal output in the odd field and the even field, respectively. This sensor was an
interline-transfer CCD with an on-chip amplifier which was used to read out the video at 12.5 MHz to provide standard
30 frames per second format. The test results show that this sensor succeeds in visible light / IR multispectral imaging when operated at a temperature of 77 K.
The visible light / IR CCD focal plane array has excellent application prospects because of its wide spectral response. It can replace the two separate systems for visible light imaging and IR imaging, thereby decreasing the complexity of the camera system and reducing its cost.
Study of the precision of upper atmospheric wind field measurement
Author(s):
Yuan-he Tang;
Lu He;
Hai-yang Gao;
Lin Qin;
Rui-xia Zhang;
Ci Zhu
Show Abstract
The passive optical method for observing the Earth's upper atmospheric wind field by satellite remote sensing is to measure parameters including atmospheric wind velocity, temperature, pressure, and the volume emission rate of airglow (aurora). WINDII, made by Canada and France and flown on NASA's UARS in 1991, was the first imaging interferometer for upper atmospheric wind measurement. The wind-speed precision of WINDII is 10 m/s and its temperature precision is 10 K. The second wind measurement instrument, SWIFT, based on the same principle as WINDII, is to be launched in 2011. SWIFT's wind-speed precision is 3 m/s, and its temperature precision is 2 K. With the development of photoelectronic technology and CCDs, the detection precision of the wind field is continuously enhanced. In this paper, the theory of the detection precision of wind speed and temperature is analyzed first, and the factors linking higher wind-field precision to the CCD detector parameters are identified; the precision equation is then deduced. The wind-speed and temperature precision expressions include the optical path difference (OPD), phase, aurora wavelength, visibility, CCD responsivity, signal-to-noise ratio, field of view (FOV), etc. A precision of 1 m/s in wind speed and 1 K in temperature requires a fixed OPD of 24.28 cm with the O+ 732.0 nm aurora line. This research provides the theoretical basis for advancing the detection precision of the upper atmospheric wind field.
High frame rate PtSi CCD infrared sensors
Author(s):
Xue-tao Weng;
Zhun-lie Tang
Show Abstract
By using a 2 μm design rule and silicon technology, a PtSi 128×128 high-frame-rate (500 frames/second) progressive-scan CCD device has been designed, fabricated, and applied. The CCD device has a vertical 3-phase, horizontal 4-phase, 3-level polysilicon, interline-transfer structure with an optimized optical cavity configuration. The pixel number is 128×128 and the pixel size is 30×30 μm2. The fill factor is about 27%. The guard ring is designed as 3 μm, and the spacing between the guard ring and the platinum silicide zone is 1 μm. The channel stop is 2 μm. The collection diode is 4.5×5 μm2. The barrier between the vertical CCD and the platinum silicide is 3×5 μm2. The width of the vertical CCD is 9 μm. A vertical super notch is not adopted, but an additional phosphorus ion implant is used in order to enhance the charge capacity. The spacing between the vertical 3-phase polysilicon electrodes is 1 μm. The vertical clock frequency is 62.5 kHz. The width of the horizontal CCD is 40 μm. The notch is 3 μm and is optional. The spacing between the 4-phase polysilicon electrodes is also 1 μm. The 4-phase polysilicon is fabricated from 3 polysilicon layers because of the vertical 3-phase structure. The horizontal clock frequency is 10 MHz. The output amplifier is a two-stage source-follower amplifier with LDD. The bandwidth of the output amplifier is designed as 40 MHz and its sensitivity is 4 μV/e. The 90 nm gate oxide and 70 nm nitride layers are fabricated first; then LOCOS, boron diffusion, buried channel, notch (optional), VCCD, barrier, channel stop, Poly1, Poly2, Poly3, collection diode, guard ring, source and drain, platinum silicide, contact holes, and aluminium are fabricated in order. Parameter tests were performed at 500 frames/second: the NETD is 0.6 K, the dynamic range is 64 dB, the corrected non-uniformity is 0.7%, and the number of defects is zero.
Optimization designed frame transfer area array sensor with vertical antiblooming structure by the CAD tools
Author(s):
Yu-Bing Lv;
Chang-Lin Liu;
Fei Long;
Yu Zhen;
Ling Wang
Show Abstract
The frame transfer Area Array Sensor with vertical antiblooming structure requires that all performance characteristics
such as large charge capacity, high charge transfer efficiency, low read noise and low antiblooming voltage be optimized
in a single manufacturable CCD (charge-coupled device). There is a common tendency to optimize one performance characteristic at the expense of others. With the goal of optimizing all of the above performance characteristics, a frame transfer area array sensor with vertical antiblooming structure is optimized using CAD (computer aided design) tools and fabricated, with a twenty-six micron square pixel size and 516 (H) × 1028 (V) active pixels (the channel stop is 4 μm and the photosensitive area is 22×22 μm2). First, the modeling of the frame transfer area array sensor with vertical antiblooming structure is briefly introduced from the top down, and the sensor is simulated by the CAD tools combined with the process and device models. The essential design and calculations of the CCD with antiblooming function are discussed in detail, and the process, layout, and device parameters of this frame transfer area array sensor (including the impurity concentration and layer thickness of the P-well and the buried channel, and the design of the output amplifier) are optimized according to the above simulation. Finally, the device is fabricated and the performance characteristics (e.g. charge capacity, charge transfer efficiency, read noise, and antiblooming voltage) are tested and compared with the simulation. To demonstrate the process used to optimize this frame transfer area array sensor with vertical antiblooming structure, this paper contains test data for this kind of CCD device, which is designed through CAD optimization and then fabricated. The test data show that the approach of using CAD tools to optimize the design of a frame transfer area array sensor with vertical antiblooming structure is correct.
Operational life prediction on gating image intensifier
Author(s):
Yu-hui Dong;
Zhi-guo Shen;
Zhong-li Li
Show Abstract
Operational life is one of the important parameters for evaluating second and super-second generation image intensifiers. It can be used not only to monitor the manufacturing process on the production line, so that the technology of photocathode processing, MCP degassing and MCP production can be adjusted promptly, but also to eliminate as early as possible image intensifiers that carry a hidden operational-life risk. Recently, gating image intensifiers have come into wide use, and a method of estimating the operational life of a gating image intensifier according to its practical operating mode and working conditions urgently needs to be established. The least-squares method for analyzing the operational-life test data on the production line is introduced in this paper. The data can now be analyzed with the convenient statistical analysis functions of Excel: using the worksheet functions, the chart wizard, and the data analysis tool to carry out the least-squares calculation, spreadsheets are set up to perform the complex data calculations. Based on these, formulas to monitor the process parameters are derived, and the conclusion is drawn that the operational life is related only to the decay slope of the exponential fit of the photocathode sensitivity. The decay slope of the exponential fit of photocathode sensitivity and the percentage decrease of the fitted photocathode sensitivity can be used to evaluate rapidly whether the operational life is acceptable. Mathematical models for operational-life prediction of image intensifiers and gating image intensifiers are established, respectively, based on the acceptable values of the percentage decrease of the fitted photocathode sensitivity and the expected signal-to-noise ratio. Equations predicting the operational life as a function of duty cycle and input light level for the gating image intensifier are derived, and the relationship between them is also discussed. A theoretical foundation is thus provided, so that the user can select a proper gating image intensifier type simply by considering the practical operating conditions and can make the best design and application plan. The paper gives guidance on analyzing data on the production line and on using gating image intensifiers.
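The least-squares exponential fit of photocathode sensitivity can be sketched as a log-linear fit whose slope is then extrapolated to an end-of-life threshold; the 50% end-of-life fraction below is an illustrative value, not the paper's acceptance criterion.

    import numpy as np

    def fit_decay(t_hours, sensitivity):
        """Fit S(t) = S0 * exp(b * t) by a linear least-squares fit of ln(S) vs t."""
        b, ln_s0 = np.polyfit(t_hours, np.log(sensitivity), 1)
        return b, np.exp(ln_s0)                 # decay slope (1/h) and fitted S0

    def predicted_life(b, end_fraction=0.5):
        """Hours until the fitted sensitivity drops to end_fraction of S0."""
        return np.log(end_fraction) / b if b < 0 else float('inf')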
Combining laser scan and photogrammetry for 3D object modeling using a single digital camera
Author(s):
Hanwei Xiong;
Hong Zhang;
Xiangwei Zhang
Show Abstract
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by
reverse engineering through some 3D scanning methods. Laser scan and photogrammetry are two main methods to be
used. For laser scan, a video camera and a laser source are necessary, and for photogrammetry, a digital still camera with
high resolution pixels is indispensable. In some 3D modeling tasks, two methods are often integrated to get satisfactory
results. Although many research works have been done on how to combine the results of the two methods, no work has
been reported to design an integrated device at low cost. In this paper, a new 3D scan system combining laser scan and
photogrammetry using a single consumer digital camera is proposed. Nowadays there are many consumer digital
cameras, such as Canon EOS 5D Mark II, they usually have features of more than 10M pixels still photo recording and
full 1080p HD movie recording, so a integrated scan system can be designed using such a camera. A square plate glued
with coded marks is used to place the 3d objects, and two straight wood rulers also glued with coded marks can be laid
on the plate freely. In the photogrammetry module, the coded marks on the plate make up a world coordinate and can be
used as control network to calibrate the camera, and the planes of two rulers can also be determined. The feature points
of the object and the rough volume representation from the silhouettes can be obtained in this module. In the laser scan
module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to
determine the position of the laser. The laser scan results in a dense point cloud, which can be aligned automatically using the calibrated camera parameters. The final complete digital model is obtained through a new patch-wise energy functional method by fusing the feature points, the rough volume, and the dense point cloud. The design details are introduced, a toy rooster is used to test the new method, and the test results prove the validity of the new method.
An image fusion method based region segmentation and complex wavelets
Author(s):
Junju Zhang;
Yihui Yuan;
Benkang Chang;
Yiyong Han;
Lei Liu;
Yafeng Qiu
Show Abstract
A fusion algorithm for infrared and visible-light images based on region segmentation and the dual-tree complex wavelet transform (DTCWT) is presented. Before image segmentation, morphological top-hat filtering is first performed on the IR and visible images, respectively, and the details of the bright areas are eliminated. Morphological bottom-hat filtering is then performed on the two kinds of images, respectively, and the details of the dark areas are eliminated. The bottom-hat filtered image is subtracted from the top-hat filtered image to obtain the enhanced image. Then a threshold method is used
to segment the enhanced images. After image segmentation, the DTCWT coefficients from different regions are merged
separately. Finally the fused image is obtained by performing inverse DTCWT. The evaluation results show the validity
of the presented algorithm.
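The pre-segmentation enhancement described above can be sketched with standard morphological operators; the structuring-element size and the use of Otsu's threshold are illustrative choices rather than the paper's exact parameters.

    # Top-hat minus bottom-hat (black-hat) enhancement followed by thresholding,
    # applied to each source image before the region-wise DTCWT fusion.
    import cv2
    import numpy as np

    def enhance_and_segment(img_u8, ksize=15):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
        tophat = cv2.morphologyEx(img_u8, cv2.MORPH_TOPHAT, se)
        bothat = cv2.morphologyEx(img_u8, cv2.MORPH_BLACKHAT, se)
        enhanced = cv2.subtract(tophat, bothat)          # top-hat minus bottom-hat
        _, mask = cv2.threshold(enhanced, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return enhanced, mask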
Digital Foucault tester for the measurement of parabolic wave form
Author(s):
Xiao-peng Wang;
Ri-hong Zhu;
Lei Wang
Show Abstract
The digital Foucault tester for quantitatively estimating the wave form of aspheric surfaces is based on high-precision knife-edge position determination and image data processing methods. In this paper, we report a digital Foucault tester for the measurement of parabolic surfaces. The movement of the knife-edge is controlled by a PC, and the shadow patterns are captured by a CCD in real time and then fed back to the computer. A new data processing method, which has the advantages of simple computation and high precision, is given in the paper. The method offers a reliable basis for the digital Foucault tester.
Research of noise reduction and nonuniformity correction for CMOS image sensor
Author(s):
Hong Fan;
Feng Cui;
Wu-jun Xu;
Yi-zhi Wu;
Run-he Qiu
Show Abstract
Charge-coupled device (CCD) technology has long been the technology of choice in high-quality image sensing. CCDs use a special manufacturing process that creates the ability to transport charge across the chip without distortion. This process leads to very high-quality sensors in terms of fidelity and light sensitivity. The drawbacks are high power consumption and the lack of on-chip processing capability. With reduced-feature-size CMOS technology, it
becomes possible to add on-chip control and processing units in order to obtain a fully integrated camera on a single
chip. For these reasons, it has gained potential for use in many applications. CMOS image sensors(CIS) use traditional
manufacturing processes to create the chip -- the same processes used to make most microprocessors. Based on this
difference, CMOS sensors traditionally have lower quality, lower resolution, and higher noise. To obtain high-quality images, analysis of the types and causes of noise, and of noise reduction for CMOS image sensors, is very important. Noise control techniques for the various noise sources are discussed in this paper. Methods of noise reduction for linear CMOS
imagers and logarithmic CMOS imagers are different. An important factor limiting the performance of sensor arrays is
the nonuniform response of detectors. Fixed pattern noise caused by the nonuniform response of the sensors gives the
uncorrected images a white-noise-degraded appearance. Nonuniformity correction techniques are also developed and
implemented to perform the necessary calibration for sensing applications in this paper. Noise reduction and
nonuniformity correction are effective ways to obtain high-quality images from CMOS image sensors.
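A standard two-point nonuniformity correction, deriving per-pixel gain and offset from an averaged dark frame and an averaged uniformly illuminated frame, is sketched below as an illustration of the calibration idea; it is not necessarily the exact correction implemented in the paper.

    import numpy as np

    def nuc_coefficients(dark, flat):
        dark = dark.astype(float)
        flat = flat.astype(float)
        gain = (flat.mean() - dark.mean()) / np.maximum(flat - dark, 1e-6)
        offset = dark.mean() - gain * dark
        return gain, offset

    def correct(raw, gain, offset):
        """Apply the per-pixel gain/offset to flatten the fixed pattern noise."""
        return gain * raw.astype(float) + offset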
Solar blind UV and visible bispectral imaging detection system
Author(s):
Li-gang Wu;
Wei Huang;
Tie-feng Xu;
Rui-qin Tan;
Yan Yang;
Ming-liang Tu
Show Abstract
Corona discharge of high voltage lines and equipment has always been an operational and maintenance problem for
electric power utilities. In addition to causing noise and radio interference problems, these luminous discharges, which
result from the ionization of air around an electrode, may also indicate the presence of faulty, damaged or contaminated
high voltage components. Corona can lead to premature aging and failure of some components. Therefore, it is
necessary to develop a system to identify corona discharge sources and pinpoint the offending component so that it may
be replaced. The corona emission in the solar-blind ultraviolet (SBUV) region (240 - 280 nm) is much weaker but the
solar background is nil. Accordingly, a beam-split scheme, including a catadioptric UV telescope, a solar-blind UV
filter, an intensified-CCD (ICCD), and a visible camera, is applied in this system. The catadioptric UV telescope is
especially designed in this paper. Twain reflecting spherical surfaces, composed the majority of the UV telescope, are
combined with a pair of positive and negative lenses in the front, and a correction lens in the back-end. To be
emphasized, all the elements' surfaces of the catadioptric telescope are spherical, so that it can be manufactured
conveniently. In addition, it has a large aperture of 68 mm, with a focus length of 180mm, so as to improve the optical
resolution, enhance the power of entrance pupil and elevate the sensitivity of the imaging system. A folding mirror is
positioned in front of the telescope's central obscuration so that the UV and visible cameras have a common axis. In
addition, the bispectral image fusion is based on digital signal processor TMS320DM642 of TI company, where the
DM642 device has three configurable video port peripherals (VP0, VP1, and VP2), and each video port consists of two
channels - A and B with a 5120-byte capture/display buffer that is splittable between the two channels. Therefore,
DM642 has enough video ports to serve two video-in channels from the UV ICCD and the visible CCD, and one video-out channel for bispectral fusion. Finally, a pixel-based image fusion algorithm is used in experiments, and a clear bispectral fused image is presented in this paper.
Multi-curve spectrum representation of facial movements and expressions
Author(s):
Li Pei;
Zhijiang Zhang;
Zhixiang Chen;
Dan Zeng
Show Abstract
This paper presents a multi-curve spectrum representation of facial movements and expressions. Based on the 3DMCF (3D muscle-controlled facial) model, facial movements and expressions are controlled by 21 virtual muscles, so they can be described by a group of time-varying curves of normalized muscle contraction, called the multi-curve spectrum. The structure and basic characteristics of the multi-curve spectrum are introduced. The proposed method performs well, requires only a small amount of data, and is easy to apply. It can also be used to transfer facial animation between different faces.
The research of ultraviolet detection by using CCD
Author(s):
Yi-fan Zheng;
Xiaoxuan Xu;
Bin Wang;
Zhe Qin;
Jun-mei Li
Show Abstract
Lumogen Yellow S 0790 is a commercial azomethine-based pigment used to enhance charge-coupled devices (CCDs) for detecting ultraviolet radiation. It acts as a wavelength up-shifter: short-wavelength ultraviolet (UV) light absorbed by the material is rapidly re-emitted at longer wavelengths in the visible spectrum, improving the spectral response of CCD detectors. In this work we investigate the differences between the crystallized sample and the re-crystallized sample in crystal structure, morphology and optical properties, including the laser-induced fluorescence excitation spectrum, emission spectrum and Raman spectra. The results show that the re-crystallized Lumogen sample has a crystalline structure better suited to application as an ultraviolet sensitizer. By placing re-crystallized, as-deposited Lumogen films on a glass substrate in front of the CCD detector, ultraviolet light can be detected and quantified more accurately.
View field blemishes of ICCD
Author(s):
Shulin Liu;
Guangxu Deng;
Yanhong Li;
Jingsheng Pan;
Zhihong Wang;
Guilin Zeng;
Jiannin Sun
Show Abstract
It is very important to investigate blemishes in intensified CCDs (ICCDs) because of the difference in visual effect, for a human observer, between directly viewed image intensifiers and ICCD cameras. The causes of view-field blemishes in low-light-level image intensifiers are analyzed, and the relevant standards for blemish size, quantity and distribution in different zones are introduced. Considering the blemishes originating from the fiber-optic taper and the low-light-level image intensifier, and referring to the relevant standards of overseas ICCD manufacturers, technical requirements for ICCD view-field blemishes that suit Chinese conditions and can be accepted by customers are provided. The content of this paper will provide a scientific basis for ICCD standardization work.
Applications of stroboscopic imaging technique in three-dimensional feature detection of micro flexible aerodynamic shape
Author(s):
Yanan Yu;
Xiangjun Wang;
Hong Chen
Show Abstract
Applications of the stroboscopic imaging technique in different areas are reviewed. Several major three-dimensional morphology imaging detection methods for micro flexible adaptive aerodynamic shapes, which are based on scaled models in the experimental process, are discussed for work both in China and abroad. The stroboscopic imaging detection technique and the testing device, which can be used to obtain deformation information of a flexible aerodynamic shape, are introduced in detail. A flexible aerodynamic shape detection method based on the combination of the stroboscopic imaging technique and optical flow analysis is proposed to validate the experimental model of an adaptive aerodynamic shape. This technique can compensate for the inadequacy of numerical analysis and provide more aeroelastic characteristics for further analysis. Moreover, this measurement method offers advantages such as non-contact operation, real-time measurement and visualization.
Research on infrared multispectral imaging detection technology
Author(s):
Hong Xu;
Xiangjun Wang;
Yanan Yu
Show Abstract
Research on LWIR multispectral imaging detection technology carried out in the key national defense laboratory in
Tianjin University is introduced in this paper. Firstly, a kind of infrared multispectral image simulation method based on
multispectral or hyperspectral image data in the VIS/NIR band is presented. A combined strategy of unsupervised and supervised classification methods is put forward to efficiently realize automatic matching and labeling of pixels. Then, using this infrared image simulation technology, infrared multispectral simulation images highly similar to real natural environments can be generated, which is valuable for the development of LWIR multispectral spectrometers as well as for research on multispectral detection algorithms. Secondly, the co-image-plane imaging detection technique is presented, along with an attempt to build a small LWIR multispectral imaging detector based on this concept. A four-band prototype has been achieved in the VIS/NIR band, verifying the feasibility and validity of this technique.
2048 pixel front illuminated linear CCD for spectroscopy
Author(s):
Chao-min Wang;
Chang-ju Liu;
Yu Zheng;
Ping Li
Show Abstract
The charge-coupled device (CCD) used in spectroscopy requires a good response over the 190-1000 nm band. The common measures for obtaining UV response are phosphor coatings or back illumination. Phosphors are wavelength converters that convert short-wavelength light into the visible spectral region; this approach requires an additional process step that not only raises cost and reduces yield, but also reduces image resolution. Back illumination thins the CCD to 15 um, thinner than ordinary paper, by mechanical polishing and chemical etching after the front-side CCD processes are completed; it needs special instruments and a complex process, and the yield is also low. Both phosphor coatings and back illumination therefore have disadvantages such as low spatial resolution, complex processing, low yield and high cost.
A CCD of traditional structure has no response at wavelengths below 350 nm, because the penetration depth of UV light in Si is shallow: at 300 nm it is only 6.5 nm, and the shorter the wavelength, the shallower the penetration. The junction depth of a normal CCD process is above 200 nm; shallow junctions can be realized by molecular beam epitaxy, but the instrument is expensive and the cost is high. The photosensitive area of a normal-structure CCD adopts a vertical P-N junction in which light is incident on the N region; the N region cannot be completely depleted because of physical constraints, so photons cannot reach the depletion region directly.
On the basis of a thorough analysis of the traditional UV CCD, a horizontal P-N junction structure for the photosensitive area is put forward, whose depletion region reaches the surface so that photons fall on the depletion region directly, which allows effective absorption of UV light and collection of photoelectrons. Because Si3N4 strongly absorbs UV light at wavelengths below 248 nm, the Si3N4 passivation over the photosensitive area is removed. The improved 2048-element linear CCD achieves an excellent wide spectral response: the maximum quantum efficiency reaches 65% in the ultraviolet band and the average quantum efficiency reaches 40% over the 190-1000 nm band.
CCD digital radiography system
Author(s):
Yi Wang D.D.S.;
Xi Kang;
Yuanjing Li;
Jianping Cheng;
Yafei Hou;
Haiwei Han
Show Abstract
An amorphous silicon flat-panel detector is the mainstream detector used in digital radiography (DR) systems. In recent years, DR based on a scintillation screen coupled to a CCD has become more popular in hospitals. Compared with traditional amorphous silicon DR, CCD-DR has better spatial resolution, suffers little radiation damage, is inexpensive and can be operated easily. In this paper, a CCD-based DR system is developed. We describe the construction of the system, its performance and the experimental results.
TDICCD video data sampling technique in the space remote sensing camera
Author(s):
Qiaolin Huang
Show Abstract
The paper analyzes the mechanism that generates reset noise when the CCD video signal is read out. It also describes a sampling technique for the CCD output video signal, Correlated Double Sampling (CDS), which is based on the mutual cancellation of correlated noise. The paper introduces the operating principle of the CDS technique and its filtering effect on the output noise of the CCD (which includes the reset noise of the CCD, the cross-talk noise coupled between the horizontal clock drive and the power-supply ground wire, the white noise of the output amplifier and the 1/f noise). The paper gives a CDS circuit that is applied in practice. Finally, it verifies that the output S/N of the CCD signal can reach 50 dB.
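To make the cancellation idea concrete, here is a minimal Python sketch of correlated double sampling, assuming the reset level is sampled before the video level of the same pixel; the numerical values are purely illustrative.

    import numpy as np

    def correlated_double_sampling(reset_level, video_level):
        # The kTC reset noise is present identically in both samples of the same
        # pixel readout, so the difference cancels it and leaves the photo signal.
        return reset_level - video_level

    # Toy illustration: the shared reset noise cancels exactly.
    rng = np.random.default_rng(0)
    ktc = rng.normal(0.0, 5.0, 10000)        # reset (kTC) noise, common to both samples
    reset_level = 500.0 + ktc                # sampled just after reset
    video_level = reset_level - 100.0        # level after charge transfer (signal = 100)
    signal = correlated_double_sampling(reset_level, video_level)
    print(signal.mean(), signal.std())       # ~100.0 with ~0 spread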
A micro-spectroscopy system to measure UV-VIS spectra of single hydrocarbon inclusions
Author(s):
Ailing Yang;
Weiwei Ren;
Jinliang Zhang;
Mingming Tang
Show Abstract
To measure the UV-VIS spectra of single hydrocarbon inclusions, a new micro-spectroscopy system based on an inverted microscope and a reflective objective was established in this paper. The system includes a reflective objective, a micro-lens, a fiber cable, a 3D adaptor, a spectrometer and a common inverted microscope. The 3D adaptor connects to the microscope without requiring any modification of the microscope, and the reflective objective can easily be made confocal with the objective of the microscope. Through the fiber cable, the micro-lens and the reflective objective, external monochromatic light from the spectrometer was used to excite the inclusions. Using this system, we measured VIS spectra of inclusions excited by the internal mercury lamp of the microscope. We also measured spectra of single inclusions excited by the external monochromatic light source; in this case, the influence of the fluorescence of the grain around the inclusion was subtracted from the total spectrum. At the same time, images of the inclusions were recorded by a CCD camera. Because this system is low-cost, stable and highly sensitive, it is promising for measuring the fluorescence of micro-sized samples.
Automatic recognition of landslides based on change detection
Author(s):
Song Li;
Houqiang Hua
Show Abstract
After the Wenchuan earthquake, landslide disasters have become a common concern, and remote sensing is becoming more and more important in landslide monitoring. At present, the interpretation and recognition of landslides from remote sensing data is mostly done by visual interpretation. Automatic recognition of landslides is a new, difficult but significant task. To seek a more effective method of recognizing landslides automatically, this project analyzes the current methods for landslide recognition and their applicability to the practice of landslide monitoring. A landslide is a phenomenon and disaster, triggered by natural or artificial causes, in which part of a slope composed of rock, soil and other fragmental materials slides along a weak structural surface under gravity. Consequently, according to the geoscientific principle of landslides, there is an obvious change in the sliding region between the pre-landslide and post-landslide states, and this change can be captured in remote sensing imagery. We therefore develop a new approach to identify landslides that uses change detection based on texture analysis of multi-temporal imagery. After preprocessing the remote sensing data, including image enhancement and filtering, smoothing and cropping, image mosaicking, registration and merging, geometric correction and radiometric calibration, this paper performs change detection based on texture characteristics in multi-temporal images to recognize landslides automatically. After texture-based change detection on multi-temporal remote sensing images, if there is no change, the detected result is relatively homogeneous and shows a single clustering characteristic; if there is partial change, the detected result shows two or more clustering centers; and if there is complete change, the detected result appears disordered and unsystematic. Finally, this paper takes some landslides at Parry Lake as a case study to demonstrate the effectiveness of the new method for landslide identification, using SPOT-5 (Oct 10, 2003) and ALOS-AVNIR2 (Sep 19, 2007) data as the pre-landslide and post-landslide data sources, respectively. The results show that the change-detection-based method can extract landslide information in arid areas and in other areas where there is no obvious spectral difference between the landslide mass and the background; it is, of course, even more applicable where there is an obvious spectral difference between the landslide region and the background.
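As an illustration of texture-based change detection on co-registered multi-temporal images, the sketch below uses local standard deviation as a simple texture measure and a global threshold on the texture difference; the specific texture descriptor and threshold rule are assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(img, size=9):
        """Simple texture measure: local standard deviation in a size x size window."""
        x = img.astype(np.float64)
        mean = uniform_filter(x, size)
        mean_sq = uniform_filter(x * x, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def texture_change_mask(pre, post, size=9, k=2.0):
        """Flag pixels whose texture changed by more than k standard deviations of
        the global texture-difference statistics (threshold choice is an assumption).
        pre and post must be co-registered single-band images."""
        d = np.abs(local_std(post, size) - local_std(pre, size))
        return d > (d.mean() + k * d.std())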
The study of atmospheric effect in the image chain
Author(s):
Huai-chuan Qi;
Qiaolin Huang
Show Abstract
Briefly, a remote sensing imaging system is a synthetic system which takes a scene as its input and produces an image as its output. From a conceptual point of view, a chain of events, including both the remote sensing instrument and other factors such as atmospheric influence, leads to the final output. Only by focusing on the full chain can unjustified effort and time be avoided when seeking improvement. The radiance received by the remote sensing system can be divided into three parts when the atmosphere is taken into account: L = Lp + Ld + Lc, where Lp is the path radiance, Ld is the radiance coming directly from the target, and Lc is the cross radiance caused by the adjacency effect. In this paper we try to explain the effect of the adjacency effect and path radiance on the image quality of space imaging systems, using a simulation method that combines the Monte-Carlo method and MODTRAN 4.0.
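The decomposition L = Lp + Ld + Lc can be sketched numerically as follows; modelling the cross (adjacency) term as a Gaussian-blurred contribution of the surroundings is a simplifying assumption of this sketch, whereas the paper couples a Monte-Carlo method with MODTRAN 4.0.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sensor_radiance(ground_radiance, transmittance, path_radiance,
                        adjacency_sigma=5.0, adjacency_weight=0.1):
        """At-sensor radiance L = Lp + Ld + Lc for a ground-radiance image.

        Ld: target radiance transmitted directly to the sensor.
        Lc: adjacency (cross) term, modelled here as a Gaussian-blurred
        contribution of the surroundings (the kernel and weight are assumptions)."""
        Ld = transmittance * ground_radiance
        environment = gaussian_filter(ground_radiance, adjacency_sigma)
        Lc = adjacency_weight * transmittance * environment
        return path_radiance + Ld + Lc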
Calibration algorithm in robotic remanufacturing measurement system based on 3D laser scanner
Author(s):
C. D. Shen;
S. Zhu;
C. Li;
Y. Y. Liang
Show Abstract
In the robotic remanufacturing measurement system, the 3D laser scanner is carried by the robot and the scanned object is mounted on a turntable. This paper deals with the algorithm for calibrating the relationship between the scanner coordinate frame and the robot Tool0 frame, and furthermore for locating the central axis of the turntable. The Tool0 data, which describe its relationship to the robot base frame, can be obtained directly. The coordinate transformation problems are thus effectively solved, and the measurement data, expressed relative to the robot base frame, can be stored consistently. This paper explains in detail the basic theory of the algorithm, the computing method and the analysis of the resulting data. The calibration algorithm is derived in an orthogonal coordinate system.
A new auto-focusing algorithm for digital camera
Author(s):
Xin Wang
Show Abstract
At present there are still many shortcomings in the auto-focusing techniques used in digital imaging systems because of the complexity of imaging objects and conditions. In particular, the problem of how to improve auto-focusing speed is far from solved because of the contradiction between focusing speed and focusing precision. In this paper, a novel auto-focusing algorithm is proposed. We present a new measure of image focus based on the lifting wavelet transform, which possesses a stable, highly sensitive characteristic curve well suited to small-range accurate focusing. Compared with traditional focus measures, the proposed evaluation function has the fastest computation and the most robust performance over different scenes while maintaining the highest sensitivity. Long-range coarse focusing is performed with the variance function because of its large auto-focusing range and good stability. The new measure is then used for fine focusing in a narrow range with Gaussian interpolation. We experimentally illustrate its performance on simulated as well as real data and demonstrate that the algorithm can focus quickly with high focusing accuracy.
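A minimal Python sketch of the two-stage idea, coarse focusing with a variance measure and fine focusing with a wavelet-detail measure, is given below; the single Haar lifting step stands in for the paper's lifting wavelet transform and is an assumption, as is the omission of the Gaussian interpolation of the peak position.

    import numpy as np

    def variance_focus(img):
        """Coarse focus measure: global gray-level variance (large range, stable)."""
        return float(np.var(img.astype(np.float64)))

    def haar_lifting_detail_energy(img):
        """Fine focus measure: energy of the detail coefficients from one Haar
        lifting step along the rows (most sensitive near best focus)."""
        x = img.astype(np.float64)
        even, odd = x[:, 0::2], x[:, 1::2]
        n = min(even.shape[1], odd.shape[1])
        detail = odd[:, :n] - even[:, :n]      # lifting predict step
        return float(np.mean(detail * detail))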
CMOS image sensor and its development trend
Author(s):
Yue Song;
Xiao yan Wang
Show Abstract
With the development of VLSI, the CMOS image sensor has shown a strong development trend. This text briefly introduces the development history and working principle of the CMOS image sensor, compares CCD and CMOS image sensors, analyzes the main factors and key technologies of the CMOS image sensor, and forecasts the future of CMOS image sensors.
The electronic subsystem design of the interference imaging spectrometer on CE-1 satellite
Author(s):
Yue-Hong Qiu;
De-sheng Wen;
Bao-chang Zhao
Show Abstract
The Interference Imaging Spectrometer (IIS) is one of the payloads of the Chang'e-1 (CE-1) lunar satellite and is used to acquire spectral information and global distribution information about lunar minerals. In this paper, some
information about the electronic subsystem design of the Interference Imaging Spectrometer (IIS) is given. First, the
technical specifications and requirements, architecture, function and operating modes of the electronic subsystem are
described briefly. Secondly, the focal plane assembly (FPA), including the CCD, CCD driving circuits, CCD buffering circuits, CCD biasing circuits and low-noise preamplifier circuits, is introduced. Thirdly, the video processing and control assembly, including the correlated double sampling (CDS) circuit, the programmable gain amplifier circuit, the active filter circuit, the A/D conversion circuit, the digital video signal buffers, the timing module and the output interface circuit, is treated. Fourthly, the timing description and logical architecture are given. Finally, some results are presented. After careful design, thorough analysis and simulation, and sufficient debugging and testing, the design has satisfied the technical requirements and achieved the goal of one year of on-orbit operation.
Inner Mongolia grassland snow disaster mapping based on GIS and MODIS
Author(s):
Hongye Yang;
Huishu Hou;
Xiumei Wang
Show Abstract
Snow disaster is a serious natural disaster in Inner Mongolia Autonomous Region. It results in a great number of
livestock deaths and human casualties. Monitoring snow-cover evolution has significant social and economic meaning for snow disaster forecasting, disaster rescue, and post-disaster reconstruction and recovery. MODIS sensors have many significant advantages for monitoring snow cover; furthermore, monitoring snow cover from MODIS data over Inner Mongolia is still a largely unexplored research area in China. A swath of MODIS L1B 500 m resolution data is chosen as the study data. After a series of processing steps is performed on the MODIS data, a snow-cover map of Inner Mongolia is obtained. Finally, the accuracy of the snow-cover map is verified against officially released data. The validation result shows that the snow-cover map is accurate, so the application of MODIS data to monitoring large-scale snow cover is very effective.
Aircraft pose measurement and error correction based on image sequences
Author(s):
Limei Yang
Show Abstract
Image sequences are introduced and applied to aircraft pose measurement. Combining generalized point photogrammetry theory with contour matching, an image matching algorithm based on the Hausdorff distance and generalized points is put forward, and an error model of generalized point theory is established. First, according to the OpenGL imaging mechanism, a photoelectric theodolite simulation imaging system is built to simulate aircraft images at different flight poses; the actual image is regarded as the standard image, and an initial aircraft pose is calculated from the known exterior orientation parameters to drive the simulation system to create an image whose contour is consistent with that of the standard image. Then the simulated and standard edges are extracted and their difference is measured, and the Hausdorff distance method is used to dynamically adjust the edge of the simulated image to realize fast matching. The generalized point theory error model is used to correct the aircraft pose and improve the pose accuracy obtained from image sequences. Simulation experiments on a rocket pose of (45°, 30°, 50°) at 3 km and 5 km were carried out; the results indicate that the method is feasible and effective, the pose accuracy after correction is better than before, and the measurement accuracy of the aircraft pose is better than 0.1°.
Real-time matching algorithm of navigation image based on corner detection
Author(s):
Tao Zhang;
Li-mei Yang
Show Abstract
To meet the real-time and high-accuracy requirements of image-matching-aided navigation, a fast and effective remote sensing image matching algorithm based on corner detection is put forward. Combining rough and fine matching, the wavelet transform is used to obtain the low-frequency component, which compresses the image, decreases the computational load and increases the matching speed. The Harris corner detection algorithm is used to detect corners in the remote sensing image and the template image, and the energy of every corner is calculated. The SSDA algorithm is used to match the remote sensing image and the template image coarsely; when the coarse matching yields more than one candidate, the sum of the absolute values of the energy differences of the characteristic points is computed to realize fine matching and accurately locate the position of the template image in the remote sensing image. Simulation experiments show that the matching of a 1018x1530 remote sensing image with a 150x90 template image can be completed within 2.392 seconds; the algorithm is robust and effective, and real-time image navigation can be achieved.
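The coarse-matching stage can be sketched as follows in Python; the 2x2 block averaging stands in for the wavelet low-frequency band, and the SSDA early-abandoning rule is shown in its textbook form rather than the paper's exact implementation.

    import numpy as np

    def downsample2(img):
        """Cheap stand-in for the wavelet low-frequency band: 2x2 block averaging."""
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        x = img[:h, :w].astype(np.float64)
        return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

    def ssda_match(image, template, abandon_threshold):
        """SSDA coarse matching: accumulate absolute differences row by row and
        abandon a candidate position as soon as the partial sum exceeds the
        threshold or the best error found so far."""
        image = image.astype(np.float64)
        template = template.astype(np.float64)
        ih, iw = image.shape
        th, tw = template.shape
        best_pos, best_err = None, np.inf
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                err = 0.0
                for i in range(th):
                    err += np.abs(image[r + i, c:c + tw] - template[i]).sum()
                    if err > abandon_threshold or err >= best_err:
                        break
                else:
                    best_err, best_pos = err, (r, c)
        return best_pos, best_err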
A real-time enhanced technique for CMOS image sensor based on dynamic area threshold
Author(s):
Zaifeng Shi;
Chao Shi;
Xiaodong Xie;
Suying Yao;
Qingjie Cao
Show Abstract
A real-time enhancement technique is presented which accelerates processing and produces a more distinct image using dynamic area thresholds; it can compensate for flaws in images captured by a CMOS sensor. According to the statistics of the image, when a pixel's value is larger than the upper threshold, a dynamic increasing algorithm adjusts the contrast and brightness through local-area pixel enhancement; conversely, when a pixel's value is below the lower threshold, a dynamic decreasing algorithm is applied to darken these pixels more strongly than the pixels above the upper threshold are adjusted. The edges of each dynamic area are analyzed and processed so that they become smooth, gradual edges within the area. The algorithm is implemented in the FPGA of a Xilinx XUPV4 board together with an MCU that selects the region of interest, so that image quality can be increased efficiently.
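A rough Python sketch of the dynamic area-threshold idea is given below; the local-mean window, the gain factors and the exact way bright and dark pixels are adjusted are illustrative assumptions rather than the authors' FPGA implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def dynamic_area_enhance(img, lower, upper, gain_up=1.3, gain_down=0.8, size=7):
        """Piecewise enhancement driven by local-area statistics and two thresholds."""
        x = img.astype(np.float64)
        local_mean = uniform_filter(x, size)
        out = x.copy()
        bright = x > upper
        dark = x < lower
        # Stretch bright pixels away from their local mean (contrast/brightness boost).
        out[bright] = local_mean[bright] + gain_up * (x[bright] - local_mean[bright])
        # Scale down pixels below the lower threshold to darken them.
        out[dark] = gain_down * x[dark]
        return np.clip(out, 0, 255).astype(np.uint8)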
Computer modeling and simulation of light field camera and digital refocusing with attenuating mask
Author(s):
Xiu-bao Zhang;
Yan Yuan;
Zhi-liang Zhou;
Cheng-ming Sun
Show Abstract
This paper presents a light field camera with a patterned mask inserted between the lens and the sensor of a conventional camera. The mask attenuates the incident light rays in a manner analogous to the way a baseband signal is modulated onto a carrier signal for remote transmission to reduce energy loss in radio and telecommunications. Linearly independent weighted sums of light rays are recorded by the sensor, and the coded combinations of rays can be decoded according to modulation-demodulation theory. By rearranging the tiles of the 2D Fourier transform of the sensor image into 4D planes, the light field can be reconstructed. Although the light field image is captured by the camera in a single photographic exposure, sharp images focused at different depths can be obtained by computation. This paper studies the principle, modeling and simulation of the camera according to the ray transmission path. Two defocused light field images are simulated using assumed camera parameters, and digitally refocused images are reconstructed from each.
High speed CMOS active pixel sensors for particle imaging
Author(s):
Yan Li;
Yavuz Degerli;
Zhen Ji;
Jiang Lai
Show Abstract
CMOS active pixel sensor technology has proved to be one of the potential candidates for charged particle imaging in future high-energy physics experiments. Two prototypes of CMOS active pixel sensors aimed at high-speed pixel detection with on-chip data sparsification are presented in this work. While sharing the same architecture, the two chips were fabricated in different CMOS processes in order to evaluate the influence of epitaxial layer thickness on charge detection performance. Thanks to offset auto-compensation at both the pixel and column levels, the noise is well controlled in both chips. Binary outputs are produced by column-level auto-zeroed discriminators. Using a 55Fe radioactive source, the charge detection capability is obtained, and the factors influencing the charged particle detection efficiency are discussed.
Research on detecting heterogeneous fibre from cotton based on linear CCD camera
Author(s):
Xian-bin Zhang;
Bing Cao;
Xin-peng Zhang;
Wei Shi
Show Abstract
Heterogeneous fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby affect the economic benefit and market competitiveness of the enterprise. Detecting and eliminating heterogeneous fibres is therefore particularly important for improving cotton processing technology, raising the quality of cotton textiles and reducing production cost, and this technology has favorable market value and development prospects. An optical detection system has obtained widespread application. In this system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to grayscale differences, and if a heterogeneous fibre is present the computer sends a command to drive a gas nozzle that removes it. In this paper we adopt a monochrome LED array as the new detection light source; its flicker, luminous-intensity stability, lumen depreciation and useful life are all superior to those of fluorescent lamps. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select the appropriate light-source wavelength, and finally adopt a violet LED array as the new detection light source. The overall hardware structure and software design are introduced in this paper.
Smart APS pixel with full frame self-storage and motion detection capabilities
Author(s):
Shi-bin Zhao;
Su-ying Yao;
Jiang-tao Xu;
Hong-le Li
Show Abstract
A novel APS pixel with full-frame self-storage and motion detection capabilities is proposed in this paper. By adding an independent exposure transistor, the FD node of the traditional 4T pixel can be used to temporarily store the integrated charge between two successive frames. Before the reset operation of the current frame, the charge of the previous frame is read out and compared with the subsequent signal of the current frame to locate moving pixels in the motion detection comparators, which are shared by columns and have a lower load capacitance and higher gain than previously published counterparts. The pixel and its peripheral circuits are designed and simulated in the SMIC 0.18μm MM/RF 1P6M process. The simulation indicates that an image sensor using the proposed design can achieve motion detection without an obvious reduction in fill factor, with higher detection accuracy and lower power consumption, making it well suited to surveillance and remote video communication networks.
A global shutter CMOS image sensor with wide dynamic range pixel
Author(s):
Jiang-tao Xu;
Zhi-xun Yang;
Shi-bin Zhao;
Su-ying Yao
Show Abstract
A novel five-transistor global shutter CMOS active pixel with ultra-high dynamic range is presented in this paper. A global shutter control transistor is added to the traditional four-transistor pixel. The five-transistor pixel image sensor works in global shutter mode to capture high-speed moving objects, with double sampling to eliminate fixed pattern noise. With the global shutter control transistor switched off, the image sensor can revert to the four-transistor rolling shutter mode to capture stationary objects, with correlated double sampling to eliminate both fixed pattern noise and random noise. A digitally controlled stepped reset-gate voltage technique, requiring no additional components, is adopted to increase the dynamic range by compressing the charge integration characteristic curve and to implement anti-blooming by discharging excess carriers. Simulation results show that the image sensor can work in global shutter mode and that the dynamic range is increased by approximately 20 dB compared with a typical CMOS image sensor.
Applications of low light level imaging technology in the engineering of defect detection and repair of underwater pier
Author(s):
Cheng-dong Zheng;
Xi-zhan Liu;
Yan-sheng Weng
Show Abstract
The frameworks of underwater piers frequently encounter damage from wind, waves, corrosion and scouring; in particular, the 5.12 Wenchuan earthquake brought serious hidden dangers to communications and transport. Under low-light-level water conditions, correctly inspecting these flaws is a most difficult inspection problem. This text therefore presents an effective inspection and imaging technology together with its technical data and application scope. The technology comprises an imaging system and a data acquisition system, and it can dynamically image the shape of the underwater pier frameworks.
Paraxial imaging electron optics and its spatial-temporal aberrations for a bi-electrode concentric spherical system with electrostatic focusing
Author(s):
Li-wei Zhou;
Hui Gong;
Zhi-quan Zhang;
Yi-fei Zhang
Show Abstract
As is known, the paraxial solutions play an important role in studying an electron optical imaging system and its aberrations, but an investigation of a bi-electrode concentric spherical system with electrostatic focusing directly from the paraxial electron ray equation and the paraxial electron motion equation has not been done before. In this paper, we use the paraxial equations to study the spatial-temporal trajectories and their aberrations for a bi-electrode concentric spherical system with electrostatic focusing. Starting from the paraxial electron ray equation and the paraxial electron motion equation, the paraxial spatial-temporal trajectory of an electron emitted from the cathode is solved for a bi-electrode concentric spherical system with electrostatic focusing. The paraxial static and dynamic imaging electron optics, as well as the paraxial spatial-temporal aberrations of this system, are then discussed, and the regularity of the paraxial imaging optical properties is given. The paraxial spatial aberrations, as well as the paraxial temporal aberrations of different orders, are defined and deduced; they are classified by the orders of (εz/Φac)^(1/2) and (εr/Φac)^(1/2). The same conclusions about paraxial spatial and temporal aberrations as obtained in our previous work are reached.
Adaptive defect correction and noise suppression module in the CIS image processing system
Author(s):
Su Wang;
Su-ying Yao;
Olivier Faurie;
Zai-feng Shi
Show Abstract
The image quality of a CIS chip is inferior to that of a CCD mainly because of random noise, which is very difficult to suppress in analog circuits, especially in submicron processes. The popular high-quality random noise suppression algorithms are too complex to be used in a color CIS chip. Besides random noise, defective pixels are difficult to avoid, especially in high-resolution CIS systems. Therefore, a novel spatially adaptive noise suppression algorithm that also incorporates defect pixel correction is presented. The spatially adaptive defect correction and noise suppression algorithm consists of a defect pixel detector, a defect value corrector, an image detail estimator, and two configurable Gaussian masks to handle image regions of different complexity. For fast and effective performance, a cross mask and 25-in sorting methods are applied in the defect detector and detail estimator circuits. With good PSNR and visual quality results and much lower hardware cost, the proposed method can be practically used in a CIS chip or other camera systems.
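The cross-mask defect handling can be illustrated with the following Python sketch, in which a pixel is flagged when it differs from all four cross neighbours by more than a threshold and is replaced by their median; the threshold value and the wrap-around border handling are assumptions, not the paper's circuit.

    import numpy as np

    def correct_defects(img, threshold=40):
        """Flag and replace isolated defective pixels using the 4-connected cross mask.
        Borders wrap around (np.roll) for brevity in this sketch."""
        x = img.astype(np.float64)
        up, down = np.roll(x, 1, 0), np.roll(x, -1, 0)
        left, right = np.roll(x, 1, 1), np.roll(x, -1, 1)
        neighbours = np.stack([up, down, left, right])
        # Defective: the pixel deviates from every cross neighbour by more than threshold.
        defective = np.all(np.abs(x - neighbours) > threshold, axis=0)
        out = x.copy()
        out[defective] = np.median(neighbours, axis=0)[defective]
        return out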
Computer simulation for digital refocusing imager based on light field photography
Author(s):
Zhi-liang Zhou;
Yan Yuan;
Bin Xiangli
Show Abstract
This paper presents a computer simulation of light field photography in which the light field is recorded by inserting a microlens array into a conventional camera. A computational model is configured to emulate how the 4D light field is distributed inside the camera and then captured on a 2D sensor. Based on the recorded light field, refocused images are calculated by spatial integration at different depths. In the Fourier domain, a refocused photograph can be obtained by taking an appropriate 2D slice of the 4D light field; based on this theorem, a second refocusing algorithm in the Fourier domain is explored in this paper. After reconstructing a focal stack of images at all depths in the scene, a photograph with extended depth of field can be computed by wavelet-based image fusion methods.
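The spatial-integration refocusing step can be sketched as a shift-and-add over sub-aperture images; the integer-pixel shifts and the (1 - 1/alpha) shift factor below follow the standard light field refocusing formulation and are not taken from the paper itself.

    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add refocusing of a 4D light field L[u, v, s, t].

        Each sub-aperture image is shifted in proportion to its angular coordinate
        (u, v) and the results are averaged; alpha selects the virtual focal plane
        (alpha = 1 reproduces the original focus). Integer shifts for simplicity."""
        nu, nv, ns, nt = light_field.shape
        cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
        out = np.zeros((ns, nt), dtype=np.float64)
        for u in range(nu):
            for v in range(nv):
                ds = int(round((1.0 - 1.0 / alpha) * (u - cu)))
                dt = int(round((1.0 - 1.0 / alpha) * (v - cv)))
                out += np.roll(light_field[u, v], (ds, dt), axis=(0, 1))
        return out / (nu * nv)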
Research on liquid identification based on CCD imaging system
Author(s):
Haixiu Chen;
Huiqiang Tang;
Jingfeng Huang
Show Abstract
Owing to differences in physical and chemical properties, liquid drops of different liquids grow dissimilarly under the same conditions, and this difference in drop growth is clearly embodied in the corresponding drop contour features. A liquid identification method based on a CCD imaging system is therefore introduced in detail in this paper. Through experiments on different liquids, the region area, boundary perimeter, drop length, drop plumpness, drop circularity and the profile edge of the liquid drop image are extracted and analyzed, and with this information liquid identification can be realized. The sample experiments show that the region area and drop plumpness are more effective than the other parameters for liquid discrimination, whereas the differences in boundary perimeter and drop length are very small for some liquids, making them relatively weak features of the liquid drops.
An intended motion estimation method based on unmanned aerial vehicle aviation video image
Author(s):
Hongying Zhao;
Tong Lu
Show Abstract
In UAV aerial video image stabilization, whether the image sequence interval used to estimate the intended motion is appropriate affects the stabilization quality, which may otherwise be over-smoothed or remain unstable. To solve this problem, a method of intended motion estimation that uses the UAV's flight parameters as auxiliary data is proposed in this paper, which also helps to realize real-time image stabilization. A UAV always has real-time access to its position, flight attitude, velocity and other flight parameters. In this paper we analyze the relationship between these parameters and the flight pattern as a basis for judging changes in the intended motion, thereby achieving an effective choice of the estimation sequence and obtaining accurate parameters of the intended motion. The method has been demonstrated through a series of experiments on real aerial video data.
Image stabilization algorithm based on multi-bitplane
Author(s):
Hongying Zhao;
Tianzeng Wang
Show Abstract
The key to block matching based on multiple bit-planes is how to accurately select the bit-planes that contain abundant information, so as to ensure matching precision. In accordance with the characteristics of aerial video images, and based on a large number of experiments and data statistics, a new algorithm that selects bit-planes according to the peak-value distribution of the histogram is proposed in this paper. Firstly, the paper introduces the relationship between the peak-value distribution and the information contained in the bit-planes. Secondly, it introduces the method of selecting bit-planes by analyzing the peak-value distribution. Finally, the two bit-planes containing the most abundant information are selected, on which small-diamond search and big-diamond search are performed respectively. Experiments show that the bit-planes containing abundant information can be determined by the algorithm for different videos, and the matching precision is guaranteed.
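A small Python sketch of bit-plane decomposition and selection is given below; because the paper's peak-value-distribution rule is not spelled out in the abstract, per-plane binary entropy is used here as a stand-in selection criterion.

    import numpy as np

    def bit_planes(img):
        """Decompose an 8-bit image into its 8 binary bit-planes (index 7 = MSB)."""
        img = img.astype(np.uint8)
        return [(img >> b) & 1 for b in range(8)]

    def select_informative_planes(img, n=2):
        """Pick the n planes with the highest binary entropy (selection criterion
        is an assumption standing in for the histogram peak-distribution rule)."""
        scores = []
        for b, plane in enumerate(bit_planes(img)):
            p = plane.mean()
            entropy = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
            scores.append((entropy, b))
        return [b for _, b in sorted(scores, reverse=True)[:n]]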
Imaging theory of a retina-like CMOS sensor in high speed forward motion
Author(s):
Huan-huan Zhang;
Feng-mei Cao;
Kai Yan;
Lei Zhang
Show Abstract
The still-image output of a typical retina-like CMOS sensor is first simulated according to the arrangement of the imaging device array, which is important for the subsequent study of the imaging theory of the sensor in high-speed forward motion. Then, the degradation matrix of the sensor in high-speed forward motion is deduced from the imaging mechanism of high-speed forward motion and the imaging characteristics of the sensor. The retina-like CMOS sensor is qualitatively shown to exhibit less blur than a rectangular sensor with the same field of view and the same maximum transverse and longitudinal resolution, according to the visual appearance of simulated blurred images from the two kinds of sensors. Finally, using an image quality evaluation method based on the structural information of the blurred image in polar coordinates, it is quantitatively shown that the retina-like CMOS sensor has great application potential in high-speed forward motion.
Design of low latency clock distribution network for long linear photo detector readout circuit
Author(s):
Yang Tai;
Yiqiang Zhao
Show Abstract
Digital clock network design has become one of the key research topics as circuit area and operating frequency increase. In order to improve the performance of the readout circuit for a long linear-array photo detector (512 elements) and reduce the control timing uncertainty, this paper presents an improved clock distribution network based on a full-custom design methodology, which optimizes the on-chip distribution of the system clock. The chip has been fabricated in a Chartered 0.35 um CMOS process, with a die size of 3 x 18 mm2, and operates at 50 MHz. Test results show that the improved clock distribution network reduces the clock delay by more than 87% and achieves good inter-channel consistency. The readout accuracy meets the design requirements.
Image denoising in real-time system aided by simulation tools
Author(s):
Jintao Wang;
Chu Qiu;
Pengdong Gao;
Yongquan Lu;
Rui Lv;
Wenhua Yu
Show Abstract
The real-time image acquisition system is shown in Fig. 1. Digital image noise is the counterpart of noise in analog cameras: it appears as random speckles on an otherwise smooth surface and can significantly degrade image quality [1-7]. Both external and internal noise exist in an image acquisition system. In this paper we analyze the causes of the noise, the noise model and the denoising methods with the aid of simulation tools. Increased current source/sink capability and faster switching edge rates result in increased ground bounce, that is, noise caused by simultaneously switching outputs (SSOs) in an FPGA design; this noise can affect device reliability since it can trigger false edges on critical input signals [8]. Considering the characteristics of FPGAs, such as flexibility, inherent parallelism and rapid prototyping, an FPGA-based debugging technique was designed with a Lattice ECP2M chip. The experimental results, which include the time consumption, the waveforms and the calculation cycles, were analyzed with dedicated tools such as Protel99SE, ispLever 6.1 and ModelSim, and the post-place-and-route layout of the project is shown at the end of this paper. It is concluded that the internal noise problems can be addressed on the hardware platform and that suitable denoising methods can be identified.
A kind of image real-time enhance processing technology of visible light with low contrast
Author(s):
Wei-qi Jin;
Li Li
Show Abstract
The effective range of CCD/CMOS-based optical imaging systems is strongly affected by fog or haze in remote border areas and at sea level, so this paper adopts an effective method combining a near-infrared filter with digital image processing to increase the system's effective range. Firstly, the paper shows theoretically that under low-visibility conditions the system has a longer visual range in the near infrared than in the visible, and that the visual range of the system increases to about 1.5 times its original value. Secondly, given that border and coastal surveillance is characterized by a wide viewing angle and large distances between observed targets, the paper develops a partially overlapped sub-block local histogram equalization algorithm, which achieves real-time enhancement of beyond-visual-range optical images while enhancing contrast and preserving image detail. Thirdly, a real-time enhancement processing system for beyond-visual-range photoelectric images has been developed with a high-performance DSP and an FPGA. The observation distance of the system can reach more than twice the visibility when the visibility is about 7 km.
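A compact Python sketch of partially overlapped sub-block histogram equalization is shown below; the block size, step and border handling are assumptions, and the DSP/FPGA-specific optimizations of the paper are not reflected.

    import numpy as np

    def equalize(block):
        """Histogram equalization of one 8-bit block."""
        hist = np.bincount(block.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(np.float64)
        cdf = (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1) * 255.0
        return cdf[block]

    def poshe(img, block=64, step=32):
        """Equalize overlapping sub-blocks and average their contributions per pixel."""
        img = img.astype(np.uint8)
        acc = np.zeros(img.shape, dtype=np.float64)
        cnt = np.zeros(img.shape, dtype=np.float64)
        for r in range(0, img.shape[0] - block + 1, step):
            for c in range(0, img.shape[1] - block + 1, step):
                acc[r:r + block, c:c + block] += equalize(img[r:r + block, c:c + block])
                cnt[r:r + block, c:c + block] += 1
        out = img.astype(np.float64)
        mask = cnt > 0            # border pixels no block covers keep their original value
        out[mask] = acc[mask] / cnt[mask]
        return np.clip(out, 0, 255).astype(np.uint8)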
A practical SNR estimation scheme for remotely sensed optical imagery
Author(s):
Xinhong Wang;
Lingli Tang;
Chuanrong Li;
Bo Yuan;
Bo Zhu
Show Abstract
The signal-to-noise ratio (SNR) is one of the basic and commonly used statistical parameters for evaluating the imaging quality of optical sensors. Many SNR estimation algorithms have been developed in various research fields. However, one intrinsic fact is usually ignored: SNR is not a constant value but a quantity that changes with the incident radiance received by the sensor. SNR values estimated on different images by commonly used methods are therefore not comparable, because of the distinct intensity levels of the images. Here we propose a normalized SNR estimation scheme which can readily be applied to remotely sensed optical images. With this scheme, SNR values obtained from different images become comparable, so the performance degradation of the sensor can be evaluated with greater reliability.
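Since the abstract does not detail the normalization, the sketch below illustrates one plausible block-statistics approach: SNR is reported as a function of signal level, so that values from different images can be compared at the same radiance level. The block size and binning scheme are assumptions, not the paper's method.

    import numpy as np

    def snr_vs_level(img, block=16, n_bins=10):
        """Estimate SNR as a function of signal level from uniform image blocks:
        block mean is taken as signal, block standard deviation as noise, and the
        per-block SNR values are binned by mean level."""
        x = img.astype(np.float64)
        h, w = (x.shape[0] // block) * block, (x.shape[1] // block) * block
        blocks = x[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
        means = blocks.mean(axis=(2, 3)).ravel()
        stds = blocks.std(axis=(2, 3)).ravel()
        valid = stds > 0
        snr = means[valid] / stds[valid]
        bins = np.linspace(means[valid].min(), means[valid].max(), n_bins + 1)
        idx = np.clip(np.digitize(means[valid], bins) - 1, 0, n_bins - 1)
        return [(0.5 * (bins[i] + bins[i + 1]), snr[idx == i].mean())
                for i in range(n_bins) if np.any(idx == i)]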
On electron-optical spatial and temporal aberrations in a bi-electrode spherical concentric system with electrostatic focusing
Author(s):
Li-wei Zhou;
Hui Gong;
Zhi-quan Zhang;
Yi-fei Zhang
Show Abstract
For a concentric spherical system composed of two electrodes with electrostatic focusing, the electrostatic potential distribution and the spatial-temporal trajectory of electron motion can be expressed in analytical form. It is natural to take such a system as an ideal model to investigate the imaging properties as well as the spatial-temporal aberrations, to analyze its particular behaviour and to look for clues to universal laws and regularities. Research on this problem provides an academic foundation not only for studying static imaging in night vision tubes, but also for studying dynamic imaging in high-speed image converter tubes. In the present paper, based on the practical electron ray equation and electron motion equation for a bi-electrode concentric spherical system with electrostatic focusing, the spatial-temporal trajectory of an electron emitted from the photocathode is solved, and exact and approximate formulae for the image position and arrival time are deduced. From the solution of the spatial-temporal trajectory, the electron-optical spatial and temporal properties of this system are then discussed; the paraxial and geometric lateral aberrations of different orders, as well as the paraxial and geometric temporal aberrations of different orders, are defined and deduced, classified by the orders of (εz/Φac)^(1/2) and (εr/Φac)^(1/2).