- Front Matter: Volume 8298
- High Speed Sensors
- Smart Sensors
- High Performance Sensors
- Noise and Characterization
- Technological Improvements
- Color Imaging
- Interactive Paper Session
Front Matter: Volume 8298
This PDF file contains the front matter associated with SPIE Proceedings Volume 8298, including the Title Page, Copyright information, Table of Contents, and the Conference committee listing.
High Speed Sensors
High-speed VGA resolution CMOS image sensor with global shutter
A 600 frames per second CMOS image sensor with 644 (H) × 484 (V) 7.4 μm 8-transistor global-shutter pixels and
digital outputs is described, consuming 300 mW. The global-shutter pixel architecture supports correlated double
sampling, resulting in a full well charge of 23,000 e-, a read noise of 12 e- RMS, and a dynamic range of 65.6 dB. The
sensor is designed in 0.18 μm CMOS; its pixel, architecture, and performance results are described in the article. A
chip-scale ball grid array package was developed for this 1/3" optical format image sensor, resulting in a package size of
only 7 × 7 × 0.7 mm³ and a total weight of 70 mg.
High-speed global shutter CMOS machine vision sensor with high dynamic range image acquisition and embedded intelligence
Francisco Jiménez-Garrido,
José Fernández-Pérez,
Cayetana Utrera,
et al.
High-speed imagers are required for industrial applications, traffic monitoring, robotics and unmanned vehicles, moviemaking,
etc. Many of these applications also call for large spatial resolution, high sensitivity, and the ability to detect
images with large intra-frame dynamic range. This paper reports an intelligent digital CIS image sensor with 5.2 Mpixels
which delivers 12-bit fully-corrected images at 250 fps. The new sensor embeds on-chip digital processing circuitry for a
large variety of functions, including windowing; pixel binning; sub-sampling; combined windowing-binning-subsampling
modes; fixed-pattern noise correction; fine gain and offset control; and color processing. These and other CIS
functions are programmable through a simple four-wire serial port interface.
High-speed CMOS image sensor for high-throughput lensless microfluidic imaging system
The integration of CMOS image sensors and microfluidics is becoming a promising technology for point-of-care (POC)
diagnosis. However, commercial image sensors usually have limited speed and low-light sensitivity. A high-speed,
high-sensitivity CMOS image sensor chip targeted at high-throughput microfluidic imaging systems is introduced in this
paper. First, the high-speed image sensor architecture is presented, built around a column-parallel single-slope
analog-to-digital converter (ADC) with digital correlated double sampling (CDS). A frame rate of 2400 frames/second
(fps) is achieved at a resolution of 128×96 for high-throughput microfluidic imaging. Second, the designed system
has superior low-light sensitivity, achieved through a large pixel size (10μm×10μm, 56% fill factor). The pixel peak
signal-to-noise ratio (SNR) reaches 50dB, a 10dB improvement over a commercial pixel (2.2μm×2.2μm). The
loss of pixel resolution is compensated by a super-resolution image processing algorithm: by reconstructing a single
image from multiple low-resolution frames, we can equivalently achieve 2μm resolution with physical 10μm pixels.
Third, the system-on-chip (SoC) integration results in a real-time controlled intelligent imaging system without
expensive data storage and time-consuming computer analysis. This initial sensor prototype with timing control makes it
possible to develop a high-throughput lensless microfluidic imaging system for POC diagnosis.
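The multi-frame reconstruction step described above can be sketched as a simple shift-and-add super-resolution. The function below assumes the sub-pixel shifts between frames are already known, and uses nearest-neighbor gridding; both are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    """Multi-frame shift-and-add super-resolution (illustrative sketch).

    frames: list of low-resolution 2D arrays
    shifts: known sub-pixel (dy, dx) shift of each frame, in LR pixels
    scale:  upsampling factor (e.g. 5 to go from 10um to 2um sampling)
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each LR sample onto the nearest HR grid position.
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1          # leave unobserved HR pixels at zero
    return acc / cnt
```

With enough distinct shifts, every high-resolution grid position gets at least one observation, which is the sense in which 10μm pixels can yield a 2μm sampling grid.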
Smart Sensors
Smart image sensor with adaptive correction of brightness
Today, intelligent image sensors require the integration in the focal plane (or near the focal plane) of complex
algorithms for image processing. Such devices must meet constraints related to the quality of acquired
images, the speed and performance of embedded processing, and low power consumption. To achieve these
objectives, analog pre-processing is essential: on the one hand, to improve the quality of the images, making
them usable whatever the light conditions; and on the other, to detect regions of interest (ROIs) so as to limit the number
of pixels to be transmitted to a digital processor performing high-level processing such as feature extraction
for pattern recognition. To show that it is possible to implement analog pre-processing in the focal plane, we
have designed and implemented, in 130nm CMOS technology, a test circuit with groups of 4, 16 and 144 pixels,
each incorporating analog average calculations.
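The groups of 4, 16 and 144 pixels correspond to 2×2, 4×4 and 12×12 macro-blocks. As a minimal software analogue of that in-pixel averaging (an illustrative sketch, not the circuit itself):

```python
import numpy as np

def block_average(img, k):
    """Mean over k x k macro-blocks -- the digital equivalent of the
    analog average computed by a group of k*k pixels (2x2 -> 4 pixels,
    4x4 -> 16, 12x12 -> 144)."""
    h, w = img.shape
    img = img[:h - h % k, :w - w % k]   # drop edge pixels that don't fill a block
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```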
Algorithm architecture co-design for ultra low-power image sensor
In a context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with
a high level of confidence, but also with very low power consumption. With a steady camera, motion detection
algorithms based on background estimation to find regions in movement are simple to implement and computationally
efficient. To reduce power consumption, the background is estimated using a down-sampled image formed
of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode
architecture developed through an algorithm-architecture co-design methodology. This programmable
architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to
implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as
a calculation primitive allowed the algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps
CMOS image sensor performing integrated motion detection is proposed, with a power estimation of 1.8 mW.
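The delta-modulation primitive can be illustrated with a Σ-Δ-style background update, in which every macropixel moves one quantization step toward the current frame. This is a sketch of the general technique; the chip's exact instruction sequence is not given in the abstract.

```python
import numpy as np

def sigma_delta_step(frame, background, step=1.0):
    """One delta-modulation update: the background estimate tracks the
    scene at +/- one step per frame, so slow illumination drift is
    absorbed while fast-moving objects stand out."""
    background = background + step * np.sign(frame - background)
    motion = np.abs(frame - background) > step    # crude motion mask
    return background, motion
```

Because the update needs only a sign comparison and an increment per macropixel, it maps naturally onto a compact SIMD instruction set.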
A CMOS imager using focal-plane pinhole effect for confocal multibeam scanning microscopy
A CMOS imager for confocal multi-beam scanning microscopy, where the pixel itself works as a pinhole, is proposed.
This CMOS imager is suitable for building compact, low-power, and confocal microscopes because the complex Nipkow
disk with a precisely aligned pinhole array can be omitted. The CMOS imager is composed of an array of sub-imagers,
and can detect multiple beams at the same time. To achieve a focal-plane pinhole effect, only one pixel in each
sub-imager, the one at the conjugate position of a light spot, accumulates the photocurrent; the other pixels are left unread.
This operation is achieved by 2-dimensional vertical and horizontal shift registers. The proposed CMOS imager for the
confocal multi-beam scanning microscope system was fabricated in 0.18-μm standard CMOS technology with a pinned
photodiode option. The total area of the chip is 5.0mm × 5.0mm. The number of effective pixels is 256(Horizontal) ×
256(Vertical). The pixel array consists of 32(H) × 32(V) sub-imagers each of which has 8(H) × 8(V) pixels. The pixel is
an ordinary 4-transistor active pixel sensor using a pinned photodiode, and the pixel size is 7.5μm × 7.5μm with a fill
factor of 45%. The basic operations, such as normal image acquisition and selective pixel readout, were experimentally
confirmed. The sensitivity and the pixel conversion gain were 25.9 ke-/lx·sec and 70 μV/e-, respectively.
Time-to-impact sensors in robot vision applications based on the near-sensor image processing concept
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact
(TTI) computation with this architecture, we show how these results can be used and extended for robot vision
applications. The first case involves estimating the tilt of an approaching planar surface. The second case concerns the
use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the
need for image correlations. Going back to a one-camera system, the third case deals with the problem of estimating the
shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact
solution with respect to hardware complexity, but also delivers surprisingly high performance.
High Performance Sensors
Diffusion dark current in front-illuminated CCDs and CMOS image sensors
The dark current that arises due to diffusion from the bulk is assuming a more important role now that
CCD and CMOS image sensors have found their way into consumer electronics, which must be capable
of operating at elevated temperatures. Historically, this component has been estimated from the
diffusion related current of a diode with an infinite substrate. This paper explores the effect of a
substrate of finite extent beneath the collecting volume of the pixel for a front-illuminated device and
develops a corrected expression for the diffusion related dark current. The models show that the
diffusion dark current can be much less than that predicted by the standard model.
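The kind of correction argued for can be illustrated with the textbook form: for a quasi-neutral region of finite thickness d, the infinite-substrate diffusion current acquires a factor tanh(d/L) for a low-recombination back surface, or coth(d/L) for an ohmic back contact. The sketch below uses assumed room-temperature silicon parameters, not values from the paper.

```python
import math

Q = 1.602e-19       # C, elementary charge
NI = 1.0e10         # cm^-3, Si intrinsic carrier density at 300 K
D_N = 36.0          # cm^2/s, electron diffusivity (assumed)
L_N = 200e-4        # cm, electron diffusion length (assumed 200 um)
N_A = 1e15          # cm^-3, substrate doping (assumed)

def j_diff(thickness_cm=None, reflecting_back=True):
    """Diffusion dark current density (A/cm^2).
    thickness_cm=None reproduces the classic infinite-substrate formula;
    a finite thickness multiplies it by tanh(d/L) (reflecting back surface)
    or coth(d/L) (ohmic back contact)."""
    j_inf = Q * NI**2 * D_N / (L_N * N_A)
    if thickness_cm is None:
        return j_inf
    x = thickness_cm / L_N
    return j_inf * (math.tanh(x) if reflecting_back else 1.0 / math.tanh(x))
```

For a 5 μm collecting volume on a 200 μm diffusion length, tanh(d/L) ≈ 0.025, i.e. roughly 40× less diffusion dark current than the infinite-substrate estimate, consistent with the paper's qualitative conclusion.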
A 176x144 148dB adaptive tone-mapping imager
This paper presents a 176x144 (QCIF) HDR image sensor where visual information is simultaneously captured and
adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated
from the histogram of a time-stamp image captured in the previous frame, which serves as a probability indicator of the
distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations
from 311μlux to 55.3klux in a single frame, in such a way that each pixel decides when to stop its photocurrent
integration, with extreme values captured at 8s and 2.34μs respectively. The pixel size is 33×33μm², which includes a
3×3μm² Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most
of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image
are attenuated by automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/lux·s,
FWC 12.2ke-, conversion factor 129 e-/DN, and read noise 25e-. The chip has been designed in the 0.35μm OPTO
technology from Austriamicrosystems (AMS). Due to its focal-plane operation, this architecture is especially well suited
to implementation in a 3D (vertical stacking) technology using per-pixel TSVs.
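The histogram-to-curve step can be sketched as a histogram-equalization-style mapping: the cumulative distribution of the previous frame's time stamps becomes the tone-mapping curve. This is an illustrative assumption; the paper's exact TMC construction is not detailed in the abstract.

```python
import numpy as np

def tone_mapping_curve(time_stamps, n_out=128):
    """Derive a 7-bit (128-level) tone-mapping curve from the previous
    frame's time-stamp histogram: output codes follow the normalized CDF,
    so heavily populated illumination ranges get more codes."""
    hist, edges = np.histogram(time_stamps, bins=256)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalized CDF in [0, 1]
    codes = np.round(cdf * (n_out - 1)).astype(int)  # one output code per bin
    return edges, codes

def apply_tmc(time_stamps, edges, codes):
    """Map each pixel's time stamp to its 7-bit output code."""
    bins = np.clip(np.digitize(time_stamps, edges[1:-1]), 0, len(codes) - 1)
    return codes[bins]
```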
A high-dynamic range (HDR) back-side illuminated (BSI) CMOS image sensor for extreme UV detection
This paper describes a back-side illuminated 1-Megapixel CMOS image sensor made in a 0.18μm CMOS process for
EUV detection. The sensor applies a so-called "dual-transfer" scheme to achieve low noise and high dynamic range. The
EUV sensitivity is achieved with backside illumination using an SOI-based solution. The epitaxial silicon layer is
thinned down to less than 3μm. The sensor has been tested and characterized under 5nm to 30nm illumination. At the
targeted wavelength of 17.4nm, the detector's external QE (excluding the quantum yield factor) reaches almost 60%.
The detector reaches a read noise of 1.2 ph- (@17.4nm), i.e. close to the performance of EUV photon counting.
A low-noise 15-µm pixel-pitch 640x512 hybrid InGaAs image sensor for night vision
Hybrid InGaAs focal plane arrays are very interesting for night vision because they can benefit from the nightglow
emission in the Short Wave Infrared band. Through a collaboration between III-V Lab and CEA-Léti, a 640x512 InGaAs
image sensor with 15μm pixel pitch has been developed.
The good crystalline quality of the InGaAs detectors opens the door to low dark current (around 20nA/cm2 at room
temperature and -0.1V bias) as required for low light level imaging. In addition, the InP substrate can be removed to
extend the detection range towards the visible spectrum.
A custom readout IC (ROIC) has been designed in a standard CMOS 0.18μm technology. The pixel circuit is based on a
capacitive transimpedance amplifier (CTIA) with two selectable charge-to-voltage conversion gains. Relying on a
thorough noise analysis, this input stage has been optimized to deliver low-noise performance in high-gain mode with a
reasonable concession on dynamic range. The exposure time can be maximized up to the frame period thanks to a rolling
shutter approach. The frame rate can be up to 120fps or 60fps if the Correlated Double Sampling (CDS) capability of the
circuit is enabled.
The first results show that the CDS is effective at removing the very low frequency noise present on the reference
voltage in our test setup. In this way, the measured total dark noise is around 90 electrons in high-gain mode for 8.3ms
exposure time. It is mainly dominated by the dark shot noise for a detector temperature settling around 30°C when not
cooled. The readout noise measured with shorter exposure time is around 30 electrons for a dynamic range of 71dB in
high-gain mode and 108 electrons for 79dB in low-gain mode.
High-dynamic-range 4-Mpixel CMOS image sensor for scientific applications
As bio-technology transitions from research and development to high volume production, dramatic improvements in
image sensor performance will be required to support the throughput and cost requirements of this market. This includes
higher resolution, higher frame rates, higher quantum efficiencies, increased system integration, lower read-noise, and
lower device costs. We present the performance of a recently developed low noise 2048(H) x 2048(V) CMOS image
sensor optimized for scientific applications such as life science imaging, microscopy, as well as industrial inspection
applications. The sensor architecture consists of two identical halves which can be operated independently and the
imaging array consists of 4T pixels with pinned photodiodes on a 6.5μm pitch with integrated micro-lens. The operation
of the sensor is programmable through a SPI interface. The measured peak quantum efficiency of the sensor is 73% at
600nm, and the read noise is about 1.1e- RMS at a 100 fps data rate. The sensor features dual-gain column-parallel output
amplifiers with 11-bit single-slope ADCs. The full well capacity is greater than 36ke-, and the dark current is less than
7pA/cm2 at 20°C. The sensor achieves an intra-scene linear dynamic range of greater than 91dB (36000:1) at room
temperature.
Noise and Characterization
Projecting the rate of in-field pixel defects based on pixel size, sensor area, and ISO
Image sensors continuously develop in-field permanent hot pixel defects over time. Experimental measurements of
DSLR, point-and-shoot, and cell phone cameras show that the rate of these defects depends on the technology (APS or
CCD) and on design parameters like imager area, pixel size, and gain (ISO). Increased image sensitivity (ISO) enhances
defect appearance and sometimes results in saturation. 40% of defects are partially stuck hot pixels, with an offset
independent of exposure time, and are particularly affected by ISO changes. Comparing different sensor sizes with
similar pixel sizes showed that defect rates scale linearly with sensor area, suggesting the metric of defects/year/sq mm.
Plotting this rate for different pixel sizes (7.5 down to 1.5 microns) shows that defect rates grow rapidly as pixel size
shrinks. Curve fitting shows an empirical power law with defect rates proportional to the pixel size to the power of -2.1
for CCD and to the power of -3.6 for CMOS. At 7μm pixels, the CCD defect rate is ~2.5× greater than for CMOS, but
at 2.4μm pixels the rates are equal. Extending our empirical formula to include ISO allows us to predict the expected
defect development rate for a wide set of sensor parameters.
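The reported power laws can be turned into a small predictive model. The absolute prefactors are not given in the abstract, so the sketch below (an assumption, not the authors' fitted constants) normalizes the CCD curve to 1 and calibrates the CMOS curve so that the two rates cross at the reported 2.4 μm.

```python
def defects_per_year_mm2(pixel_um, tech, k_ccd=1.0, k_cmos=None):
    """Empirical defect-rate power law from the abstract, in arbitrary units:
    rate ~ s^-2.1 (CCD) and s^-3.6 (CMOS), with s the pixel size in um.
    k_cmos is calibrated (an assumption) so the curves cross at 2.4 um."""
    if k_cmos is None:
        k_cmos = k_ccd * 2.4 ** (3.6 - 2.1)   # equal rates at s = 2.4 um
    if tech == "CCD":
        return k_ccd * pixel_um ** -2.1
    return k_cmos * pixel_um ** -3.6
```

With this calibration, the steeper CMOS exponent makes CMOS rates overtake CCD rates below the crossover, matching the abstract's trend that small-pixel CMOS sensors degrade fastest.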
Dynamic CCD pixel depletion edge model and the effects on dark current production
The depletion edge in charge-coupled device (CCD) pixels depends upon the amount of signal charge located
within the depletion region. A model is presented that describes the movement of the depletion edge with increasing
signal charge. This dynamic depletion edge is shown to have an effect on the amount of dark current produced by some
pixels. Modeling the dark current behavior of pixels both with and without impurities over an entire imager demonstrates
that this moving depletion edge has a significant effect on a subset of the pixels. Dark current collected by these pixels is
shown to behave nonlinearly with respect to exposure time and additionally the dark current is affected by the presence
of illumination. The model successfully predicts unexplained aspects of dark current behavior previously observed in
some CCD sensors.
Characterizing the response of charge-coupled device digital color cameras
The advance and rapid development of electronic imaging technology has led the way to the production of imaging
sensors capable of acquiring good-quality digital images at high resolution. At the same time, the cost
and size of imaging devices have been reduced. This has spurred increasing research interest in techniques that
use images obtained by multiple camera arrays. The use of multi-camera arrays is attractive because it allows
capturing multi-view images of dynamic scenes, enabling the creation of novel computer vision and computer
graphics applications, as well as next generation video and television systems. There are additional challenges
when using a multi-camera array, however. Due to inconsistencies in the fabrication process of imaging sensors
and filters, multi-camera arrays exhibit inter-camera color response variations. In this work we characterize
and compare the response of two digital color cameras, which have a light sensor based on the charge-coupled
device (CCD) array architecture. The results of the response characterization process can be used to model the
cameras' responses, which is an important step when constructing a multi-camera array system.
Implementing and using the EMVA1288 standard
In recent years, the European Machine Vision Association took the initiative of developing a measurement and reporting standard for industrial image sensors and cameras, called EMVA1288.
Aphesa offers camera and sensor measurement services and test equipment according to this EMVA1288 standard. We have measured cameras of various kinds on our self-made test equipment. This implementation and all the measurement campaigns required going into the details of the standard, and showed us both how good and how difficult it can be.
The purpose of this paper is to give feedback on the standard, based on our experience as implementers and users. We will see that some measurements are easily reproducible and can easily be implemented, while others require more research on hardware, software and procedures, and that the results can sometimes have very little meaning.
Our conclusion is that the EMVA1288 standard is good and well suited for the measurement and characterization of image sensors and cameras for image processing applications, but that it is hard for a newcomer to understand the produced data and to properly use the test equipment. Developing a complete and compliant test setup is also a difficult task.
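As an example of the kind of evaluation the standard prescribes, the central photon-transfer step fits the temporal noise variance against the mean signal over an illumination sweep; the slope is the overall system gain K. The snippet below is a bare-bones sketch of that one step, not a compliant EMVA1288 implementation.

```python
import numpy as np

def emva1288_gain(mu_y, var_y, mu_dark, var_dark):
    """EMVA1288-style photon-transfer evaluation.
    mu_y/var_y: mean and temporal variance of the gray values over an
    illumination sweep; mu_dark/var_dark: the same for dark frames.
    Returns the overall system gain K (DN/e-) and the dark noise in e-."""
    x = np.asarray(mu_y) - mu_dark      # mean signal above dark, DN
    y = np.asarray(var_y) - var_dark    # shot-noise variance, DN^2
    K = np.polyfit(x, y, 1)[0]          # slope of the photon-transfer curve
    sigma_d = np.sqrt(var_dark) / K     # dark/read noise in electrons
    return K, sigma_d
```

Because shot noise in electrons equals the square root of the signal in electrons, the variance in DN² is exactly K times the signal in DN, which is why a straight-line fit recovers K.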
An overview of the European patent system with particular emphasis on IP issues for imaging devices
In this article we give a comprehensive review of the European Patent System, with a focus on the procedure, its typical
duration, the requirements that must be met at the various stages in order to obtain a European Patent, and the related
costs. All the options available to the applicant are discussed in detail, potential pitfalls are highlighted, and the
differences between the European and US Patent Systems are analysed.
Furthermore, an in-depth and very informative analysis of applications and granted patents in the field of imaging
devices is presented including a study of their evolution during the last 10 years together with an analysis of the countries
and companies that are most active in the field of imagers.
Technological Improvements
Development of high-transmittance back-illuminated silicon-on-sapphire substrates thinned below 25 micrometers and bonded to fused silica for high quantum efficiency and high resolution avalanche photodiode imaging arrays
There is a growing need in scientific and industrial applications for dual-mode, passive and active 2D and 3D LADAR
imaging methods. To fill this need, solid-state, single photon sensitive silicon avalanche photodiode (APD) detector
arrays offer high sensitivity and the possibility to operate with wide dynamic range in dual linear and Geiger-mode for
passive and active imaging. To support the fabrication of large scale, high quantum efficiency and high resolution
silicon avalanche photodiode arrays and other advanced solid-state optoelectronics, a novel, high-transmittance,
back-illuminated silicon-on-sapphire substrate has been developed, incorporating a single-crystal, epitaxially grown
aluminum nitride (AlN) antireflective layer between the silicon and R-plane sapphire that provides refractive-index
matching to improve the optical transmittance into silicon from sapphire. A one-quarter-wavelength magnesium fluoride
antireflective layer deposited on the back-side of the sapphire improves optical transmittance from the ambient into the
sapphire. The magnesium fluoride plane of the Si-(AlN)-sapphire substrate is bonded to a fused silica wafer that
provides mechanical support, allowing the sapphire to be thinned below 25 micrometers to improve back-illuminated
optical transmittance, while suppressing indirect optical crosstalk from APD emitted light undergoing reflections in the
sapphire, to enable high quantum efficiency and high resolution detector arrays.
29-Mp 35-mm format interline CCD image sensor
This paper describes the design and performance of a new high-resolution 35 mm format CCD image sensor using an
advanced 5.5 μm interline pixel. The pixels are arranged in a 6576 (H) × 4384 (V) format to support a 3:2 aspect ratio.
This device is part of a family of devices that share a common architecture, pixel performance, and packaging
arrangement. Unique to this device in the family is the implementation of a fast line dump structure and horizontal CCD
lateral overflow drain.
Photodiode dopant structure with atomically flat Si surface for high-sensitivity and stability to UV light
In this work, n+pn-type photodiodes with various surface n+ layer profiles formed on an atomically flat Si surface were
evaluated to investigate the relationships between the dopant profile of the surface photo-generated-carrier drift layer
and the uniformity, sensitivity, and stability to UV light. The degradation mechanism of photodiode sensitivity at UV
wavelengths under UV-light exposure is explained by changes in the fixed charges and the interface states of the
Si/SiO2 system above the photodiode. Finally, a design strategy for the photodiode dopant profile to achieve both high
sensitivity and high stability to UV light is proposed.
New smart readout technique performing edge detection designed to control vision sensors dataflow
Hawraa Amhaz,
Gilles Sicard
In this paper, a new readout strategy for CMOS image sensors is presented. It aims to overcome the excessive
output dataflow bottleneck, a challenge that is becoming more and more crucial with technology miniaturization.
The strategy is based on the suppression of spatial redundancies: it leads the sensor to perform edge detection and
eventually provide a binary image. One of the main advantages of this readout technique over others in the literature
is that it does not affect the in-pixel circuitry. All the analogue processing circuitry is implemented outside the pixel,
which keeps the pixel area and fill factor unchanged. The main analogue block used in this technique is an event
detector developed and designed in the CMOS 0.35μm technology from Austria Micro Systems. The simulation results
for this block, as well as for a test bench composed of several pixels and column amplifiers using this readout mode,
show its capability to reduce dataflow by controlling the ADCs. This readout strategy is applicable to sensors that use
a linear-operating pixel element as well as to those based on logarithmic-operating pixels. The readout technique is
also emulated by a MATLAB model which gives an idea of the expected functionality and dataflow reduction rate
(DRR). Emulation results are presented by showing the pre- and post-processed images as well as the DRR. The DRR
does not have a fixed value, since it depends on the spatial frequency of the filmed scene and the chosen threshold value.
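The dataflow-reduction idea can be emulated in a few lines: trigger an ADC conversion only when a pixel differs from its already-read neighbor by more than a threshold. The left-neighbor comparison and the DRR definition below are illustrative assumptions, not the paper's exact event-detector circuit.

```python
import numpy as np

def redundancy_suppressed_readout(img, threshold):
    """Emulation of the readout idea: a pixel triggers an ADC conversion
    (edge event) only when it differs from its already-read left neighbor
    by more than a threshold; everything else is skipped.  Returns the
    binary edge image and the dataflow reduction rate (DRR)."""
    img = img.astype(float)
    diff = np.abs(np.diff(img, axis=1))   # horizontal neighbor differences
    events = np.zeros(img.shape, dtype=bool)
    events[:, 0] = True                   # first column is always read
    events[:, 1:] = diff > threshold
    drr = 1.0 - events.mean()             # fraction of conversions saved
    return events, drr
```

As the abstract notes, the achieved reduction depends on scene content: a flat scene yields a DRR near 1, a highly textured one near 0.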
Characterization of orthogonal transfer array CCDs for the WIYN one degree imager
The WIYN One Degree Imager (ODI) will provide a one degree field of view for the WIYN 3.5 m telescope located on
Kitt Peak near Tucson, Arizona. Its focal plane consists of an 8x8 grid of Orthogonal Transfer Array (OTA) CCD
detectors. These detectors are the STA2200 OTA CCDs designed and fabricated by Semiconductor Technology
Associates, Inc. and backside processed at the University of Arizona Imaging Technology Laboratory. Several lot runs
of the STA2200 detectors have been fabricated. We have backside processed devices from these different lots and
provide detector performance characterization, including noise, CTE, cosmetics, quantum efficiency, and some
orthogonal transfer characteristics. We discuss the performance differences for the devices with different silicon
thickness and resistivity. A fully buttable custom detector package has been developed for this project which allows
hybridization of the silicon detectors directly onto an aluminum nitride substrate with an embedded pin grid array. This
package is mounted on a silicon-aluminum alloy which provides a flat imaging surface of less than 20 microns
peak-to-valley at the -100 C operating temperature. Characterization of the package performance, including low-temperature
profilometry, is described in this paper.
Color Imaging
Multispectral device for help in diagnosis
In order to build a database of biological tissue spectral characteristics to be used in a multispectral imaging system, a
tissue optical characterization bench was developed and validated. Several biological tissue types, such as beef, turkey
and pork muscle and beef liver, have been characterized in vitro and ex vivo with our device. The multispectral images
obtained have been analyzed in order to study the dispersion of the spectral luminance factor of biological tissues.
Tissue internal structural inhomogeneity was identified as a phenomenon contributing to this dispersion, and the
dispersion of the spectral luminance factor could itself be a characteristic of the tissue. A method based on an envelope
technique has been developed to identify and differentiate biological tissues in the same scene. Applied to pork tissue
containing muscle and fat, this method gives detection rates of 59% for pork muscle and 14% for pork fat.
Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human
eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that
the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For
every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of
all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps
ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases
that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera
that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than
200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster
speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor
wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for
broadcasting purposes.
Accurate color with increased sensitivity using IR
Many applications require accurate color capture in daylight conditions and increased sensitivity in low-light
conditions. This is often accomplished by using a mechanical switch to remove the IR cut filter: sensitivity is
increased at the expense of color accuracy, and a mechanical part is required in the camera. The TRUESENSE
Color Filter Pattern offers an opportunity to increase sensitivity, using the IR region, while still maintaining
color accuracy. A 2x increase in sensitivity can be achieved over the current TRUESENSE CFA capture, which
uses an IR cut filter.
Computational color constancy using chromagenic filters in color filter arrays
In this paper we propose a new color constancy technique, an extension of chromagenic color constancy.
Chromagenic illuminant estimation methods take two shots of a scene: one without and one with a specially chosen
color filter in front of the camera lens. Here, we introduce chromagenic filters into the color filter array itself by placing
them on top of R, G or B filters, replacing one of the two green filters in the Bayer pattern. This allows two images of
the same scene to be obtained via demosaicking: a normal RGB image, and a chromagenic image, equivalent to the
RGB image taken through a chromagenic filter. The illuminant can then be estimated using chromagenic illumination
estimation algorithms. The method, which we name CFA-based chromagenic color constancy (4C for short), therefore
requires neither two shots nor registration, unlike other chromagenic color constancy algorithms, making it a more
practical and useful computational color constancy method for many applications. Experiments show that the proposed
color filter array based chromagenic color constancy method produces results comparable to chromagenic color
constancy without interpolation.
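The estimation step inherited from chromagenic color constancy can be sketched as follows: for each training illuminant, a 3×3 transform from unfiltered to filtered RGB is learned by least squares, and the test scene is assigned to the illuminant whose transform fits best. This is a generic sketch of the chromagenic idea, with hypothetical function names.

```python
import numpy as np

def chromagenic_train(rgb_by_illum, filt_by_illum):
    """For each candidate illuminant, learn the 3x3 transform mapping Nx3
    unfiltered RGB responses onto the chromagenically filtered responses."""
    return [np.linalg.lstsq(P, F, rcond=None)[0]
            for P, F in zip(rgb_by_illum, filt_by_illum)]

def chromagenic_estimate(transforms, P, F):
    """Pick the training illuminant whose transform best maps the scene's
    Nx3 unfiltered pixels P onto the filtered pixels F."""
    errs = [np.linalg.norm(P @ M - F) for M in transforms]
    return int(np.argmin(errs))
```

In the 4C setting, P and F would come from the two demosaicked images of a single shot rather than from two exposures.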
Interactive Paper Session
The infrared network video monitoring system based on Linux OS
This paper describes an infrared network video monitoring system based on the Linux OS. First, we present the
hardware design of the system. Second, the software platform, which uses the Linux operating system, is introduced.
Finally, the application software design process is described. The system encodes the pictures captured from an infrared
CCD and sends them to an identical embedded system, which decodes them and displays them on an LCD, achieving
remote infrared video monitoring. As the infrared CCD is not affected by dim light, this monitoring system can be used
all day long.
Motion blur-free time-of-flight range sensor
Seungkyu Lee,
Byongmin Kang,
James D.K. Kim,
et al.
A time-of-flight (ToF) depth sensor provides a faster and easier way to capture and reconstruct 3D scenes. The depth
sensor, however, suffers from motion blur caused by any movement of the camera or objects. In this manuscript,
we propose a novel method for detecting and eliminating motion-blurred depth pixels that can be implemented on any
ToF depth sensor with light memory and computation requirements. We propose a blur detection method based on the
relations among electric charge amounts: at each depth calculation step, a pixel is flagged as blurred simply by
checking the four electric charge values produced by the four internal control signals. Once blurred pixels are
detected, their depth values are replaced with those of the nearest normal pixels. With this method, we eliminate
motion blur before the depth image is built, at the cost of only a few additional calculations and a small amount of memory.
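In a standard 4-phase ToF pixel the two opposite-phase charge pairs each collect the total charge, so (Q0 + Q2) equals (Q1 + Q3) for a static scene; motion during integration breaks this identity. A minimal sketch of a detector built on that relation (the threshold is a hypothetical parameter, not a value from the paper):

```python
def is_blurred(q0, q1, q2, q3, tol=0.05):
    """Flag a 4-phase ToF pixel as motion-blurred.  For a static
    scene the opposite-phase charge pairs both sum to the total
    collected charge, so (q0+q2) == (q1+q3); a relative imbalance
    above `tol` indicates motion during integration."""
    total = q0 + q1 + q2 + q3
    if total == 0:
        return False  # no signal, nothing to judge
    return abs((q0 + q2) - (q1 + q3)) > tol * total
```

This checks only the four per-pixel charge values, matching the paper's claim of low memory and computation cost.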
CMOS BDJ photodiode for trichromatic sensing
A novel method for achieving trichromatic color detection using a single photodetector with fewer than three p-n junctions
is presented. This new method removes the constraints of color sensing with a buried-double-junction (BDJ) photodiode,
eliminating the need for a priori knowledge of the light source or for varying color intensity. After a single visible-light
optical filter blocks irradiance outside the visible spectrum, color detection is achieved by taking the difference
between the depletion-region photocurrents generated at different reverse bias voltages. This "difference output"
effectively forms the "third" wavelength-specific depletion region required for trichromatic color sensing. The method
exploits the relationship between photon absorption and photon penetration depth in silicon, together with the basic
property of a p-n junction photodiode that only photons absorbed within the depletion region generate current. The theory
is validated experimentally using BDJ photodiodes fabricated through MOSIS Inc. in the AMI-ABN 1.5 μm and
ON-SEMI 0.5 μm technologies. A commercial p-i-n photodiode was also investigated for contrast and comparison.
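One way to read the "difference output" idea: the two junction currents give two spectral channels, and subtracting the deeper junction's current at two reverse biases isolates the extra depletion-region slice swept out by the larger bias, which responds to a different band of penetration depths. A sketch under that interpretation (variable names are illustrative, not from the paper):

```python
def trichromatic_channels(i_shallow, i_deep_v1, i_deep_v2):
    """Form three spectral channels from a two-junction (BDJ)
    photodiode.  i_shallow and i_deep_v1 are the junction currents
    at the nominal bias; i_deep_v2 is the deep-junction current at
    a larger reverse bias.  The difference (i_deep_v2 - i_deep_v1)
    acts as the third, depth-selective channel."""
    return (i_shallow, i_deep_v1, i_deep_v2 - i_deep_v1)
```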
On image sensor dynamic range utilized by security cameras
The dynamic range is an important quantity used to describe an image sensor. Wide/high/extended dynamic range is
often put forward as an important feature when comparing one device to another. The dynamic range of an image sensor
is normally given as a single number, which is often insufficient, since a single number cannot fully describe the
dynamic capabilities of the sensor.
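The single number the abstract refers to is conventionally the ratio of full-well capacity to read noise, expressed in dB; a minimal sketch (the example values are illustrative, not from this paper):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Conventional single-number dynamic range of an image sensor:
    ratio of full-well capacity to temporal read noise, in dB."""
    return 20 * math.log10(full_well_e / read_noise_e)

# e.g. a sensor with 23,000 e- full well and 12 e- read noise
dr = dynamic_range_db(23000, 12)  # roughly 65.7 dB
```

This figure says nothing about how the sensor distributes its response across an unevenly lit scene, which is exactly the gap the visual comparison below addresses.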
A camera is ideally based on a sensor that can cope with the dynamic range of the scene. Otherwise it has to sacrifice
some part of the available data. For a security camera the latter may be critical since important objects might be hidden
in the sacrificed part of the scene.
In this paper we compare the dynamic capabilities of several image sensors using a visual tool. The comparison is based
on a use case, common in surveillance, where low-contrast objects may appear in any part of a scene that, through its
uneven illumination, spans a high dynamic range. The investigation is based on real sensor data measured in our lab,
and a synthetic test scene is used to mimic the low-contrast objects. With this technique it is possible to compare
sensors with different intrinsic dynamic properties, as well as some capture techniques used to create an effect of
increased dynamic range.
Design of low-noise output amplifiers for P-channel charge-coupled devices fabricated on high-resistivity silicon
We describe the design and optimization of low-noise, single-stage output amplifiers for p-channel charge-coupled
devices (CCDs) used for scientific applications in astronomy and other fields. The CCDs are fabricated on
high-resistivity, 4000-5000 Ω-cm, n-type silicon substrates. Single-stage amplifiers with different output structure
designs and technologies have been characterized. The standard output amplifier is designed with an n+ polysilicon
gate that has a metal connection to the sense node. In an effort to lower the output amplifier readout
noise by minimizing the capacitance seen at the sense node, buried-contact technology has been investigated. In
this case, the output transistor has a p+ polysilicon gate that connects directly to the p+ sense node. Output
structures with buried-contact areas as small as 2 μm × 2 μm are characterized. In addition, the geometry of the
source-follower transistor was varied, and we report test results on the conversion gain and noise of the various
amplifier structures. By use of buried-contact technology, better amplifier geometry, optimization of the amplifier
biases and improvements in the test electronics design, we obtain a 45% reduction in noise, corresponding to
1.7 e- rms at 70 kpixels/sec.
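The link between sense-node capacitance and noise runs through the conversion gain, q/C: a smaller capacitance gives a larger voltage step per electron, so downstream amplifier noise refers back to fewer electrons. A quick sketch of that relation (the 10 fF example value is illustrative, not from the paper):

```python
Q_E = 1.602176634e-19  # elementary charge, coulombs

def conversion_gain_uV_per_e(sense_node_capacitance_f):
    """Conversion gain of a CCD output node: voltage change per
    collected electron, q/C, in microvolts per electron.  Lowering
    the sense-node capacitance (the goal of the buried-contact
    design) raises this gain and lowers input-referred noise."""
    return Q_E / sense_node_capacitance_f * 1e6

gain = conversion_gain_uV_per_e(10e-15)  # ~16 uV/e- at 10 fF
```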
S/N improvement for the optical-multiplex image-acquisition system
The optical-multiplex system is comprised of an image sensor, a multi-lens array and signal processing unit. The key
feature of the optical-multiplex system is that each pixel of the image sensor captures multiple data of the object through
multi-lenses and the object data is obtained by processing the raw data output from the optical-multiplex image sensor.
We report that the signal-to-noise ratio of the image output from the optical-multiplex system can be improved by
changing the shading characteristics of the multi-lenses. For simulation purposes, an optical-multiplex system with
five lenses is modeled, located at the center, upper, lower, left, and right positions above an image sensor. We
calculate the signal-to-noise ratio of the output image while changing the shading characteristics of the four
off-center lenses. The best signal-to-noise ratio achieved by the optical-multiplex system is 8.895 dB higher than
that of a camera with a single lens, exceeding the previously reported value of 3.764 dB.
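As a rough intuition for why lens weighting matters, consider a toy model (not the paper's simulation) in which each lens contributes a weighted copy of the signal plus independent, equal-variance noise; the SNR gain of the weighted sum over a single lens is then sum(w)/sqrt(sum(w²)):

```python
import math

def snr_gain_db(weights):
    """SNR gain, in dB, of a weighted sum of N statistically
    independent, equally noisy observations of the same scene,
    relative to a single observation.  Signal adds as sum(w),
    noise power as sum(w^2)."""
    s = sum(weights)
    n = math.sqrt(sum(w * w for w in weights))
    return 20 * math.log10(s / n)
```

Under this model, five equally weighted lenses give about 7 dB; the abstract's 8.895 dB comes from its full simulation with tuned shading, which this toy model does not reproduce.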
Fully integrated system-on-chip for pixel-based 3D depth and scene mapping
We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for
commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a
reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS.
This integration makes it possible to achieve a quantum efficiency (QE) of >80% across the full wavelength band from
520 nm up to 900 nm, as well as the very high timing precision in the sub-ns range needed for exact detection of the
phase delay. The SoC features 8 x 8 pixels and includes all necessary sub-components: the ToF pixel array, voltage
generation and regulation, non-volatile memory for configuration, an LED driver for active illumination, a digital SPI
interface for easy communication, column-based 12-bit ADCs, a PLL, and digital data processing with temporary
data storage. The system can be operated at up to 100 frames per second.
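The ToF principle the abstract names converts the measured phase delay of the reflected modulated light into distance via d = c·φ / (4π·f_mod); a minimal sketch (the 20 MHz modulation frequency in the example is illustrative, not a device specification):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(phase_rad, mod_freq_hz):
    """Distance from the phase delay of the reflected modulated
    light: d = c * phi / (4 * pi * f_mod).  The factor 4*pi (rather
    than 2*pi) accounts for the round trip to the target and back."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

d = tof_distance_m(math.pi, 20e6)  # ~3.75 m at 20 MHz modulation
```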