Proceedings Volume 4669

Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications III

Nitin Sampat, John Canosa, Morley M. Blouke, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 April 2002
Contents: 1 Session, 41 Papers, 0 Presentations
Conference: Electronic Imaging 2002
Volume Number: 4669

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Noise properties of a high-speed semiconductor-gas-discharge infrared imager
Valery M. Marchenko, Sascha Matern, Hans-Georg Purwins, et al.
The imager consists of a planar semiconductor-gas discharge (SGD) cell allowing ultrafast IR-to-visible conversion with a response time on the microsecond scale. The semiconductor wafer is made of Si:Zn, providing a spectral range of 1.1 - 3.5 micrometers. The 100-micrometer discharge gap is filled with Ar at a pressure of 100 hPa. The cell is cooled to approximately 90 K. The properties studied include noise in both the time and space domains, detectivity, noise equivalent irradiance and, when the imager is applied in a thermal imaging system, noise equivalent temperature difference (NETD). Investigations of the spatial noise and NETD have been carried out using a low-noise CCD camera capturing output images of the SGD cell. For measuring the temporal noise, a low-noise photomultiplier is used to detect gas-discharge radiation from an area of about one resolved pixel. The intrinsic noise of the SGD cell is found by comparing signal-noise dependencies obtained when acquiring the outgoing light of the cell with those obtained when observing a thermal radiation source with well-characterized photon noise. The results indicate that the imager has surprisingly low noise, very close to the photon-noise limit.
Application of high-speed IR converter in scientific and technological research
Sascha Matern, Valery M. Marchenko, Yuri Astrov, et al.
The infrared (IR) image converter is based on a planar semiconductor-gas discharge structure operating at a temperature of about 100 K in the spectral range of 1.1 to 3.5 micrometers. The semiconductor material and gas are Si:Zn and Ar, respectively. The conversion of input IR images into the visible is characterized by a time constant on the order of a few microseconds; the dynamic range is at least 10^4 and good linearity is observed. Together with the Hamamatsu framing camera C4187, the IR converter has been applied to investigate, in the microsecond range, the spatio-temporal dynamics of radiation of a 1.318-micrometer Nd:YAG laser. The second application was the study of the mode evolution of an Er:YSGG laser operating at a wavelength of 2.79 micrometers and an Er:YAG laser with a wavelength of 2.94 micrometers, using a pulse length of 100 to 150 microseconds and a pulse energy of about 40 mJ. In this case, the IR converter is combined with an intensified CCD camera whose exposure time can be as short as 10 microseconds. In the third application, the IR converter is used in combination with a fast CMOS camera to monitor Nd:YAG laser welding of stainless steel samples at a rate of 320 frames/s.
Column parallel vision system: CPV
Naohisa Mukozaka, Haruyoshi Toyoda, Seiichiro Mizuno, et al.
We have designed and constructed a column parallel vision (CPV) system to realize an intelligent, general-purpose image processing system with a frame time of 1 millisecond or less. The system consists of an originally designed photodetector array (PDA) with a high frame rate, a parallel processing unit with fully parallel processing elements (PEs), and a controller for the PDA and PEs. The column parallel architecture enables the PDA to achieve a 1-millisecond frame rate with 256 analog levels, which is required for industrial image processing and measurement. The parallel processing unit has been fabricated using FPGAs and has 128 X 128 PEs to perform fully parallel image processing. The PEs use the S3PE architecture, which has a SIMD-type parallel processing flow and requires only about 500 transistors per element, making it suitable for future integration. Since the PEs are driven by control signals from the controller, a desired image processing task can be assigned to the system by changing the software. We have demonstrated that the system can operate as a tracking system. The experimental results show that the system realized a high-speed feedback loop at 1000 frames/s, including tracking operations with noise reduction, matching (self-window algorithm), and moment calculation.
Development of the next-generation document reader: Eye Scanner
Toshiyuki Amano, Tsutomu Abe, Tetsuo Iyoda, et al.
Conventional copying machines require the document to be set at a designated position on the copy glass. In addition, present copying machines cannot clearly copy non-flat documents (e.g. books, cylindrical objects, etc.), nor can they copy without distortion. We therefore propose a next-generation document reader, the 'Eye Scanner.' The Eye Scanner is composed of a range finder, a digital camera, and a pan-tilt stage system. With these devices, the Eye Scanner can acquire both shape information and high-density texture images. It can therefore read a document from a free viewpoint and generate an undistorted image by geometric conversion. Moreover, the Eye Scanner can generate high-resolution images with the digital camera, which is placed in a coaxial optical position by means of a half-mirror. In this paper, we describe the system in detail. We explain the methodology of distortion correction by free-form transformation using shape information, and the technique of image merging. In the experiments, we show results of distortion correction and of image merging.
128x96 pixel field emitter-array image sensor with HARP target
Toshio Yamagishi, Masakazu Nanba, Katsunori Osada, et al.
In pursuit of a next-generation pickup device with high definition and ultrahigh sensitivity, research continues on a new type of image sensor that combines a HARP target and a field emitter array. A new field emitter array on a small substrate is designed and a unique packaging technique is proposed. The prototype device is sealed in a vacuum package with a thickness of only about 10 mm and has 128 horizontal and 96 vertical pixels. Experimental results show that images could be successfully reproduced for the first time in a device of this type. Highly sensitive characteristics and proper resolution were also obtained with the device. The prototype image sensor can operate stably for more than 250 hours, demonstrating its feasibility and potential as a next-generation image pickup device.
Image sensor based on pulse frequency modulation for retinal prosthesis
Jun Ohta, Norikatsu Yoshida, Tetsuo Furumiya, et al.
We demonstrate an application of a CMOS image sensor using pulse frequency modulation (PFM) for retinal prosthesis. To increase the sensitivity, we use PFM instead of a conventional integration method. The fundamental device characteristics of PFM are described. Based on the results of the PFM pixel circuit, we have fabricated a 128 X 128 pixel array chip using 0.35 micrometer CMOS technology and successfully demonstrated image capture with this chip. For the application of PFM to retinal prosthesis, the output characteristics of PFM are modified with respect to frequency range, current stimulation, and biphasic output. Future issues for implantation are also discussed.
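The PFM principle above can be sketched with a toy integrate-and-fire model: the photocurrent charges the pixel capacitance, and each threshold crossing fires one pulse and resets, so pulse rate tracks light intensity. All parameter values here are invented for illustration, not taken from the paper.

```python
def count_pulses(photocurrent, n_steps, c=1e-12, v_th=1.0, dt=1e-6):
    """Integrate photocurrent on capacitance c; each time the integrated
    voltage crosses v_th, fire one pulse and subtract the threshold."""
    v = 0.0
    pulses = 0
    for _ in range(n_steps):
        v += photocurrent * dt / c
        if v >= v_th:
            pulses += 1
            v -= v_th
    return pulses

# Pulse count scales with light intensity: doubling the photocurrent
# roughly doubles the number of pulses in the same integration window.
print(count_pulses(1e-9, 10_000), count_pulses(2e-9, 10_000))
```

Because the pulse stream is digital from the start, sensitivity at low light is set by how long one is willing to count, rather than by analog read noise.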
Two-dimensional imaging sensor based on the measurement of differential phase related to surface plasmon resonance
Ho-Pui Ho, Wai Wah Lam, Shu-Yuen Wu, et al.
The optical phase change associated with the surface plasmon resonance (SPR) effect that exists in a glass-metal- dielectric stack has been studied using a differential phase imaging technique. A typical prism-coupled SPR setup was constructed and a Mach-Zehnder interferometer was used to perform interferometric analysis between the two orthogonal polarizations in the exit beam. By stepping the optical phase of the reference arm, one can measure the phase change caused by the SPR effect. Since the reference and signal beams traverse identical optical paths, we expect that this scheme can be more robust in terms of noise immunity. The interrogation area can be enlarged to enable imaging of the SPR sensing surface. Initial phase measurement obtained from a salt-water mixture will be presented to demonstrate the operation of the technique.
Grabbing video sequences using protein-based artificial retina
Lasse T. Lensu, Jussi P. S. Parkkinen, Sinikka Parkkinen, et al.
A bacteriorhodopsin thin-film matrix has been studied for real-time acquisition of video. The proton-pumping property of bacteriorhodopsin is reversible, and the relaxation time back to the basic state is approximately 10 ms at ambient temperature. Photostimulation can be used to return bacteriorhodopsin to the basic state in 50 microseconds. The measurements show that the photocycle becomes slower in polyvinyl alcohol than in solution; thus the achievable acquisition frequency is limited by the composition of the thin film.
Development of a high-resolution surveillance camera with 520 TV lines
Seiji Okada, Yukio Mori, Ryuichiro Tominaga, et al.
We have developed digital signal processing algorithms, one of which realizes high-quality images with 520 TV lines of horizontal resolution using a single-chip color 410k-pixel CCD, while the other improves vertical resolution in an electronic zoom. We have also developed a single-chip LSI realizing these algorithms in real time, and a high-resolution surveillance camera with 520 TV lines. These algorithms build on three technologies: (1) chroma-adaptive horizontal aperture compensation, applied area by area; (2) edge-adaptive color separation and a new optical LPF with high MTF; and (3) motion-adaptive electronic zoom.
Spectral matching imager using correlation image sensor and variable-wavelength illumination
Akira Kimachi, Toshihide Imaizumi, Ai Kato, et al.
This paper proposes a spectrally selective imaging system called the 'spectral matching imager,' which consists of a variable-wavelength monochrome light source and a correlation image sensor. The light source illuminates the scene while sweeping its wavelength in time, expanding the spectral reflectance/transmittance function of objects along the time axis. At each pixel, the correlation image sensor produces the time-domain correlation between the expanded spectral function and a reference spectral function. Consequently, pixels that establish a good spectral match have large values in the output image. The spectral matching imager offers (1) high spectral resolution, (2) efficient data compression, and (3) tunability to arbitrary optical filter characteristics. Experimental results demonstrate successful detection of objects having detailed narrowband structures, such as glass pieces doped with rare-earth elements.
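The correlation principle described above can be sketched numerically: as the illumination sweeps wavelength over time, each pixel traces out its spectral function, and correlating that trace against a reference spectrum scores the match. The spectra below are invented for illustration, not from the paper.

```python
def correlation_score(observed, reference):
    """Zero-mean normalized correlation between two sampled spectra."""
    n = len(observed)
    mo = sum(observed) / n
    mr = sum(reference) / n
    num = sum((o - mo) * (r - mr) for o, r in zip(observed, reference))
    den = (sum((o - mo) ** 2 for o in observed)
           * sum((r - mr) ** 2 for r in reference)) ** 0.5
    return num / den if den else 0.0

reference = [0.1, 0.2, 0.9, 0.2, 0.1]       # narrowband reference spectrum
narrow    = [0.12, 0.18, 0.85, 0.22, 0.09]  # pixel on a matching object
flat      = [0.5, 0.5, 0.5, 0.5, 0.5]       # spectrally flat pixel

print(correlation_score(narrow, reference))  # close to 1
print(correlation_score(flat, reference))    # 0.0 (no spectral variation)
```

A pixel with the right narrowband structure scores near 1, while a spectrally flat pixel scores near zero, which is the selectivity the output image exploits.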
Wide-dynamic-range camera using a novel optical beam splitting system
Takayuki Yamashita, Masayuki Sugawara, Kohji Mitani, et al.
A wide-dynamic-range camera for high-picture-quality use is proposed. The camera is equipped with a novel optical beam-splitting system, which first divides the incident light into two beams of different intensity. The lower-intensity beam is captured by a single-chip color imager; the higher-intensity beam is further led to a tri-color prism and captured by three imagers. These functions are integrated into a one-piece optical block suited to the 2/3-inch optical format standard. An experimental HDTV camera has been developed, with the exposure ratio set to 9:1. The high-exposure image is taken by three 2M-pixel CCDs and the low-exposure image by a single-chip color 2M-pixel CCD with an on-chip stripe color filter. The results have shown the validity of the proposed method for obtaining wide-dynamic-range images with high picture quality.
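A minimal sketch of how a 9:1 exposure pair could be merged into one wide-dynamic-range value per pixel; the merge rule and full-scale value below are assumptions for illustration, since the abstract does not detail the combination step.

```python
FULL_SCALE = 1023   # assumed 10-bit ADC full scale
EXPOSURE_RATIO = 9  # high-exposure image receives 9x the light

def merge_pixel(high, low):
    """Use the high-exposure sample unless it is saturated; otherwise
    scale the low-exposure sample up by the exposure ratio so both
    branches report on the same radiometric scale."""
    if high < FULL_SCALE:
        return float(high)
    return float(low) * EXPOSURE_RATIO

print(merge_pixel(500, 55))    # mid-tone: high-exposure sample used
print(merge_pixel(1023, 400))  # highlight: scaled low-exposure sample
```

The merged value can exceed the single-capture full scale (here up to 9x), which is exactly the dynamic-range extension the split optics provide.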
Development of an onboard spectro-polarimeter for Earth observation at NAL
Various methods, techniques and sensors for Earth observation are being developed worldwide as the necessity of protecting the Earth's environment increases. In particular, polarimetric analysis of solar rays reflected from the Earth's surface is expected to play an important role in future Earth environment observation. A new type of spectro-polarimeter based on a liquid crystal tunable filter (LCTF) has been developed at NAL for such analysis. Efforts are now under way to put this sensor to practical use in airborne or satellite-based remote sensing of the Earth's environment by developing a sensor package and onboard observation system based around it. This paper first presents the operational principle and construction of the LCTF spectro-polarimeter, which captures images in the 400 - 720 nm wavelength band. Next, an outline of an onboard observation system incorporating the spectro-polarimeter is described and its applicability to airborne remote sensing discussed. The performance of the observation system is then shown based on experimental results. Other possible applications of the sensor are presented, and finally, the results of the evaluation of the observation system, e.g. hyper-spectral resolution of less than 10 nm, are summarized in the conclusion.
Low-noise signal detection technique in CMOS image sensors using frame oversampling and nondestructive high-speed readout
In this paper, we propose a low-noise signal detection technique using frame oversampling and a CMOS image sensor with a nondestructive high-speed readout mode. The technique enables the use of a high-gain column amplifier and the digital integration of signals without noise accumulation. The column amplifier is effective in reducing the wideband amplifier noise and the quantization noise. Least-squares estimation using the intermediate nondestructive outputs further reduces the noise level. Simulation results show that the input-referred noise can be reduced to a few electrons.
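The least-squares idea can be illustrated with a toy fit: sampling the pixel nondestructively several times during one integration and fitting a line to the accumulated signal estimates the photocurrent while averaging down the per-sample read noise. The numbers below are illustrative, not from the paper.

```python
def ls_slope(times, samples):
    """Ordinary least-squares slope of samples versus times."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(samples) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(times, samples))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# True photocurrent of 10 signal units per time step, with a fixed
# read-noise offset on each nondestructive sample.
times   = [1, 2, 3, 4, 5, 6, 7, 8]
noise   = [0.8, -1.1, 0.3, -0.5, 1.2, -0.9, 0.4, -0.2]
samples = [10 * t + e for t, e in zip(times, noise)]

print(round(ls_slope(times, samples), 2))  # close to the true slope of 10
```

Any single pair of samples would carry the full read noise of both reads; the fit over all eight samples suppresses it, which is the mechanism behind the few-electron input-referred noise claimed in the abstract.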
Experimental characterization and simulation of quantum efficiency and optical crosstalk of CMOS photodiode APS
Cecile Marques, Pierre Magnan
CMOS imagers, now considered a valuable alternative to CCDs in many application fields, still suffer from higher optical crosstalk and lower quantum efficiency. These parameters therefore need to be characterized and modeled. This paper describes photodiode test structures implemented in two different standard technologies from Alcatel Microelectronics: a 0.7-micrometer CMOS SLP/DLM process on a lightly doped substrate and a 0.5-micrometer CMOS process with an epitaxial layer on a heavily doped substrate. Both dedicated in-line 20-micrometer-square photodiodes and 20-micrometer-pitch APS photodiode pixels are implemented. Quantum efficiency and optical crosstalk were measured at several wavelengths. The spot-scan measurement setup, which uses a dedicated halogen optical source coupled to a thin-core optical fiber and a microscope objective, is described, and results illustrating charge collection and diffusion mechanisms are given. For each technology, analytical modeling and physical 2D device simulations (ISE-TCAD) have been performed to evaluate charge collection efficiency; these are taken into account in the comparison with experimental results for both quantum efficiency and optical crosstalk. In summary, the behavior of the two technology types is compared and perspectives related to process evolution are drawn.
On-chip binary image processing with CMOS image sensors
Canaan Sungkuk Hong, Richard I. Hornsey
In this paper, we demonstrate a CMOS active pixel sensor chip integrated with binary image processing on a single monolithic chip. A prototype chip comprising a 64 X 64 photodiode array with on-chip binary image processing is fabricated in standard 0.35 micrometer CMOS technology with a 3.3 V power supply. The binary image processing functionality is embedded in the column structure, where one processing element is placed per column, reducing processing time and power consumption. This column processing structure is scalable to higher resolutions. A 3 X 3 local mask (also called a structuring element) is implemented in every column so that row-parallel processing can be achieved with a conventional progressive scanning method.
CMOS megapixel digital camera with CameraLink interface
Martin Waeny, Peter Schwider
CMOS image sensors bring a number of advantages compared to CCD image sensors. Selective readout (ROI), logarithmic compression, and better high-speed performance are just some of the key assets of CMOS technology. In spite of these features, CMOS sensors are only rarely used in industrial vision applications. One of the reasons for this gap between potential and realized applications is the lack of industrial cameras with standard interfaces. This paper presents a digital camera with the CameraLink(TM) interface based on a megapixel CMOS image sensor. The CameraLink(TM) standard has the potential to put an end to company-specific interconnect solutions and to the limitations of the analog TV standard. The CameraLink(TM) standard is based on 7:1 serialization and an LVDS (Low Voltage Differential Signaling) transmission chip set.
Transversal direct readout CMOS APS with variable shutter mode
Shigehiro Miyatake, Masaru Miyamoto, Takashi Morimoto, et al.
A transversal direct readout (TDR) structure for CMOS active pixel sensors (APSs) eliminates vertically striped fixed-pattern noise. This novel architecture has evolved to incorporate a variable shutter mode as well as a simplified pixel structure. This paper describes a 320 X 240-pixel TDR APS that not only exhibits neither vertically nor horizontally striped fixed-pattern noise, but can also take pictures at selected exposure times. The pixel consists of a photodiode, a row-reset and a column-reset transistor, a source-follower input transistor, and a column-select transistor in place of the row-select transistor found in conventional CMOS APSs. The column-select transistor is connected to a signal line that runs horizontally instead of vertically. Unlike in its predecessor, the column-reset and column-select transistors are driven by the same pulse, so the pixel is simplified by reducing the number of bus lines to a level similar to that of conventional CMOS APSs.
Proton radiation damage in high-resistivity n-type silicon CCDs
A new type of p-channel CCD constructed on high-resistivity n-type silicon was exposed to 12 MeV protons at doses up to 1 X 10^11 protons/cm^2. The charge transfer efficiency was measured as a function of radiation dose and temperature. We previously reported that these CCDs are significantly more tolerant to radiation damage than conventional n-channel devices. In the work reported here, we used pocket-pumping techniques and charge transfer efficiency measurements to determine the identity and concentrations of radiation-induced traps present in the damaged devices.
Radiation events in astronomical CCD images
Alan R. Smith, Richard J. McDonald, Donna C. Hurley, et al.
The remarkable sensitivity of depleted silicon to ionizing radiation is a nuisance to astronomers. 'Cosmic rays' degrade images because of struck pixels, leading to modified observing strategies and the development of algorithms to remove the unwanted artifacts. In new-generation CCDs with thick sensitive regions, cosmic-ray muons make recognizable straight tracks, and there is enhanced sensitivity to ambient gamma radiation via Compton-scattered electrons ('worms'). Beta emitters inside the dewar, for example high-potassium glasses such as BK7, also produce worm-like tracks. The cosmic-ray muon rate is irreducible and increases with altitude. The gamma rays are mostly by-products of ^40K decay and the U and Th decay chains; these elements commonly appear as traces in concrete and other materials. The Compton recoil event rate can be reduced significantly by the choice of materials in the environment and dewar and by careful shielding. Telescope domes appear to have significantly lower rates than basement laboratories and Coudé spectrograph rooms. Radiation sources inside the dewar can be eliminated by judicious choice of materials. Cosmogenic activation during high-altitude flights does not appear to be a problem. Our conclusions are supported by tests at the Lawrence Berkeley National Laboratory low-level counting facilities in Berkeley and at Oroville, California (180 m underground).
Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments
Robin M. Dawson, Robert Andreas, James T. Andrews, et al.
New applications for ultraviolet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 X 1024, 18-um-pixel, split-frame-transfer device running at >150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated, and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies >55% at 193 nm, rising to 65% at 300 nm and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high-frame-rate CCDs and cameras will be described, and results will be presented demonstrating high UV sensitivity down to 150 nm.
Flight CCD detectors for the Advanced Camera for Surveys
Marco Sirianni, Mark Clampin, George F. Hartig, et al.
The Advanced Camera for Surveys (ACS) is a third-generation science instrument to be installed in the Hubble Space Telescope (HST) during servicing mission 3B, scheduled for late February 2002. The instrument has three cameras, each of which is optimized for a specific set of science goals. The first, the Wide Field Camera (WFC), is a high-throughput (43% at 700 nm, including the HST OTA), wide-field (200' X 204'), optical and I-band optimized camera. The second, the High Resolution Channel (HRC), has a 26' X 29' field of view, is optimized for the near-UV (a peak throughput of 24% at 500 nm), and is critically sampled at approximately 630 nm. The third camera, the Solar-Blind Camera, is a far-UV, photon-counting array with a relatively high throughput over a 26' X 29' field of view. Two of the three cameras employ CCD detectors: the WFC a mosaic of two SITe 2048 X 4096 pixel CCDs, and the HRC a 1024 X 1024 CCD based on the Space Telescope Imaging Spectrograph 21-micrometer-pixel CCD. In this paper we review the performance of the flight detectors selected for ACS.
Smart sensor for surface inspection: concepts and prototype description
Stephane Poujouly, Bernard A. Journet
The purpose of this paper is to present the concept of a smart laser range finder. A smart distance sensor should be able to adapt its parameters to the actual measurement case and to the different steps of the measurement process. The system chosen here is based on a phase-shift measurement method. The implemented solution for phase-shift measurement is the IF sampling method, i.e. an undersampling technique associated with digital synchronous detection. Its main advantage is a global simplification of the electronic system, leading to a quite simple implementation of the dual modulation frequency scheme required for high-resolution measurement over a wide range. Frequencies of 10 MHz and 240 MHz have been retained, and the system is designed with only one PLL, a digital one, reducing the phase noise. The emission and detection parts are designed for wideband operation and digital control, in order to adapt their characteristics to the measurement situation. The whole measurement sequence is described, including the different steps at both modulation frequencies and the calibration of the system.
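Digital synchronous detection, the general technique named above, can be sketched as an I/Q correlation against reference sinusoids: after undersampling aliases the modulation down to a low IF, the phase falls out of the arctangent of the two correlations. The sample count and phase value below are illustrative, not the paper's parameters.

```python
import math

def detect_phase(samples, samples_per_cycle):
    """I/Q synchronous detection: correlate the sampled IF signal with
    cosine and sine references over an integer number of cycles."""
    i = q = 0.0
    for n, s in enumerate(samples):
        ang = 2 * math.pi * n / samples_per_cycle
        i += s * math.cos(ang)
        q += s * math.sin(ang)
    return math.atan2(q, i)

# Synthesize one aliased IF cycle carrying a known 0.7 rad phase shift.
N = 64
true_phase = 0.7
sig = [math.cos(2 * math.pi * n / N - true_phase) for n in range(N)]

print(round(detect_phase(sig, N), 3))  # 0.7
```

In a range finder this recovered phase maps to distance via the modulation wavelength, which is why the dual 10/240 MHz scheme is needed to combine unambiguous range with fine resolution.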
Optical sensor for real-time weld defect detection
In this work we present an innovative optical sensor for on-line and non-intrusive welding process monitoring. It is based on spectroscopic analysis of the optical VIS emission of the welding plasma plume generated in the laser-metal interaction zone. The plasma electron temperature has been measured for the different chemical species composing the plume. The evolution of the temperature signal has been recorded and analyzed during several CO2-laser welding processes under variable operating conditions. We have developed software able to detect in real time a wide range of weld defects such as crater formation, lack of fusion, excessive penetration, and seam oxidation. The same spectroscopic approach has been applied to electric arc welding process monitoring. We assembled our optical sensor in a torch for manual Gas Tungsten Arc Welding procedures and tested the prototype on a manufacturing industry production line. In this case too, we found a clear correlation between the signal behavior and the quality of the welded joint.
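Plasma electron temperature is commonly extracted from emission spectra with the two-line Boltzmann ratio method; the sketch below uses that general method with invented line parameters and is not the paper's specific procedure (the abstract does not give one).

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def electron_temperature(i1, i2, line1, line2):
    """Temperature (K) from the intensity ratio of two emission lines of
    the same species. Each line is (wavelength_nm, E_upper_eV, g, A)."""
    l1, e1, g1, a1 = line1
    l2, e2, g2, a2 = line2
    ratio = (i1 * g2 * a2 * l1) / (i2 * g1 * a1 * l2)
    return (e2 - e1) / (K_B_EV * math.log(ratio))

def line_intensity(line, t_kelvin):
    """Relative line intensity under the same Boltzmann model."""
    l, e, g, a = line
    return g * a / l * math.exp(-e / (K_B_EV * t_kelvin))

# Round trip: synthesize two line intensities at 8000 K, then recover T.
line_a = (500.0, 3.0, 3, 2e7)  # hypothetical (wavelength, E, g, A)
line_b = (450.0, 5.0, 5, 5e7)
t_true = 8000.0
i_a = line_intensity(line_a, t_true)
i_b = line_intensity(line_b, t_true)

print(round(electron_temperature(i_a, i_b, line_a, line_b)))  # 8000
```

Monitoring this temperature in real time is what lets excursions be flagged as defect signatures.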
Straightness measurement of a moving table using laser beams and quadrant PSDs
Koji Tenjimbayashi
The straightness of a moving table comprises three rotation-angle errors and two lateral-displacement errors. It is well known that the two lateral-displacement errors can be measured using one corner-cube mirror. We have already shown theoretically that the three rotation-angle errors can be measured by two pairs of parallel mirrors. This paper discusses the rotation-angle error measurement experiment and shows that the method is useful.
Usage of DSC meta tags in a general automatic image enhancement system
Stefan Moser, Michael Schroeder
In this contribution we show how the meta information included in almost every digital still camera (DSC) image can be utilized for image classification and automatic image enhancement in a photofinishing system. All DSC manufacturers have realized the importance of meta tag information, and almost all camera models support meta tags. However, nowhere in the literature have we found an application of image tags for building an automatic image enhancement system. Here, we show a way to use the tagged information for this purpose. The additional information about the capture conditions allows us to classify each image into previously learned classes and to apply more appropriate processing in the form of optimized image enhancement algorithms. We show how such an image enhancement system could work and also explain how to use the obtained image classes to constrain the plausibility of scene-type classification. Finally, we present experimental results on the general performance of the image enhancement system.
New approach to auto-white-balancing and auto-exposure for digital still cameras
Nasser Kehtarnavaz, Hyuk-Joon Oh, I. Shidate, et al.
This paper presents an auto-white-balancing algorithm named 'scoring.' The spectral distributions of the Macbeth reference colors, together with the spectral distributions of light sources of various color temperatures, are used to obtain a number of reference color points in the CbCr color space. A number of representative color points are also obtained from a captured image by using a previously developed multi-scale clustering algorithm. A match is then established between the set of reference colors and the set of representative colors. The matching scheme yields the most likely light source under which the image was taken. Furthermore, this paper presents an auto-exposure algorithm using a mapping from the luminance histograms of five subareas of the image to an exposure value. A neural network is designed to perform the mapping. The histogram in each subarea is used to determine the mean, variance, minimum, and maximum luminance for that subarea. The same spatial information is computed for previous frames to incorporate temporal changes in luminance into the network.
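A highly simplified stand-in for the matching step: score each candidate illuminant by how close its reference color points lie to the image's representative colors in CbCr, and pick the best-scoring one. The reference points below are invented; the paper's actual scoring scheme is more elaborate.

```python
def score(references, representatives):
    """Sum of squared CbCr distances from each representative color to
    its nearest reference point; lower means a better match."""
    total = 0.0
    for cb, cr in representatives:
        total += min((cb - rb) ** 2 + (cr - rr) ** 2
                     for rb, rr in references)
    return total

# Hypothetical reference points per candidate light source.
illuminants = {
    "daylight":     [(-10, 5), (0, 0), (12, -8)],
    "incandescent": [(-30, 25), (-15, 18), (5, 10)],
}
# Representative colors clustered from a captured image (invented).
image_colors = [(-9, 6), (1, -1), (11, -7)]

best = min(illuminants, key=lambda k: score(illuminants[k], image_colors))
print(best)  # daylight
```

The winning illuminant then fixes the white-balance gains applied to the image.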
MPEG streaming over mobile Internet
Myungjin Lee, Kyounghee Lee, Truong Cong Thang, et al.
MPEG streaming over the Mobile Internet suffers from degradation of MPEG video quality. When a handoff of a mobile node (MN) occurs, it is quite difficult to guarantee seamless video quality due to the change of the routing path towards the MN. In this paper, we propose a new scheme, MPEG streaming over Concatenation and Optimization for Reservation Path (CORP), to guarantee the QoS of MPEG streaming service in the Mobile Internet. When a handoff of an MN occurs, CORP extends the existing reservation path, established using RSVP between a server and the MN, to the new Base Station (BS) to which the MN is currently connected, instead of making a new RSVP session between the server and the MN. To demonstrate the practicality of the proposed scheme, we built a prototype system which provides MPEG-1 video and audio on demand over the Mobile Internet using Mobile IP and IEEE 802.11b wireless LAN. Our experiments show that the proposed scheme significantly improves the peak signal-to-noise ratio (PSNR) of streamed MPEG video. We also analyzed the video quality of our scheme with respect to the TCP and UDP transport protocols.
Optimal scheduling of capture times in a multiple-capture imaging system
Ting Chen, Abbas El Gamal
Several papers have discussed the idea of extending image sensor dynamic range by capturing several images during a normal exposure time. Most of these papers assume that the images are captured according to a uniform or an exponentially increasing exposure time schedule. Even though such schedules can be justified by certain implementation considerations, there has not been any systematic study of how capture time schedules should be optimally determined. In this paper we formulate the multiple-capture time scheduling problem, when the incident illumination probability density function (pdf) is completely known, as a constrained optimization problem. We aim to find the capture times that maximize the average signal SNR. The formulation leads to a general upper bound on the achievable average SNR using multiple capture for any given illumination pdf. For a uniform pdf, the average SNR is a concave function of the capture times, and therefore well-known convex optimization techniques can be applied to find the global optimum. For a general piecewise-uniform pdf, the average SNR is not necessarily concave. The cost function, however, is a Difference of Convex (D.C.) function, and well-established D.C. or global optimization techniques can be used.
Very fast algorithm for the JPEG compression factor control
Arcangelo Bruna, Massimo Mancuso, Alessandro Capra, et al.
In this paper we propose a new algorithm for compression factor control when the JPEG standard is used. It can be applied, for example, when the memory available to store the image is fixed, as in a digital still camera, or when a limited-bandwidth channel is used to transmit the image. The JPEG standard is the de facto image compression algorithm used by all such devices due to its good trade-off between compression ratio and quality, but it does not ensure a fixed stream size, because of the run-length/variable-length encoding, so a compression factor control algorithm is required. The proposed algorithm achieves very good rate control faster than known algorithms and with lower power consumption, making it suitable for portable devices.
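For contrast, the generic iterative controller that the fixed-stream-size requirement usually implies can be sketched as a bisection on the quantizer scale. This is not the paper's fast algorithm (the abstract does not detail it), and encoded_size() is a stand-in monotone model of stream size where a real controller would invoke the JPEG encoder.

```python
def encoded_size(scale):
    """Mock model: a larger quantizer scale means coarser quantization
    and therefore fewer bits after entropy coding."""
    return int(200_000 / scale)

def fit_to_budget(budget, lo=1.0, hi=64.0, iters=20):
    """Smallest quantizer scale (finest quality) whose stream fits the
    byte budget, found by bisection; hi always remains a scale that fits."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if encoded_size(mid) <= budget:
            hi = mid  # fits: try finer quantization
        else:
            lo = mid  # too big: quantize more coarsely
    return hi

scale = fit_to_budget(budget=50_000)
print(encoded_size(scale) <= 50_000)  # True
```

Each bisection step costs a full (mock) encode, which is exactly the expense a single-pass rate-control algorithm aims to avoid.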
Photocurrent estimation for a self-reset CMOS image sensor
Xinqiao Liu, Abbas El Gamal
CMOS image sensors are capable of very high frame rate non-destructive readout. This capability and the potential of integrating memory and signal processing with the sensor on the same chip enable the implementation of many still and video imaging applications. An important example is dynamic range extension, where several images are captured during a normal exposure time - shorter exposure time images capture the brighter areas of the scene while longer exposure time images capture the darker areas of the scene. These images are then combined to form a high dynamic range image. Dynamic range is extended at the high end by detecting saturation, and at the low end using linear estimation algorithms that reduce read noise. With the need to reduce pixel size and integrate more functionality with the sensor, CMOS image sensors need to follow the CMOS technology scaling trend. Well capacity, however, decreases with technology scaling as pixel size and supply voltages are reduced. As a result, SNR decreases, potentially to the point where even peak SNR is inadequate. In this paper, we propose a self-reset pixel architecture, which when combined with multiple non-destructive captures can increase peak SNR as well as enhance dynamic range. Under high illumination, self-resetting 'recycles' the well during integration, resulting in higher effective well capacity and thus higher SNR. A recursive photocurrent estimation algorithm that takes into consideration the additional noise due to self-resetting is described. Simulation results demonstrate the SNR increase throughout the enhanced photocurrent range, with a 10 dB increase in peak SNR using 32 captures.
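The core of photocurrent estimation under self-reset can be sketched as follows. This is a simplification of the paper's recursive estimator: it unwraps the charge signal whenever a reset is detected and keeps a running mean of the per-read slope estimates, but omits the noise-aware weighting the paper describes.

```python
def estimate_photocurrent(samples, dt, q_well):
    """Photocurrent estimate from non-destructive reads of a self-reset
    pixel: whenever the read-out charge drops, a self-reset occurred,
    so add one well capacity to 'unwrap' the signal, then update a
    recursive running mean of the instantaneous slope estimates."""
    resets = 0
    prev = samples[0]
    est = None
    for k, s in enumerate(samples[1:], start=1):
        if s < prev:                 # pixel self-reset between reads
            resets += 1
        prev = s
        unwrapped = s + resets * q_well
        inst = unwrapped / (k * dt)  # slope since integration start
        est = inst if est is None else est + (inst - est) / k
    return est

# Noiseless example: 5000 e-/s into a 100 e- well, read every 10 ms,
# so the pixel self-resets on every other read.
samples = [(50 * k) % 100 for k in range(8)]
i_est = estimate_photocurrent(samples, 0.01, 100.0)
```

In the noiseless case the estimate is exact; the paper's contribution is handling the extra noise each self-reset injects.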
Figure-of-merit for CMOS imagers
Chi-Shao Lin, Frank Mau-Chung Chang, Bimal P. Mathur, et al.
Imagers designed for different system applications may comply with different requirements. Performance can be optimized only with an appropriate metric. A CMOS imager 'figure-of-merit (FOM)' for such a purpose is presented in this paper. It evaluates a CMOS imager in three performance categories: spatial resolution, sensing speed, and dynamic range. The sensing speed performance index (SP) is obtained by analyzing the imager signal-to-noise ratio versus exposure level. A modulation transfer function normalized to a 35 mm film standard assuming an identical field-of-view is introduced as the spatial resolution performance index (MTF35e_Avg), regardless of image sensor size. The dynamic range performance index, DR, is defined as the ratio of the maximum signal level the imager can capture to its noise level. The tradeoffs among SP, MTF35e_Avg, and DR of a CMOS imager are quantitatively analyzed, and its FOM is found to be FOM = DR × SP × (MTF35e_Avg)². This paper also gives examples of optimized imager design using the proposed FOM.
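The figure-of-merit itself is a one-line formula; the example values below are hypothetical designs, not data from the paper.

```python
def cmos_imager_fom(dr, sp, mtf35e_avg):
    """FOM = DR * SP * (MTF35e_Avg)^2: the MTF term is squared because
    spatial resolution acts in both image dimensions."""
    return dr * sp * mtf35e_avg ** 2

# Comparing two hypothetical designs: a wider dynamic range can lose
# to a sharper sensor once the squared MTF term is accounted for.
fom_a = cmos_imager_fom(1000.0, 2.0, 0.5)   # DR-oriented design
fom_b = cmos_imager_fom(500.0, 2.0, 0.8)    # resolution-oriented design
```

The quadratic weighting of MTF35e_Avg is what makes resolution trade off so steeply against the other two indices.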
High-dynamic-range imaging for digital still camera
The paper presents a collection of methods and algorithms able to deal with the high dynamic range of real scenes acquired by digital engines (e.g. CCD/CMOS cameras). Accurate image acquisition can be difficult under challenging lighting conditions. A few techniques are reported that overcome the usual 8-bit-depth representation by using differently exposed pictures and recovering the original radiance values. This allows capturing both lowlight and highlight details, fusing the various pictures into a single map, thus providing a more faithful description of the real-world scene. However, in order to be viewed on a common computer monitor, the map needs to be re-quantized while preserving visibility of details. The main problem comes from the fact that the contrast of the radiance values is usually far greater than that of the display device. Various related techniques are reviewed and discussed.
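The fuse-then-requantize pipeline can be sketched for a single pixel. The hat-shaped weight and the logarithmic tone curve are assumed, representative choices from this family of techniques, not the specific methods surveyed in the paper.

```python
import math

def recover_radiance(values, times, sat=255):
    """Fuse differently exposed 8-bit values of one pixel: each value
    divided by its exposure time estimates the scene radiance, and a
    hat-shaped weight (an assumed choice) trusts mid-range codes most
    while discounting near-dark and near-saturated ones."""
    def w(v):
        return min(v, sat - v) + 1
    num = sum(w(v) * (v / t) for v, t in zip(values, times))
    den = sum(w(v) for v in values)
    return num / den

def tone_map(radiance, max_radiance):
    """Logarithmic re-quantization back to 8 bits: compresses the
    radiance contrast to the display range while preserving detail."""
    return round(255 * math.log1p(radiance) / math.log1p(max_radiance))

# Three exposures of one pixel; the longest exposure is saturated, so
# the radiance estimate leans on the two shorter ones.
r = recover_radiance([16, 64, 255], [0.01, 0.04, 0.25])
code = tone_map(r, 1600.0)
```

The weighting step is what lets the saturated long exposure contribute almost nothing, while the tone-mapping step addresses the display-contrast mismatch the abstract describes.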
Geometrical noise bandwidth: a new tool to characterize the resolving power of analogue and digital imaging devices
The electrical noise bandwidth quantifies the transfer of noise in electrical systems. Geometrically, the noise can be interpreted as a circle of confusion. Adapting the electrical noise bandwidth to geometrical resolution problems demands a generalization of frequencies in two dimensions. The geometrical noise bandwidth is calculated from the product of all the MTFs that yield the final picture; here, the optics, the pixel shape and size, the color interpolation, and optical low-pass filters can be considered. The geometrical noise bandwidth is measured in mm⁻². For films, it can be calculated by integration. For digital imaging devices, the integration limit of spatial frequencies is the Nyquist frequency. The circle of confusion is inversely proportional to the square root of the geometrical noise bandwidth. The results of different imaging devices (photographic film, classical monochrome and color matrices, Fuji Super CCD, several digital backs, scanners) are compared using test pictures of the Siemens star resolution target. Beyond this, the geometrical noise bandwidth permits comparison of the information content of classical films and digital matrices, providing evidence on whether digital imaging devices outperform photographic films.
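The integration the abstract describes can be sketched numerically. The separable squared-MTF integral and the sinc pixel-aperture MTF below are assumptions made for illustration; the paper's exact definition may differ in normalization.

```python
import math

def gnb(mtf, f_nyquist, n=200):
    """Sketch of a 2-D geometrical noise bandwidth: integrate the
    squared system MTF up to the Nyquist frequency (a separable MTF
    over +/- frequencies is assumed), giving units of mm^-2."""
    df = f_nyquist / n
    one_d = sum(mtf((k + 0.5) * df) ** 2 for k in range(n)) * df
    return (2 * one_d) ** 2    # both signs, separable in x and y

def circle_of_confusion(bandwidth, k=1.0):
    """The circle of confusion is inversely proportional to the square
    root of the geometrical noise bandwidth."""
    return k / math.sqrt(bandwidth)

# Example: pixel-aperture sinc MTF for an assumed 5 micrometer pitch.
PITCH_MM = 0.005

def pixel_mtf(f):
    x = math.pi * f * PITCH_MM
    return abs(math.sin(x) / x) if x else 1.0

B = gnb(pixel_mtf, f_nyquist=1.0 / (2.0 * PITCH_MM))
```

Quadrupling the bandwidth halves the circle of confusion, which is the inverse-square-root relation the abstract states.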
Noise reduction techniques for Bayer-matrix images
In this paper, arrangements for applying noise reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data available from the image sensor is first interpolated using a color filter array interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details when NR filtering is performed before CFAI is also proposed.
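A minimal pre-CFAI filter illustrates the key constraint of filtering raw Bayer data: only same-color neighbors may be combined. This plain median over the co-colored 3x3 neighborhood is a generic sketch, not one of the specific filters compared in the paper.

```python
def median(values):
    return sorted(values)[len(values) // 2]

def denoise_bayer(raw, w, h):
    """Pre-CFAI noise reduction sketch: median-filter each Bayer color
    plane separately by combining only same-color neighbors, which sit
    two pixels apart in x and y, so colors are never mixed."""
    out = list(raw)
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            neigh = [raw[(y + dy) * w + (x + dx)]
                     for dy in (-2, 0, 2) for dx in (-2, 0, 2)]
            out[y * w + x] = median(neigh)
    return out

# Example: a hot pixel on a flat 6x6 Bayer frame is removed.
raw = [10] * 36
raw[2 * 6 + 3] = 200
clean = denoise_bayer(raw, 6, 6)
```

The two-pixel stride is also why pre-CFAI filtering needs less memory than post-CFAI filtering of three full-resolution color planes, one of the trade-offs the paper quantifies.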
Correlation-based color mosaic interpolation using a connectionist approach
This paper presents a specialized extension of a general correlation-based interpolation paradigm for interpolating image sample color values obtained through a color filter mosaic. This extension features a kernel determined from a priori assumed image characteristics in the form of predefined (as opposed to learned) local sample neighborhood patterns. The interpolation procedure locally convolves the color-filtered image samples with the kernel to obtain the interpolated color values. The kernel establishes a mapping from the color-filtered input values to the recovered color output values using weighted, ordered, and thresholded sums of sample values from the local sample neighborhood. This mapping attempts to exploit local image sample interdependencies in order to preserve detail, while minimizing artifacts. The procedure is simulated for the Bayer RGB color filter mosaic using a quasi-linear connectionist architecture that is real-time-hardware-implementable. A perceptual comparison of images obtained from this interpolation with images obtained from bilinear interpolation shows a visible reduction in interpolation artifacts.
High-resolution dyed color-filter-material for use in digital photography applications: cyan, magenta, and yellow color photoresists
Gu Xu, Jonathan W. Mayo, Curtis Planje, et al.
In this study, we have developed a new set of cyan, magenta, and yellow (CMY) dyed color filter materials to meet the needs of digital photography applications. These new color filter materials consist of a dye, a photosensitive polymer binder, photo initiators, and acrylic monomers, in addition to safe solvents such as propylene glycol methyl ether (PGME), which allow deposition of thin film layers by standard spin-on coating techniques. The CMY materials share many desirable properties with standard photoresists, e.g., excellent coating quality, thin film uniformity, and good adhesion to semiconductor substrates. They work as negative resists and are sensitive to i-line UV light with a photo speed of 300 mJ/cm² and below. We have shown, for example, that a 1 micrometer film exposed and developed will exhibit high-resolution feature sizes of 3 micrometer pixels and below. These CMY materials have excellent thermal and light stability and good color characteristics.
Gain fixed-pattern-noise correction via optical flow
SukHwan Lim, Abbas El Gamal
Fixed pattern noise (FPN) or nonuniformity caused by device and interconnect parameter variations across an image sensor is a major source of image quality degradation especially in CMOS image sensors. In a CMOS image sensor, pixels are read out through different chains of amplifiers each with different gain and offset. Whereas offset variations can be significantly reduced using correlated double sampling (CDS), no widely used method exists for reducing gain FPN. In this paper, we propose to use a video sequence and its optical flow to estimate gain FPN for each pixel. This scheme can be used in a digital video or still camera by taking any video sequence with motion prior to capture and using it to estimate gain FPN. Our method assumes that brightness along the motion trajectory is constant over time. The pixels are grouped in blocks and each block's pixel gains are estimated by iteratively minimizing the sum of the squared brightness variations along the motion trajectories. We tested this method on synthetically generated sequences with gain FPN and obtained results that demonstrate significant reduction in gain FPN with modest computations.
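The iterative minimization the abstract describes can be sketched in one dimension. The sketch below substitutes a known integer shift per frame for the estimated optical flow and alternates between re-estimating the scene brightness along trajectories and refitting per-pixel gains; the block grouping and the flow estimation of the actual method are omitted.

```python
def estimate_gain_fpn(frames, shifts, n_iter=20):
    """Alternating-minimization sketch: with a known integer shift per
    frame standing in for the optical flow, brightness constancy gives
    frame[f][p] = gain[p] * scene[p - shift[f]]. Alternately
    re-estimate the scene and the per-pixel gains to drive down the
    squared brightness variation along trajectories."""
    n = len(frames[0])
    gain = [1.0] * n
    for _ in range(n_iter):
        # scene estimate: average the gain-corrected observations
        acc, cnt = [0.0] * n, [0] * n
        for f, s in zip(frames, shifts):
            for p in range(n):
                src = p - s
                if 0 <= src < n:
                    acc[src] += f[p] / gain[p]
                    cnt[src] += 1
        scene = [a / c if c else 0.0 for a, c in zip(acc, cnt)]
        # gain estimate: least-squares fit of observations vs. scene
        for p in range(n):
            num = den = 0.0
            for f, s in zip(frames, shifts):
                src = p - s
                if 0 <= src < n and scene[src] > 0.0:
                    num += f[p] * scene[src]
                    den += scene[src] ** 2
            if den:
                gain[p] = num / den
        m = sum(gain) / n           # remove the global scale ambiguity
        gain = [g / m for g in gain]
    return gain

# Synthetic 1-D check: recover known gains up to the fixed mean scale.
TRUE_GAIN = [1.0, 1.1, 0.9, 1.05]
SCENE = [10.0, 20.0, 30.0, 40.0]
SHIFTS = [0, 1, -1]
FRAMES = [[TRUE_GAIN[p] * SCENE[p - s] if 0 <= p - s < 4 else 0.0
           for p in range(4)] for s in SHIFTS]
EST = estimate_gain_fpn(FRAMES, SHIFTS)
```

Because gain can only be recovered up to a global scale, the sketch pins the mean gain to one, a normalization any such estimator must choose.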
Lux transfer: CMOS versus CCD
This paper compares the performance of competing CCD and CMOS imaging sensors, including backside-illuminated devices. Comparisons are made through a new performance transfer curve that shows at a glance performance deficiencies for any given pixel architecture analyzed or characterized. Called Lux Transfer, the curve plots signal-to-noise ratio as a function of absolute light intensity for a family of exposure times over the sensor's dynamic range (i.e., from read noise to full well). Critical performance parameters on which the curve is based are reviewed and analytically described (e.g., QE, pixel nonuniformity, full well, dark current, read noise, MTF, etc.). Besides S/N, many by-products come from lux transfer, including dynamic range, responsivity (e-/lux-sec), charge capacity, linearity, and ISO rating. Experimental data generated by a 4 micrometer 3T-pixel DVGA and a 5.6 micrometer 3T-pixel DXGA CMOS sensor are presented to demonstrate the use of lux transfer.
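One point of a lux-transfer curve follows from a standard photon-transfer noise model. The parameter values below are illustrative assumptions, not the paper's measured data; sweeping `lux` for a family of `t_exp` values traces the curve family the paper plots.

```python
import math

def lux_transfer_snr(lux, t_exp, resp=5000.0, full_well=20000.0,
                     dark=50.0, read_noise=15.0, prnu=0.01):
    """One point on a lux-transfer curve: S/N versus absolute light
    level for a given exposure time. Signal in electrons saturates at
    full well; shot noise, dark-current shot noise, read noise, and
    pixel non-uniformity (PRNU) add in quadrature. All parameter
    values are assumed for illustration."""
    s = min(resp * lux * t_exp, full_well)   # signal, e-
    d = dark * t_exp                          # dark charge, e-
    noise = math.sqrt(s + d + read_noise ** 2 + (prnu * s) ** 2)
    return s / noise
```

The model also shows why pixel non-uniformity caps the curve: at full well the PRNU term dominates, so peak S/N cannot exceed roughly 1/PRNU.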
Active-area shape influence on the dark current of CMOS imagers
Igor Shcherback, Alexander A. Belenky, Orly Yadid-Pecht
This work presents an empirical dark current model for CMOS active pixel sensors (APS). The model is based on experimental data taken from a 256 X 256 APS chip fabricated via HP in a standard 0.5 micrometer CMOS technology process. This quantitative model determines the pixel dark current dependence on two contributing factors: the 'ideal' dark current determined by the photodiode junction, introduced here as a stable shot-noise influence of the device active area, and a leakage current due to the shape of the device active area, i.e., the number of corners present in the photodiode and their angles. The latter is introduced as a process-induced structural stress effect.
Temperature dependence of dark current in a CCD
Ralf Widenhorn, Morley M. Blouke, Alexander Weber, et al.
We present data for the dark current of a back-illuminated CCD over the temperature range of 222 to 291 K. Using an Arrhenius law, we found that the analysis of the data leads to a relation between the prefactor and the apparent activation energy as described by the Meyer-Neldel rule. However, a more detailed analysis shows that the activation energy of the dark current changes over the temperature range investigated. This transition can be explained by the diffusion dark current dominating at high temperatures and the depletion dark current dominating at low temperatures. The diffusion dark current, characterized by the band gap of silicon, is uniform for all pixels. At low temperatures the depletion dark current, characterized by half the band gap, prevails, but it varies from pixel to pixel. Dark current spikes are pronounced at low temperatures and can be explained by large concentrations of deep-level impurities in those particular pixels. We show that fitting the data with the impurity concentration as the only variable can explain the dark current characteristics of all the pixels on the chip.
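The two-component Arrhenius picture can be written down directly. The prefactor constants and the Meyer-Neldel energy below are illustrative assumptions, not the paper's fitted values; only the structure (full band gap for diffusion, half the gap for depletion, prefactor tied to activation energy) follows the text.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def dark_current(T, d00, e_mn, delta_e):
    """Arrhenius law with a Meyer-Neldel prefactor: the prefactor
    grows exponentially with the activation energy,
    D0 = D00 * exp(dE / E_MN), so D(T) = D0 * exp(-dE / kT)."""
    d0 = d00 * math.exp(delta_e / e_mn)
    return d0 * math.exp(-delta_e / (K_B * T))

def total_dark_current(T, e_gap=1.12, e_mn=0.025, d00=1.0):
    """Two-component picture from the paper: diffusion current
    activated by the full silicon band gap plus depletion current
    activated by half the gap (parameter values are assumed)."""
    return (dark_current(T, d00, e_mn, e_gap) +
            dark_current(T, d00, e_mn, e_gap / 2.0))
```

With a Meyer-Neldel energy near kT at room temperature, the model reproduces the observed crossover: the half-gap depletion term wins at 222 K and the full-gap diffusion term wins at 291 K.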
Front-illuminated full-frame charge-coupled-device image sensor achieves 85% peak quantum efficiency
Antonio S. Ciccarelli, William V. Davis, William Des Jardin, et al.
A high sensitivity front-illuminated charge-coupled device (CCD) technology has been developed by combining the transparent gate technology introduced by Kodak in 1999 with the microlens technology usually employed on interline CCDs. In this new architecture, the microlens is used to focus the incoming light onto the more transparent of the two electrodes. The new sensors offer significant increases in quantum efficiency while maintaining the performance advantages of front-illuminated full-frame CCDs, including 3 pA/cm² typical dark current at 25 °C and 55 ke full well in a 6.8 micrometer pixel.