Proceedings Volume 5678

Digital Photography

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 February 2005
Contents: 7 Sessions, 24 Papers, 0 Presentations
Conference: Electronic Imaging 2005
Volume Number: 5678

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Image Sensors/Camera Design
  • Sensor and Camera Characterization
  • Color Processing
  • Sensor Design and Applications
  • Demosaicking/In-Camera Processing
  • Image Processing/Compression I
  • Image Processing/Compression II
Image Sensors/Camera Design
Roadmap for CMOS image sensors: Moore meets Planck and Sommerfeld
The steady increase in CMOS imager pixel count is built on the technology advances summarized as Moore's law. Because imagers must interact with light, the impact of Moore's law on imagers differs from its impact on other integrated circuit applications. In this paper, we investigate how the trend towards smaller pixels interacts with two fundamental properties of light: photon noise and diffraction. Using simulations, we examine three consequences of decreasing pixel size on image quality. First, we quantify the likelihood that photon noise will become visible and derive a noise-visibility contour map based on photometric exposure and pixel size. Second, we illustrate the consequences of diffraction and optical imperfections on image quality and analyze the implications of decreasing pixel size for aliasing in monochrome and color sensors. Third, we calculate how decreasing pixel size impacts the effective use of microlens arrays and derive curves for the concentration and redirection of light within the pixel.
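A rough sketch of the photon-noise half of this tradeoff (a minimal illustration, not the authors' simulator; the photon density per square micron is a hypothetical stand-in): under a shot-noise-limited model, the mean photon count N scales with pixel area, so SNR = sqrt(N) drops as pixel pitch shrinks.

```python
import numpy as np

# Hypothetical photon density at a fixed photometric exposure; the real
# value depends on scene luminance, exposure time, and quantum efficiency.
photons_per_um2 = 100.0

for pitch_um in (5.0, 3.3, 2.2, 1.4):
    n = photons_per_um2 * pitch_um ** 2    # mean photons collected per pixel
    snr_db = 20 * np.log10(np.sqrt(n))     # shot-noise-limited SNR
    print(f"{pitch_um:.1f} um pixel: N = {n:7.0f} photons, SNR = {snr_db:4.1f} dB")
```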
Very-large-area imagers for professional DSC applications
Bart Dillen, Cees Draijer, Louis Meessen, et al.
To meet the continuous demand for more resolution in professional digital imaging, a 22M-pixel, 645-film-format full-frame CCD image sensor was developed as an upgrade of an existing 11M-pixel 35 mm CCD. This paper presents the device requirements, architecture, modes of operation, and evaluation results of the performance improvements.
1/f noise measurement in CMOS image sensors
Boyd Fowler, Steve Mims, Brett Frymire
This paper describes an in-situ pixel source follower power spectral density (PSD) measurement method that does not require any specialized test equipment. This method requires a dual port CMOS image sensor with analog outputs that allow differential time series noise measurements. We describe the sensor circuits and measurement techniques used for collecting data. We derive an estimator for the PSD based on the measured data. We also present a technique for estimating the confidence interval of the PSD based on Bootstrap re-sampling. Using our estimate of the PSD, we derive estimators for the SPICE NLEV3 1/f noise model parameters AF and KF. We also determine confidence intervals for these estimators. Using this method we present the estimated source follower PSD for a CMOS image sensor fabricated in a 0.18 μm CMOS process with 3.3 μm × 3.3 μm pixels. We also present the estimated values of AF and KF based on the estimated PSD.
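The two statistical steps can be sketched compactly (assumptions: synthetic Gaussian records stand in for the measured differential noise samples, and a plain periodogram stands in for the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e4                                     # assumed sampling rate, Hz
records = rng.standard_normal((200, 1024))   # stand-in for measured noise records

def psd(record, fs):
    """One-sided periodogram of a single record."""
    return np.abs(np.fft.rfft(record)) ** 2 / (fs * record.size)

psds = np.array([psd(r, fs) for r in records])
psd_mean = psds.mean(axis=0)

# Bootstrap: re-sample the set of records with replacement and recompute the
# mean PSD to obtain a 95% confidence band per frequency bin.
boot = np.array([psds[rng.integers(0, len(psds), len(psds))].mean(axis=0)
                 for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```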
An LOD with improved breakdown voltage in full-frame CCD devices
Edmund K. Banghart, Eric G. Stevens, Hung Q. Doan, et al.
In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.
Integrating lens design with digital camera simulation
We describe a method for integrating information from lens design into image system simulation tools. By coordinating these tools, image system designers can visualize the consequences of altering lens parameters. We describe the critical computational issues we addressed in converting lens design calculations into a format that could be used to model image information as it flows through the imaging pipeline from capture to display. The lens design software calculates information about relative illumination, geometrical distortion, and the wavelength and field height dependent optical point spread functions (PSF). These data are read by the image systems simulation tool, and they are used to transform the multispectral input radiance into a multispectral irradiance image at the sensor. Because the optical characteristics of lenses frequently vary significantly across the image field, the process is not shift-invariant. Hence, the method is computationally intense and includes a number of parameters and methods designed to reduce artifacts that can arise in shift-variant filtering. The predicted sensor irradiance image includes the effects of geometric distortion, relative illumination, vignetting, pupil aberrations, as well as the blurring effects of monochromatic and chromatic aberrations, and diffraction.
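The shift-variant filtering step can be approximated by filtering once per tabulated field height and blending the results per pixel. The sketch below is a hedged stand-in for that idea only: the Gaussian PSFs, field heights, and input image are all assumptions, not outputs of any lens design tool.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_psf(sigma, size=9):
    """Stand-in PSF; a real one would come from the lens design software."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

img = np.random.rand(256, 256)                     # stand-in radiance image
heights = np.array([0.0, 0.5, 1.0])                # normalized field heights
psfs = [gaussian_psf(s) for s in (0.8, 1.2, 2.0)]  # assumed off-axis PSF growth

yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(yy - 128, xx - 128) / np.hypot(128, 128)  # field height per pixel

blurred = [convolve(img, p, mode='nearest') for p in psfs]
out = np.zeros_like(img)
for i in range(len(heights) - 1):
    w = np.clip((r - heights[i]) / (heights[i + 1] - heights[i]), 0, 1)
    mask = (r >= heights[i]) & (r <= heights[i + 1])
    blend = (1 - w) * blurred[i] + w * blurred[i + 1]
    out[mask] = blend[mask]            # per-pixel blend of the two nearest PSFs
```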
Sensor and Camera Characterization
First principles' imaging performance evaluation of CCD- and CMOS-based digital camera systems
Brian G. Rodricks, Kartik Venkataraman
The new generation of Digital Still Cameras (DSCs) provide a capability of capturing raw data that makes it possible to measure the fundamental metrics of the camera. Although CCDs are used in a majority of DSCs, the number of cameras with CMOS-based sensors is increasing. Using first principles, the performance of comparable CCD- and CMOS-based DSCs is measured. The performance metrics measured are electronic noise, signal-to-noise ratio, linearity, dynamic range, resolution, and sensitivity. The dark noise and dark current are measured as a function of exposure time and ISO speed. The signal response and signal-to-noise response are measured as a function of intensity and ISO speed. The resolution is measured in terms of the Modulation Transfer Function (MTF) using both raw and rendered data. The spectral sensitivity is measured in terms of camera constants at several wavelengths. Subjective image quality is also measured using scenes that exhibit limiting performance. The ISO speed performance is compared against a film camera.
Psychophysical thresholds and digital camera sensitivity: the thousand-photon limit
In many imaging applications, there is a tradeoff between sensor spatial resolution and dynamic range. Increasing sampling density by reducing pixel size decreases the number of photons each pixel can capture before saturation. Hence, imagers with small pixels operate at levels where photon noise limits image quality. To understand the impact of these noise sources on image quality, we conducted a series of psychophysical experiments. The data revealed two general principles. First, the luminance amplitude of the noise standard deviation predicts threshold, independent of color. Second, this threshold is 3-5% of the mean background luminance across a wide range of background luminance levels (ranging from 8 cd/m2 to 5594 cd/m2). The relatively constant noise threshold across a wide range of conditions has specific implications for imaging sensor design and the image processing pipeline. An ideal image capture device, limited only by photon noise, must capture at least 1000 photons/pixel (1/√(10³) ≈ 3%) to render photon noise invisible. The ideal capture device should also be able to achieve this SNR or higher across the whole dynamic range.
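The photon-budget arithmetic behind the thousand-photon limit, written out explicitly:

```latex
% With mean photon count N per pixel, shot noise has standard deviation
% sqrt(N), so the relative noise is 1/sqrt(N). Requiring it to stay below
% the ~3% visibility threshold measured in the experiments gives:
\[
\frac{\sigma}{\mu} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}} \le 0.03
\quad\Longrightarrow\quad
N \ge \frac{1}{0.03^{2}} \approx 1.1\times 10^{3}\ \text{photons/pixel}.
\]
```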
Color-reproduction-driven CMOS image sensor design
Qun Sun, Hui Tian, Jim Li
Reducing crosstalk in small pixels has become one of the major efforts in CMOS image sensor design. In this paper, instead of focusing on lowering crosstalk, we explore the problem at the color imaging system level. First we study the components that affect the spectral response of a CMOS image sensor, including the micro-lens, color filter, sensor quantum efficiency (QE), and spectral response shifts caused by crosstalk inside the sensor. This is performed using a commercial tool. Based on the results, a super-linear crosstalk function is constructed. A novel model for color reproduction of an image sensor system under crosstalk is then proposed. A hypothetical-spectral-sensitivity method, using a smooth cubic-spline model with controllable peak positions and widths, is applied to simulate the spectral sensitivity of the color filters. The peaks and widths are optimized based on the μ-factor of the spectral responses of the image sensor system under different crosstalk levels. It is found that, with crosstalk, the imaging system can still provide color reproduction as good as, if not better than, the situation without crosstalk. The spectral sensitivity of the color filters has to be slightly modified to compensate for the effect of crosstalk: the overlaps of the spectral responses of the color filters should be slightly extended, which means that in certain circumstances a higher overlap of the spectral sensitivities of the color filters may be desirable.
Keywords: CMOS image sensor, QE, crosstalk, color reproduction, color filter, color filter array, μ-factor, spectral sensitivity
Color Processing
Color processing in camera phones: How good does it need to be?
As the fastest-growing consumer electronics device in history, the camera phone has evolved from a toy into a real camera that competes with the compact digital camera in image quality. Due to severe constraints on cost and size, one key question remains unanswered for camera phones: how good does the image quality need to be so that resources can be allocated most efficiently? In this paper, we tried to find the color processing tolerance through a study of 24 digital cameras from six manufacturers under five different light sources. We measured both the inter-brand (across manufacturers) and intra-brand (within manufacturers) mean and standard deviation for white balance and color reproduction. The white balance results showed that most cameras did not follow the complete white balance model. The difference between the captured white patch and the display white point increased as the correlated color temperature (CCT) of the illuminant moved further away from 6500K. The standard deviation of the red/green and blue/green ratios for the white patch also increased as the illuminant moved further away from 6500K. The color reproduction results revealed a similar trend for the inter-brand and intra-brand chromatic difference of the color patches. The average inter-brand chromatic difference increased from 3.87 ΔE units for the D65 light (6500K) to 10.13 ΔE units for the Horizon light (2300K).
Variational color transformation method for direct color imaging
As an imaging scheme with a single solid-state sensor, the direct color-imaging approach is considered promising. The sensor has more than two photo-sensing layers stacked along its depth direction. Although each pixel yields multiple color signals, their spectral sensitivities overlap with each other. The overlapped color signals must be transformed into the color signals specified by an output device. We present a color transformation method for the direct color-imaging scheme. Our method recovers multi-spectral reflectance and then uses it to transform colors. The problem is formulated as an inverse problem in which multi-spectral reflectance with a large number of color channels is recovered from observed color signals with a smaller number of channels, and it is solved by a regularization technique that minimizes a functional composed of a color-fidelity term and a spectral-regularity term. The color-fidelity term quantifies errors in the linear transformation from multi-spectral reflectance to observed color signals, whereas the spectral-regularity term quantifies the property that spectral reflectance at a color channel is similar to that at its neighboring channels. We simulate the direct color-imaging scheme and our method. The results show that, in the case of more than five photo-sensing layers, our method restores multi-spectral reflectance satisfactorily.
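The fidelity-plus-regularity minimization described here has the familiar closed form of Tikhonov regularization. A minimal sketch under assumed dimensions (31 spectral channels recovered from 3 observed signals; the sensitivity matrix is a random stand-in, not the paper's sensor model):

```python
import numpy as np

n_spec, n_obs = 31, 3
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((n_obs, n_spec)))   # stand-in layer sensitivities
r_true = np.clip(np.sin(np.linspace(0, np.pi, n_spec)), 0, 1)
c = A @ r_true                                     # observed color signals

# Spectral-regularity term: first differences penalize channel-to-channel jumps.
D = np.diff(np.eye(n_spec), axis=0)
lam = 0.1                                          # regularization weight (tunable)

# Minimize ||A r - c||^2 + lam ||D r||^2  ->  (A'A + lam D'D) r = A'c
r_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ c)
```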
Cross-talk correction methodology for color CMOS imagers
A pixel signal cross-talk correction method that utilizes knowledge of a color image sensor's performance characteristics is presented. The objective is to create a simple, non-iterative algorithm that can be implemented in the on-chip digital logic of an imaging sensor. Inverse cross-talk filters matched to the Bayer color filter array pattern are determined for this blurring, multi-channel, cross-channel problem. Simple noise and cross-talk models are developed and used to solve for the corrective deconvolution filters. The noise statistics used have both signal-independent and signal-dependent components, and include the noise associated with cross-talk. The methodology is independent of image statistics. The inverse filters are found by solving a set of simultaneous linear equations in the discrete Fourier frequency domain. A direct deterministic regularization method with constrained least squares is then used to solve the ill-posed problem. The local pixel blurred signal-to-noise ratio is used as the regularization parameter. This yields an inverse blur filter weighted by a local scalar noise filter. The resulting method provides a locally adaptive trade-off between cross-talk correction and noise smoothing. Algorithm performance is compared with the standard 3x3 matrix color correction method for image mean square error, color error, flat SNR, and modulation transfer function.
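For a single channel, constrained-least-squares inversion in the DFT domain reduces to a Wiener-like filter. The sketch below is a hedged illustration of that idea only, with a hypothetical cross-talk kernel and a scalar stand-in for the local blurred SNR the abstract uses as its regularization parameter:

```python
import numpy as np

h = np.array([[0.00, 0.05, 0.00],
              [0.05, 0.80, 0.05],
              [0.00, 0.05, 0.00]])       # assumed cross-talk point-spread kernel

def cls_inverse(blurred, h, snr):
    """Regularized deconvolution: conj(H) / (|H|^2 + 1/snr)."""
    H = np.fft.fft2(h, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

observed = np.random.rand(64, 64)        # stand-in single-channel CFA plane
corrected = cls_inverse(observed, h, snr=50.0)
```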
Automatic image classification by color analysis
An automatic natural scenes classifier and enhancer is presented. It works mainly by combining chromatic and positional criterions in order to classify and enhance portraits and landscapes natural scenes images. Various image processing applications can easily take advantage from the proposed solution, e.g. automatically drive camera settings for the optimization of exposure, focus, or shutter speed parameters, or post processing applications for color rendition optimization. A large database of high quality images has been used to design and tune the algorithm, according to wide accepted assumptions that few chromatic classes on natural images have the most perceptive impact on the human visual system. These are essentially skin, vegetation and sky?sea. The adaptive color rendition technique, which has been derived from the results produced by the image classifier, is based on a simple yet effective principle: it shifts the chromaticity of the regions of interest towards the statistically expected ones. Introduction of disturbing color artifacts is avoided by a proper modulation and by preservation of original image luminance values. Quantitative results obtained over an extended data set not belonging to the training database, show the effectiveness of the solution proposed both for the natural image classification and the color enhancement techniques.
Some considerations in the development of color rendering and gamut mapping algorithms
Newly developed standard terminology enables improved communication about color processing objectives and research goals. By clarifying specific color processing tasks, one observes that there are de facto standard practices for many processing steps that can be used as a baseline recommendation for implementers and future development work. More explicit descriptions of development goals can also serve to focus research on the desired objectives. This increased clarity makes work both more effective at meeting design objectives and more apparently relevant to commercial applications. Differences in objectives and requirements for the development of color rendering and gamut mapping algorithms are discussed and contrasted. In some cases, these differences can explain the reasons for different approaches, enabling broader consensus and understanding. Differences between color appearance and color reproduction models are also discussed, along with the impact of these differences on their use in imaging applications. The above concepts are related to important color reproduction considerations, such as preference and media capabilities. If the more explicit terminology is widely adopted, it could accelerate the advance of digital color understanding among both product manufacturers and users, and enable significantly more effective research, development, and use.
Sensor Design and Applications
Distributed and fractal pixel sensors
A CMOS active pixel sensor array with anti-aliasing using distributed photodiodes that exhibit a 2-D sinc-like response has been fabricated and tested. Unlike traditional rectilinear photodiode shapes, the sinc-function diode configurations are interleaved with neighboring pixels. This passive form of focal-plane signal processing requires no additional circuitry or power, and depends only on the shape of each pixel's sensor. Two such distributed sensor arrays were implemented on a single chip, one with pixels including first-order orthogonal side lobes, and one with first- and second-order orthogonal side lobes as well as first-order diagonal side lobes. For comparison, a conventional array with rectangular photodiodes was implemented on the same chip, with the same total sensor area as the second distributed version. Simulations of the filtering effectiveness of various pixel shapes are presented, as well as measurements of pixel performance including leakage and noise. Notably, distributed pixel sensors have a relatively larger periphery and hence higher capacitance, increased well capacity, and decreased charge-to-voltage gain relative to equal-area square sensors. The larger periphery raises a concern about increased leakage and noise; however, measurements showed less than a 2% increase in leakage current and similarly small differences in noise.
An adaptive framework for image and video sensing
Current digital imaging devices often enable the user to capture still frames at a high spatial resolution, or a short video clip at a lower spatial resolution. With bandwidth limitations inherent to any sensor, there is clearly a tradeoff between spatial and temporal sampling rates, which can be studied, and which present-day sensors do not exploit. The fixed sampling rate that is normally used does not capture the scene according to its temporal and spatial content and artifacts such as aliasing and motion blur appear. Moreover, the available bandwidth on the camera transmission or memory is not optimally utilized. In this paper we outline a framework for an adaptive sensor where the spatial and temporal sampling rates are adapted to the scene. The sensor is adjusted to capture the scene with respect to its content. In the adaptation process, the spatial and temporal content of the video sequence are measured to evaluate the required sampling rate. We propose a robust, computationally inexpensive, content measure that works in the spatio-temporal domain as opposed to the traditional frequency domain methods. We show that the measure is accurate and robust in the presence of noise and aliasing. The varying sampling rate stream captures the scene more efficiently and with fewer artifacts such that in a post-processing step an enhanced resolution sequence can be effectively composed or an overall lower bandwidth for the capture of the scene can be realized, with small distortion.
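One simple spatio-temporal (rather than frequency-domain) content measure, offered only as a hedged illustration of the idea and not as the authors' measure: mean absolute frame difference for temporal activity and mean gradient magnitude for spatial detail, feeding a hypothetical rate-selection rule.

```python
import numpy as np

def content_measures(frames):
    """frames: (T, H, W) clip; returns (temporal activity, spatial detail)."""
    temporal = np.mean(np.abs(np.diff(frames, axis=0)))
    gy, gx = np.gradient(frames[-1].astype(float))
    spatial = np.mean(np.hypot(gx, gy))
    return temporal, spatial

frames = np.random.rand(8, 120, 160)      # stand-in low-resolution preview clip
t_act, s_act = content_measures(frames)
frame_rate = 60 if t_act > 0.1 else 30    # hypothetical rate-selection threshold
```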
Optimum pixel design for dispersive filtering
Bruce M. Radl
Dispersive imaging has been shown to be an effective technique for optically controlling aliasing in mosaic-pattern color sensors. It can be adapted to all of the currently used color filter array patterns and sensor layouts. Depending on the sensor, residual uncompensated errors remain after the optical image is recorded. Results are demonstrated for several sensor/filter combinations. These are used to support the conclusion that pixels that are optimal for dispersive filtering produce smaller residual errors. Pixel geometries, color filter array patterns, and spectral sensitivities could be produced to minimize these errors. The design of these sensors is discussed in this paper. Optimal solutions are proposed, analyzed, and compared to imaging systems currently in use.
Demosaicking/In-Camera Processing
Sharpening-demosaicking method with a total-variation-based superresolution technique
A doubly refractive crystal device is used as an optical low-pass filter. The filter also attenuates frequency components lower than the Nyquist frequency, so images are blurred. We previously presented a demosaicking method that simultaneously removes the blur caused by the optical low-pass filter. For this sharpening-demosaicking approach, the Bayer RGB color filter array is not necessarily appropriate, so we studied another color filter array, the WRB filter array, where the W filter passes all visible light. Our prototypal sharpening-demosaicking method employed an iterative algorithm and restored only the spatial frequency components of color signals lower than the Nyquist frequency corresponding to the mosaicking pattern of the W filters. However, the same recovery problem can be solved by a non-iterative method in the Discrete Fourier Transform domain. Moreover, our prototypal method often produced ringing artifacts near sharp color edges. To suppress those artifacts, we introduce TV-based super-resolution into the sharpening-demosaicking approach. This super-resolution approach restores spatial frequency components higher than the Nyquist frequency from the observed blurry components, so that it can enlarge and sharpen images without producing ringing artifacts while preserving 1-D image structures in which intensity values are almost constant along the edge direction.
Mosaic image compression
Most consumer-level digital cameras use a color filter array to capture color mosaic data, followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm that compresses the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG and JPEG2000 are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into three color (RGB) components. This is followed by an interpolation or down-sampling operation, depending on the particular variation of the algorithm, that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
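A sketch of the first and last steps of this pipeline under an assumed RGGB layout; the JPEG coding stage in the middle is replaced by a crude quantization stand-in:

```python
import numpy as np

def split_rggb(mosaic):
    """Split an RGGB mosaic into same-size R, G, B planes (the two G
    samples per cell are averaged)."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2].astype(int) + mosaic[1::2, 0::2]) // 2
    b = mosaic[1::2, 1::2]
    return r, g, b

mosaic = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
r, g, b = split_rggb(mosaic)

coded_r = (r // 8) * 8                          # stand-in for the lossy JPEG stage
residual = r.astype(int) - coded_r.astype(int)  # entropy-code this losslessly
```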
Near-lossless compression algorithm for Bayer pattern color filter arrays
In this contribution, we propose a near-lossless compression algorithm for Color Filter Array (CFA) images. It allows a higher compression ratio than any strictly lossless algorithm at the price of a small and controllable error. In our approach, a structural transformation is applied first in order to pack the pixels of the same color into a structure appropriate for the subsequent compression algorithm. The transformed data is compressed with a modified version of the JPEG-LS algorithm. A nonlinear and adaptive error quantization function is embedded in the JPEG-LS algorithm after the fixed and context-adaptive predictors. It is step-like and adapts to the base signal level in such a manner that higher error values are allowed for lighter parts with no loss of visual quality. These higher error values are then suppressed by the gamma correction applied during the image reconstruction stage. The algorithm can be adjusted for arbitrary pixel resolution, gamma value, and tolerated error range. The compression performance of the proposed algorithm has been tested on real CFA raw data. The results are presented in terms of compression ratio versus reconstruction error, and the visual quality of the reconstructed images is demonstrated as well.
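A sketch of a step-like, level-adaptive error quantizer in the spirit of this abstract; the level thresholds and tolerated errors below are hypothetical, not the paper's values:

```python
import numpy as np

def allowed_error(base_level):
    """Map a 10-bit base signal level to a tolerated reconstruction error:
    brighter levels tolerate larger errors, which the gamma correction at
    reconstruction compresses visually."""
    for upper, err in [(128, 1), (384, 2), (768, 4), (1024, 8)]:
        if base_level < upper:
            return err
    return 8

def near_lossless_quantize(residual, base_level):
    delta = allowed_error(base_level)
    q = 2 * delta + 1                               # JPEG-LS-style uniform bin width
    return np.round(residual / q).astype(int) * q   # error magnitude <= delta

print(near_lossless_quantize(np.array([3, -5, 9]), base_level=500))
```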
Image Processing/Compression I
Image enhancement system for mobile displays
In this paper, we present a system for enhancing digital photography on mobile displays. The system uses adaptive filtering and display-specific methods to maximize the subjective quality of images. Because mobile platforms have a limited amount of memory and processing power, we describe computationally efficient scaling and enhancement algorithms that are especially suitable for mobile devices and displays. We also show how a proper arrangement of these algorithms forms an image processing chain that is optimized for mobile use. The developed image enhancement system has been implemented on the Nokia Series60 platform and tested on imaging phones. Tests and results show that this solution achieves a significant improvement in quality within the processing power and memory limitations that mobile platforms impose.
Dynamic focus window selection strategy for digital cameras
Yibin Tian, Huajun Feng, Zhihai Xu, et al.
The purpose of selecting a focus window is not only to reduce computation but also to improve the image sharpness of the object(s) of interest. A simple geometrical model is built using the thin-lens Gauss equation. The necessity of using different focus window selection strategies is demonstrated with the model. Then a dynamic focus window selection method is described. It is reasonable to assume that a photographer's gaze direction points to the object(s) of interest when taking a picture. The gaze direction is trackable by various pupil-tracking methods. A simple modification of the viewfinder allows images of the photographer's eye to be taken with the built-in image sensor. These images are then processed to determine the photographer's gaze direction. One small area in the target image is matched to the gaze direction and selected as the focus window. Within the focus window, an uneven sampling method can be used to further reduce the computational load; the uneven sampling works the same way as the human retina. This dynamic focus window selection method can greatly increase the probability of getting sharply focused object(s) of interest, while requiring less than 1% of the target image for the focus measure.
Automatic image enhancement by picture fusion
Alfio Castorina, Alessandro Capra, Salvatore Curti, et al.
This paper describes an automatic technique able to fuse different images of the same scene, acquired with different camera settings, in order to obtain a single enhanced representation of the scene of interest. This makes it possible to extend the capabilities (depth of field, dynamic range) of medium- and low-cost digital cameras. When Multi-Scale Decomposition (MSD) is used on differently focused images, the magnification and blurring effects of lens focusing systems often compromise the final image with unpleasant artifacts. In our approach, new techniques able to reduce these artifacts are introduced. Although the algorithm has been designed primarily to extend depth of field, it can also be used on multi-exposed input images, thus extending dynamic range. The algorithm can be applied to full-color and to Color Filter Array (CFA) images.
Image Processing/Compression II
The role of camera-bundled image management software in the consumer digital imaging value chain
Milton Mueller, Anuradha Mundkur, Ashok Balasubramanian, et al.
This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging, the activity that we once called "photography", is now recognized as being in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time-use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.
Haze removal for image enhancement
We present a method to improve the quality of hazy digital images for digital photography applications. Most haze-removal methods reported in the literature require either multiple images of the scene or auxiliary information, and are thus not suitable for digital photography applications. Our method is based on variable contrast enhancement. To determine where the enhancement should be applied, we find the location of the horizon: we use a ratio of luminance values to form an edge map, a connected-components algorithm to find the sky region, and a second connected-components pass to find any object extending above the horizon. The ground cover is then contrast-enhanced with a modified piecewise-linear function, with more enhancement at the horizon than at the bottom of the image. We demonstrate the capability to improve image quality on several images.
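A sketch of the final contrast-enhancement step only (horizon detection and the connected-components segmentation are omitted), with hypothetical gain values; the enhancement strength ramps down from the horizon row to the bottom of the frame, as the abstract describes:

```python
import numpy as np

def dehaze_ground(lum, horizon_row, max_gain=1.8, min_gain=1.1):
    """Stretch contrast row by row below the horizon; a simplified linear
    stand-in for the paper's modified piecewise-linear function."""
    out = lum.astype(float).copy()
    h = lum.shape[0]
    for row in range(horizon_row, h):
        t = (row - horizon_row) / max(1, h - 1 - horizon_row)
        gain = max_gain * (1 - t) + min_gain * t   # strongest at the horizon
        mean = out[row].mean()
        out[row] = mean + gain * (out[row] - mean)
    return np.clip(out, 0, 255).astype(np.uint8)

lum = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
enhanced = dehaze_ground(lum, horizon_row=100)
```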