Imaging Technology
CMOS-compatible avalanche photodiodes
As a step towards a complete CMOS avalanche photodiode imager, various avalanche photodiodes have been integrated in a commercially available CMOS process. In this paper, design considerations are discussed and experimental results are compared for a wide variety of diodes. The largest restriction is that no process change is allowed. Even with this restriction, gains of more than 1000 at an incident wavelength of 637 nm have been demonstrated, using 83 V for one diode type and 45 V for another. Thus, the feasibility of CMOS-compatible avalanche photodiodes has been proved, allowing us to proceed towards the next step of integrating control circuits, readout circuits and avalanche photodiodes on the same chip. Further developments in this area are already in progress.
Color and Optics
185-deg. ultrawide fish eye lens
Masamura Yamaji,
Toru Nagaoka
A picture paints a thousand words, and those pictures form our universal language. This alone makes pictures a very convenient tool. However, image processing on the computer is still troublesome and requires a lot of memory, thus limiting the wide usage of pictures in communication. Typical image input tools for the computer are the scanner and the video camera, and recently digital cameras have come onto the scene rapidly. This has been made possible by advances in electronics. Combining this with optical technology, a simple and convenient image processing system is now possible: the birth of the ultra-wide conversion (fish eye) lens. To date, there are a few companies manufacturing fish eye lenses; however, these are usually bulky and not suitable for use on the small digital cameras that have come onto the market recently. The earliest small-size, high-resolution digital camera on the market was Olympus's C-820L model. With that camera model as the target, we started to develop a prototype of the world's first lightweight FOV 185° wide-angle conversion lens. This new technology is developed for the Internet age and to create a new culture in the communication field. The generally accepted view is that image processing on the computer is complicated and only specialists can do the job; laymen like you and me, and older folks, find the subject too difficult to handle. However, our fish eye lens changes all that.
Optical microsystems for imaging
We report on micro-optical systems for image projection and image capturing. Our investigations are focused on passive and active array optics, e.g. arrays of micro-objectives, flat camera lenses, Gabor superlenses and moiré magnifiers. We discuss the potential of micro-optical systems to replace conventional detection and projection systems. We will present experimental results.
APS and A/D Conversion
10-bit 20-Msample/s ADC for low-voltage low-power applications
Gerard Sou,
Guo Neng Lu,
Geoffroy Klisnick,
et al.
For the development of new low-voltage, low-power imaging microsystems, we have designed a 10-bit 20-Msample/s ADC. It uses a 3-stage sub-ranging architecture and has a rail-to-rail input dynamic range. To achieve low-voltage operation and low power consumption, specific analog blocks such as op-amps and flash ADCs were required. Complementary CMOS comparators with no static consumption were used to build a new low-power 4-bit flash ADC structure with rail-to-rail input range. A new 1.7-volt, 120-dB op-amp structure was designed. To achieve the 20 MHz sampling rate, the ADC makes use of time-interleaved switched-capacitor amplifiers, which perform dynamic frequency compensation to optimize speed and offset cancellation to meet the resolution requirement. A 20-Msample/s rate has been obtained with supply voltages down to 2.4 volts and 60 mW power consumption. This ADC has been fabricated and tested, and will be integrated on the same chip with color image sensors in a BiCMOS process.
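To make the sub-ranging idea concrete, the following toy model sketches an ideal multi-stage conversion in which each stage resolves a few bits and the amplified residue is passed to the next stage. The 4+4+2 bit partition and the behavioral code are our assumptions for illustration only, not the circuit described in the paper.

```python
def subranging_adc(vin, vref=1.0, stage_bits=(4, 4, 2)):
    """Ideal 3-stage sub-ranging conversion: quantize, subtract, amplify the residue."""
    code = 0
    residue = vin
    for bits in stage_bits:
        levels = 1 << bits
        d = min(int(residue / vref * levels), levels - 1)   # coarse flash decision
        code = (code << bits) | d                            # append stage bits
        residue = (residue - d * vref / levels) * levels     # amplified residue for next stage
    return code

if __name__ == "__main__":
    for v in (0.0, 0.25, 0.5, 0.999):
        print(v, subranging_adc(v))   # matches an ideal 10-bit quantizer
```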
Poster Session
Temperature effects on colorimetric accuracy of the BTJ color detector
Measurements of the spectral sensitivities of the Buried Triple p-n Junction (BTJ) color detector have been carried out over the -60 degrees C to 60 degrees C temperature range. The temperature behavior of the photo-currents is described. Variations in the BTJ color matching functions have been calculated, and a procedure for colorimetric characterization which considers the detector temperature is proposed. Using the proposed procedure, color differences between the detector specifications and the color coordinates in the CIE standard have been determined. The validity of this procedure is evaluated in terms of the color shift in the detector specifications caused by a temperature change.
Imaging Technology
Novel CMOS electron imaging sensor
Electron detector arrays are employed in numerous imaging applications, from low-light-level imaging to astronomy, electron microscopy, and nuclear instrumentation. The majority of these detectors are fabricated with dedicated processes, use the semiconductor as a stopping and detecting layer, and utilize CCD-type charge transfer and detection. We present a new detector, wherein electrons are stopped by an exposed metal layer and are subsequently detected either through charge collection in a CCD-type well, or by a measurement of the potential drop across a capacitor which is discharged by these electrons. Spatial localization is achieved by use of two metal planes, one for protecting the underlying gate structures, and another, with metal pixel structures, for 2D detection. The new device does not suffer from semiconductor non-uniformities, and blooming effects are minimized. It is effective for electrons with energies of 2-6 keV. The unique structure makes it possible to achieve a high fill factor and to incorporate on-chip processing. An imaging chip implementing several test structures incorporating the new detector has been fabricated using a 2 micron double-poly, double-metal process, and tested inside a JEOL 640 electron microscope.
Color and Optics
Project SVAVISCA: a space-variant color CMOS sensor
Giulio Sandini,
A. Alaerts,
Bart Dierickx,
et al.
The paper describes the results of the first phase of the ESPRIT LTR project SVAVISCA. The aim of the project was to add color capabilities to a previously developed monochromatic version of a retina-like CMOS sensor. In this sensor, the photosites are arranged in concentric rings, with a size varying linearly with the distance from the geometric center. Two different technologies were investigated: 1) the use of Ferroelectric Liquid Crystal filters in front of the lens; 2) the deposition of color microfilters on the surface of the chip itself. The main conclusion is that the solution based on microdeposited filters is preferable in terms of both color quality and frame rate. The paper describes in more detail the design procedures and the test results obtained.
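The following short sketch illustrates the kind of space-variant sampling grid described above: photosite centers placed on concentric rings whose spacing, and hence photosite size, grows with the distance from the center. The growth factor and ring/spoke counts are illustrative assumptions, not the actual SVAVISCA layout.

```python
import math

def retina_grid(n_rings=10, n_spokes=64, r0=1.0, scale=1.15):
    """Photosite centers on concentric rings; the ring radius grows geometrically,
    so photosite size/spacing is roughly proportional to the distance from the center."""
    sites = []
    for ring in range(n_rings):
        r = r0 * scale ** ring
        for s in range(n_spokes):
            theta = 2.0 * math.pi * s / n_spokes
            sites.append((r * math.cos(theta), r * math.sin(theta)))
    return sites

if __name__ == "__main__":
    pts = retina_grid()
    print(len(pts))   # 640 photosites on 10 rings of 64 spokes each
```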
APS and A/D Conversion
Experimental characterization of CMOS APS imagers designed using two different technologies
Pierre Magnan,
Cyril Cavadore,
Anne Gautrand,
et al.
This paper presents measurement results performed on CMOS APS imagers implemented in two different technologies. Both PhotoMOS (PM) and PhotoDiode (PD) structures have been designed by the CIMI-SUPAERO group, and high APS readout rate measurements have been performed by Matra Marconi Space. Every circuit also includes the pixel address decoders and the readout circuits required to perform on-chip correlated double sampling and double delta sampling. The aim of this paper is to compare the performance of these arrays, operating at 5 volts, in terms of dark current, quantum efficiency, conversion gain and dynamic range.
Computational Sensors I
CMOS image sensor with logarithmic response and self-calibrating fixed pattern noise correction
Optical CMOS sensor arrays have to cope with the problem of mismatch between individual pixels. This paper describes a CMOS camera chip with logarithmic response that has automatic analog fixed pattern noise correction on chip. It was fabricated in a 0.8 micron process and consists of 64 X 64 active pixels. The photoreceptors show a measured response of about 100 mV per decade of light intensity and a dynamic range of more than 6 decades. Each pixel includes a capacitor to store the offset correction voltage. After applying the calibration procedure, the remaining fixed pattern noise of one column has a standard deviation of 2 mV, corresponding to 2 percent of a decade. The result is a system with significantly reduced pixel-to-pixel variations in comparison to similar uncalibrated photoreceptor arrays.
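As a rough software analogue of the self-calibration scheme, the sketch below models a logarithmic pixel array with per-pixel offset mismatch, records its response to a uniform reference scene, and subtracts that stored value on readout, much as the per-pixel capacitor does on chip. The mismatch statistics and the uniform-reference calibration step are our assumptions, not details from the paper.

```python
import numpy as np

class LogSensorModel:
    """Toy model: logarithmic response (~100 mV/decade) plus per-pixel offset FPN."""
    def __init__(self, shape=(64, 64), mv_per_decade=100.0, seed=0):
        rng = np.random.default_rng(seed)
        self.mv_per_decade = mv_per_decade
        self.offset_mv = rng.normal(0.0, 20.0, size=shape)   # pixel-to-pixel mismatch

    def read(self, intensity):
        return self.mv_per_decade * np.log10(intensity) + self.offset_mv

def calibrate(sensor, reference_intensity=1.0):
    """Expose the array to a uniform reference and store the per-pixel correction."""
    return sensor.read(reference_intensity)

def corrected_read(sensor, intensity, stored):
    """Subtract the stored correction voltage from the raw readout."""
    return sensor.read(intensity) - stored

if __name__ == "__main__":
    sensor = LogSensorModel()
    stored = calibrate(sensor)
    frame = corrected_read(sensor, intensity=100.0, stored=stored)
    print(frame.std())   # residual FPN is ~0 in this idealized model
```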
APS and A/D Conversion
CMOS active pixel image sensor with CCD performance
A color CMOS image sensor has been developed which meets the performance of mainstream CCDs. The pixel combines a high fill factor with a low diode capacitance. This yields a high light sensitivity, expressed by the conversion gain of 9 μV/electron and the quantum efficiency-fill factor product of 28 percent. The temporal noise is 63 electrons, and the dynamic range is 67 dB. An offset compensation circuit in the column amplifiers limits the peak-to-peak fixed pattern noise to 0.15 percent of the saturation voltage.
Color and Optics
CMOS linear array of BDJ color detectors
A linear array of 64 BDJ cells has been designed and fabricated in a 1.2 micrometer CMOS process. It is aimed at building a new, self-calibrated micro-spectrophotometer. Each cell contains a BDJ detector, which can operate as both a light-intensity-sensitive and a wavelength-sensitive device, and makes use of MOS transistors working in weak inversion to perform logarithmic current-voltage conversion. Measurements of the fabricated chip have been carried out. A large light intensity detection dynamic range and a low fixed pattern noise have been obtained.
Poster Session
CMOS image sensor that outputs Walsh-Hadamard coefficients
Yang Ni
The Walsh-Hadamard transformation is one of the most interesting linear transformations in the image processing domain. Its particularity is that it uses a set of basis functions which contain only +1 and -1; this results in a direct and simple computation without multiplication. This simplicity can not only reduce the complexity of the conventional digital processing unit, but also make possible some special in-sensor cellular computation. This paper presents an original CMOS image sensor which can directly output the Walsh-Hadamard coefficients of the sensed image. When pixel (i,j) is addressed, it outputs the Walsh-Hadamard coefficient (i,j) instead of the gray level of pixel (i,j) as in a conventional image sensor. This image sensor has the following advantages: 1° the gray level image can be reconstructed from partially read coefficients, which permits a trade-off between sensor output bandwidth and reconstructed image resolution; for example, only a quarter of the coefficients can give a pretty good image; 2° some transformation-based algorithms, such as pattern recognition, image compression, etc., can use the sensor output directly without any hardware/software cost.
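A minimal numerical sketch of the idea, not the sensor's charge-domain implementation: every Walsh-Hadamard coefficient is a +1/-1 weighted sum of pixels (no multiplications beyond sign changes), and the image can be reconstructed from a subset of the coefficients.

```python
import numpy as np

def hadamard(n):
    """Naturally ordered Hadamard matrix of size n (n a power of two); entries are +/-1."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wh_coefficients(image):
    """2D Walsh-Hadamard transform: each coefficient is a +/-1 weighted sum of pixels."""
    H = hadamard(image.shape[0])
    return H @ image @ H

def reconstruct(coeffs, keep):
    """Rebuild the image from only the first keep x keep block of coefficients."""
    n = coeffs.shape[0]
    H = hadamard(n)
    partial = np.zeros_like(coeffs, dtype=float)
    partial[:keep, :keep] = coeffs[:keep, :keep]
    return (H @ partial @ H) / (n * n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 8)).astype(float)
    C = wh_coefficients(img)
    approx = reconstruct(C, keep=4)           # a quarter of the coefficients
    exact = reconstruct(C, keep=8)            # all coefficients
    print(np.allclose(exact, img))            # True: the full set is lossless
```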
Computational Sensors I
Analog multimode visual feature extraction retina
Yang Ni,
Jian Hong Guan
We propose a multimode retina integrating image sensing, normalization, smoothing, differentiation and visual feature extraction by incorporating: 1) a histogram equalization function on the sensor focal plane, which normalizes the image signal to the same dynamic range under different illumination conditions and results in a constant signal amplitude, facilitating the post-processing; 2) a capacitive Gaussian network and a spatial differentiator, which offer bandpass filtering of the normalized image; 3) a visual feature extraction stage, which extracts and localizes the areas enveloping objects of interest. This novel multimode retina, when interfaced with a digital processing unit, performs first-step processing and extraction of salient information regions, suited to many real-time vision tasks.
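A software analogue of the normalization stage only (the smoothing and feature extraction stages are not sketched here): histogram equalization maps images taken under different illumination levels onto the same output dynamic range. The 8-bit gray-level assumption is ours.

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Map pixel values through the normalized cumulative histogram (0..levels-1)."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (levels - 1)
    return cdf[img].astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 256, size=(64, 64))
    dim = (scene * 0.2).astype(np.uint8)                          # same scene, low illumination
    bright = np.clip(scene * 0.2 + 150, 0, 255).astype(np.uint8)  # same scene, bright offset
    # Both normalized images span the full output range despite different illumination.
    print(histogram_equalize(dim).max(), histogram_equalize(bright).max())
```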
Poster Session
Focal plane compression 128x128 image sensor based on column parallel architecture
In order to enhance the performance of image sensing, we have been investigating a novel image sensor which compresses the image signal on the sensor focal plane. By integrating sensing and compression, the number of pixels that have to be read out from the sensor can be significantly reduced, and the integration can consequently increase the pixel rate of the sensor. In this paper, we describe a new prototype sensor based on a column-parallel architecture which has 128 X 128 pixels. We have improved the processing circuits of the new prototype to achieve much lower power dissipation and higher processing speed. We have verified that the processing circuits can operate at 5000 frames/second.
Motion vector estimation on focal sensor plane by block matching
In this paper, a fast 2D motion vector estimation on the CMOS sensor focal plane is proposed. An edge detection circuit composed of two crossed differential OTAs with time-multiplexed sampling is adopted to obtain horizontal and vertical edges. A short-time digital memory is built from a transmission gate array, which keeps the edge information within a small layout area. High-speed block matching is implemented with a local-parallel and global column-parallel processing structure, which makes full use of the parallel nature of image signals. The block size can be reduced to 4 by 4 pixels with a search area of +/- 1 pixel around the reference block, at a high frame rate of 1000 frames/s. The prototype chip is designed in a 1-poly, 2-metal 0.7 micrometer CMOS process.
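The matching step itself can be illustrated in software as follows, assuming binary edge maps as input: for each 4x4 block of the previous frame, the displacement within a +/-1 pixel search area that maximizes agreement with the current frame is selected. This is a behavioral sketch under those assumptions, not the chip's parallel circuitry.

```python
import numpy as np

def match_block(prev_edges, curr_edges, top, left, block=4, search=1):
    """Return the (dy, dx) in [-search, search] that maximizes edge-map agreement."""
    ref = prev_edges[top:top + block, left:left + block]
    best_score, best_vec = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > curr_edges.shape[0] or x + block > curr_edges.shape[1]:
                continue
            cand = curr_edges[y:y + block, x:x + block]
            score = np.sum(ref == cand)          # count of agreeing binary pixels
            if score > best_score:
                best_score, best_vec = score, (dy, dx)
    return best_vec

if __name__ == "__main__":
    prev = np.zeros((8, 8), dtype=np.uint8)
    prev[2:6, 3] = 1                             # a vertical edge
    curr = np.roll(prev, 1, axis=1)              # edge moved one pixel to the right
    print(match_block(prev, curr, top=2, left=2))   # expect (0, 1)
```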
Computational Sensors I
Compact real-time 2D gradient-based analog VLSI motion sensor
In this work we present the first working focal plane analog VLSI sensor for the spatially resolved computation of the 2D motion field based on temporal and spatial derivatives. Using an adaptive CMOS photoreceptor, the temporal derivative and a function of the spatial derivative of the local light intensity are computed. By multiplying these values separately for both spatial dimensions, a vector is obtained which points in the direction of the normal optical flow and whose magnitude for a given stimulus is proportional to its velocity. The circuit consists of only 31 MOSFETs and three capacitors per pixel. We present measurement data from fully functional prototype 2D pixel arrays for natural stimuli of varying velocity, orientation, contrast and spatial frequency. High direction selectivity is demonstrated even for very low contrast input. As an application, it is shown how the pixel-parallel architecture of the sensor can favorably be used for real-time computation of the focus of expansion and the axis of rotation. Because of its compactness, robust operation and uncritical handling, the sensor might be favorably applied in industrial applications.
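A hedged software analogue of the per-pixel computation described above: the temporal derivative is multiplied with each spatial derivative to obtain a vector along the normal flow direction. The exact photoreceptor nonlinearity, sign convention and scaling on the chip are not modeled; this is only the gradient-product idea.

```python
import numpy as np

def normal_flow(frame_prev, frame_curr):
    """Per-pixel product of temporal and spatial derivatives (normal-flow direction)."""
    It = frame_curr - frame_prev            # temporal derivative
    Iy, Ix = np.gradient(frame_curr)        # spatial derivatives (rows, columns)
    u = -It * Ix                            # x component
    v = -It * Iy                            # y component
    return u, v

if __name__ == "__main__":
    x = np.arange(32, dtype=float)
    frame0 = np.tile(np.sin(2 * np.pi * x / 16), (32, 1))         # vertical grating
    frame1 = np.tile(np.sin(2 * np.pi * (x - 1) / 16), (32, 1))   # shifted by +1 pixel in x
    u, v = normal_flow(frame0, frame1)
    print(u.mean() > 0, abs(v.mean()) < 1e-6)   # motion detected along +x only
```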
Computational Sensors II
Making the most of a 15 kλ² silicon area for a digital retina PE
Fabrice Paillet,
Damien S. Mercier,
Thierry M. Bernard
Lodging a digital processing element (PE) in each pixel of a focal plane array is the challenge to be taken up to obtain programmable artificial retinas (PARs) that can be used in a large variety of applications. Using semi-static memory and communication structures together with charge-sharing-based computing circuitry, we elaborate a PE architecture whose computational power versus area ratio improves over all previously known attempts. A key feature is the ability of neighboring PEs to be gathered into clusters, allowing virtual memory through multigranularity computation. A 128 X 128 PAR, called PVLSAR 2.2, has been fabricated accordingly with 5 binary registers per PE. Each PE fits within a 15 kλ² silicon area.
Poster Session
Near-infrared camera for solar research: a photometric application
We report here the main characteristics of a near-IR camera devoted to astrophysical solar research, which has been developed by the Instituto de Astrofisica de Canarias (IAC). The system is now being used for photometric and spectroscopic applications, and it will also be used for spectropolarimetry in the near future. The first application is described below in detail. The IAC's IR camera is based on a Rockwell 256 X 256 HgCdTe NICMOS3 array, sensitive from 1 to 2.5 microns. The necessary cooling system is an LN2 cryostat, designed and built by IR Labs to our requirements. The main electronics are the standard VME-based, FPGA-programmable MCE-3 system, also developed by IR Labs. We have implemented different readout schemes to improve speed, reduce noise and avoid seeing effects, taking into account each specific application. Data are transferred via fiber optics to a control unit, which re-sends them to the main data acquisition system. Several acquisition modes to select the best images have been implemented, and real-time data processing is available. The entire camera has been characterized and calibrated, and the main radiometric parameters are given. Preliminary tests in spectroscopic observations have been made at the German towers at the Observatorio del Teide in Tenerife, Spain, and a series of photometric measurements performed at the Swedish Solar Telescope at the Observatorio del Roque de los Muchachos in La Palma, Spain. As examples, some scientific results are also presented.
Computational Sensors II
High-dynamic-range front end for automatic image processing applications
Gillian F. Marshall,
Stephen Collins
A camera system is described which can be used as a front end for automatic image processing applications and is capable of imaging very high dynamic range scenes. Inspired by the mammalian retina, we built the system entirely from commercially available components, giving maximum flexibility with minimum risk and development cost. The result is a system which includes logarithmic photodetectors together with simple, robust fixed pattern noise correction and high-pass spatial filtering.
Computational Sensors I
VLIW processor architecture adapted to FPAs
Laurent Petit,
Jean-Didier Legat
A new processor architecture intended to be integrated with a CMOS image sensor is presented. This association allows the design of an intelligent camera that can perform on-chip image processing tasks. The processor is based on a VLIW architecture with a reduced instruction bus, able to execute multiple instructions in parallel without any loss of performance. In addition, no instruction cache is required, thereby decreasing the hardware complexity.
Computational Sensors II
1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second
William E. Glenn,
John W. Marcinka
For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high-definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlaced to progressive scan because no cameras were available in that format with the 1920 X 1080 resolution that had obtained international acceptance for high-definition program production. The camera described in this paper produces an output in that format, derived from two 1920 X 1080 CCD sensors produced by Eastman Kodak.
Hardware and software platform for analog sensor processing using exposure control
Analog Sensor Processing Using Exposure Control (ASPEC) is a new concept for high-speed image processing. By using an addressable image array with integrating output amplifiers, signal processing can be performed directly on the sensor. The major gains of the ASPEC technique are that the operations are fast, that the approach can be implemented using existing hardware, and that the processing is executed in parallel on the sensor array. Furthermore, the data reduction is carried out early in the signal processing chain. In this paper we present a novel programmable camera architecture based on the CIVIS CMOS integrating addressable image sensor, which is well suited for ASPEC applications.
Design and development of the smart machine vision sensor (SMVS)
Stefan Fischer,
Nikolaus Schibli,
Fabrice Moscheni
The Smart Machine Vision Sensor (SMVS) is a self-contained, low-cost machine vision system that combines an image sensor, a processing unit and communication interfaces in a single unit. This paper describes the design of the SMVS, the concept of the operating system and the application software. The main characteristics of the SMVS are (1) the absence of an analog video signal and (2) the modular design concept. The signal processing is performed by the processing unit, and the output consists either of image analysis results or of a compressed digital image data stream. The modular design allows easy customization for applications that require different sensor technologies, resolutions and spectral sensitivities, and it is made possible by the development of a generic sensor-CPU interface. This interface can be used with either CCD or CMOS sensors with a resolution of 512 X 512 pixels and eliminates the need to redesign the processor board when sensor requirements change. The SMVS is smaller and cheaper than traditional image processing systems based on desktop personal computers. It uses a full-frame, non-interlaced sensor with square pixels to provide optimal image quality, avoiding the drawbacks of an analog, interlaced video signal with a fixed frame rate.
Motion camera based on a custom vision sensor and an FPGA architecture
Miguel Arias-Estrada
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
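The time-of-travel principle behind the FPGA computation can be illustrated with a tiny example: given two timestamped motion events from adjacent pixels, the velocity estimate is the pixel pitch divided by the time between the two events. The event fields and the pixel pitch value below are hypothetical, chosen only for the sketch.

```python
from dataclasses import dataclass

@dataclass
class MotionEvent:
    x: int        # pixel address (event-address protocol)
    y: int
    t_us: float   # temporal reference attached to the event

PIXEL_PITCH_UM = 20.0   # assumed pixel pitch for illustration

def time_of_travel_velocity(e1: MotionEvent, e2: MotionEvent) -> float:
    """Velocity in um/us between horizontally adjacent pixels (e2 follows e1)."""
    if abs(e1.x - e2.x) != 1 or e1.y != e2.y:
        raise ValueError("events must come from horizontally adjacent pixels")
    dt = e2.t_us - e1.t_us
    return PIXEL_PITCH_UM / dt if dt != 0 else float("inf")

if __name__ == "__main__":
    a = MotionEvent(x=10, y=5, t_us=1000.0)
    b = MotionEvent(x=11, y=5, t_us=1500.0)
    print(time_of_travel_velocity(a, b))   # 0.04 um/us, i.e. about 40 mm/s
```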
Neuromorphic vision sensors and preprocessors in system applications
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
Imaging Technology
Uncooled IRFPA technologies: state of the art and developments at LETI/LIR
Today, a large number of uncooled IR detector developments are in progress, owing to the availability of silicon technology that enables the realization of low-cost 2D IR arrays. Development of such a structure involves many trade-offs between the different parameters which characterize these detectors: (i) IR flux absorption, (ii) measurement of the temperature increase due to the incoming IR flux absorption, (iii) thermal insulation between the detector and the readout circuit, (iv) readout of the thermometer temperature variation. These trade-offs explain the number of different approaches which are under development worldwide. We present a rapid survey of the state of the art through these developments. While the most advanced developments are found in the US and Great Britain, it is also important to analyze the work being done by Japanese companies such as MITSUBISHI and NEC, which have been involved in this area for a few years. LETI/LIR has chosen resistive amorphous silicon as the thermometer for this uncooled microbolometer development. After a first phase dedicated to the acquisition of the most important detector parameters, in order to support the modeling and the technological development, an IRCMOS laboratory model was realized and characterized. It was shown that an NETD of 80 mK at f/1, 25 Hz and 300 K background can be obtained with high thermal insulation.
Poster Session
Missing pixel correction algorithm for image sensors
We describe a compact algorithm that can detect and correct, on the fly, isolated missing pixels in the output stream of an image sensor without significantly degrading the image quality. The algorithm is in essence a small-kernel non-linear filter. It is based on predicting the allowed range of gray values for a pixel from the gray values of that pixel's neighborhood. A few examples illustrate the effect of the algorithm on realistic images.
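A minimal sketch in the spirit of the algorithm: predict an allowed gray-value range from the four direct neighbors and, if the pixel falls outside it, replace it on the fly with the neighborhood median. The specific prediction rule (neighbor min/max plus a margin) and the tolerance value are our assumptions, not the paper's kernel.

```python
import numpy as np

def correct_missing_pixels(img, margin=16):
    """Detect and correct isolated outlier pixels using a small non-linear kernel."""
    out = img.astype(np.int32).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = np.array([img[y, x - 1], img[y, x + 1],
                              img[y - 1, x], img[y + 1, x]], dtype=np.int32)
            lo, hi = neigh.min() - margin, neigh.max() + margin   # predicted allowed range
            if not (lo <= img[y, x] <= hi):                       # pixel outside the range
                out[y, x] = int(np.median(neigh))                 # corrected on the fly
    return out.astype(img.dtype)

if __name__ == "__main__":
    img = np.full((5, 5), 120, dtype=np.uint8)
    img[2, 2] = 0                                                 # isolated missing pixel
    fixed = correct_missing_pixels(img)
    print(img[2, 2], fixed[2, 2])                                 # 0 -> 120
```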