On-chip time-of-flight estimation in standard CMOS technology
In the last decade, CMOS image sensors (CISs) have reached a considerable level of maturity, and their performance is now comparable with that of CCD sensors in terms of image quality. CISs have almost completely replaced CCDs in commercial photo cameras and mobile phones. The main advantage of using CMOS technology is the possibility of integrating additional intelligence at the sensor level: complex image processing algorithms can be run on-chip at high frame rates. A possible future development for CIS technology is to capture 3D information from a scene. This, however, requires active illumination schemes.
The most popular approach is to use pulse-modulated illumination with jitter in the picosecond range. A properly clocked set of transfer gates is correlated with the received light pulses to derive the time of flight (ToF) and thus the 3D information. These transfer gates must be carefully designed to provide practical spatial resolution, which is generally difficult to achieve in standard CMOS technology. An alternative is to use single-photon avalanche diodes (SPADs),1 but these require a low defect density, which is rarely achieved with standard CMOS processes.2 Several techniques can be implemented on-chip, however, to mitigate these effects.
Our approach to this problem involves the use of time-gated SPADs, which permit a direct ToF measurement even with a high dark count rate (DCR) and a low photon detection efficiency (PDE).3 We have designed an architecture that can perform ToF estimation in standard CMOS technology. Our latest imager, which incorporates this architecture, therefore pushes current technological limits. We fabricated our chip in a 0.18μm-1P6M-1.8V (i.e., one polysilicon and six metal levels) process, and our architecture is based on in-pixel time-to-digital converters (TDCs). Our experimental results indicate that this device is robust and that no pixel-level calibration is required.
A microphotograph of our latest chip is shown in Figure 1. A detailed description of this sensor, together with a full characterization of the TDC array, has previously been reported.4 We used our own design of a picosecond-incremental-resolution time-interval generator, implemented on a field-programmable gate array (FPGA), to make measurements of the chip.5 The central part of the chip is an array of 64×64 SPAD cells, each of which incorporates the photodiode itself, an active quenching/recharge circuit, start/stop control logic, a TDC, a memory block, and the output buffers. We used the experimental setup shown in Figure 2 to evaluate the performance of the 3D imager. Characterizing both the individual SPAD detectors and the uniformity of the array was equally important. We uniformly diffused the light spot on the surface of the imager, and each light pulse was triggered by a synchronization signal. In our system, whenever a photon is detected by a SPAD, the in-pixel TDC is turned on. It is subsequently turned off by a synchronization pulse. We calculate the actual ToF by subtracting the measured time interval from the laser period.
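This reverse start-stop scheme (TDC started by the photon, stopped by the next synchronization pulse) can be sketched in a few lines of Python. The numerical values below are illustrative assumptions, not the sensor's actual timing:

```python
# Sketch of the reverse start-stop ToF calculation: the TDC starts on
# photon detection and stops at the next synchronization pulse, so the
# measured interval must be subtracted from the laser period.

C = 299_792_458.0  # speed of light, m/s

def tof_from_tdc(measured_interval_s: float, laser_period_s: float) -> float:
    """Recover the time of flight from a reverse start-stop measurement."""
    return laser_period_s - measured_interval_s

def distance_from_tof(tof_s: float) -> float:
    """Convert a round-trip time of flight to a one-way distance."""
    return C * tof_s / 2.0

# Illustrative numbers: a 2.5 MHz laser gives a 400 ns period; a
# hypothetical TDC reading of 395 ns then corresponds to a 5 ns ToF.
laser_period = 1.0 / 2.5e6   # 400 ns
measured = 395e-9            # assumed TDC reading, not a real measurement
tof = tof_from_tdc(measured, laser_period)
print(f"ToF = {tof * 1e9:.2f} ns, distance = {distance_from_tof(tof):.3f} m")
```

The subtraction is what makes the scheme attractive for an in-pixel TDC: the converter only runs between a (rare) photon detection and the next synchronization edge, rather than for the full laser period.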
One of the key features of our sensor is its ability to implement time gating of the SPAD operation. This feature can be used to reject high levels of uncorrelated noise, e.g., dark counts and background light. The active quenching/recharge circuit we use is similar to one that we have reported previously,6 but it incorporates two additional transistors to implement the time gating. Another important part of the smart pixel is the TDC, which is controlled by a global phase-locked loop (PLL) and can be used to select a certain time resolution and to globally calibrate the imager against process, voltage, and temperature (PVT) variations.
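The noise-rejection effect of time gating can be illustrated with a small Monte Carlo sketch: signal photons cluster around the expected return time, while dark counts and background photons are uniform over the laser period, so a gate placed around the return passes mostly signal. All rates, windows, and arrival times here are assumptions for illustration only:

```python
import random

# Time gating sketch: only events inside the gate window reach the TDC.
# Signal photons arrive near a fixed return time; dark counts and
# background light are uniformly distributed over the laser period.
random.seed(0)
PERIOD_NS = 400.0          # laser period (2.5 MHz repetition rate)
GATE_NS = (90.0, 110.0)    # hypothetical gate around the expected return
SIGNAL_NS = 100.0          # assumed true photon return time

events = []
for _ in range(10_000):
    if random.random() < 0.3:                     # correlated signal photon
        events.append(random.gauss(SIGNAL_NS, 0.1))
    else:                                         # uncorrelated noise event
        events.append(random.uniform(0.0, PERIOD_NS))

gated = [t for t in events if GATE_NS[0] <= t <= GATE_NS[1]]
signal_fraction = sum(1 for t in gated if abs(t - SIGNAL_NS) < 1.0) / len(gated)
print(f"{len(gated)} of {len(events)} events pass the gate; "
      f"{signal_fraction:.0%} of gated events are signal")
```

Outside the gate the event stream is dominated by uncorrelated counts; inside it, the signal dominates, which is what allows accurate ToF histograms despite a high DCR.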
The PDE of our imager is 5% at a wavelength of 540nm, the DCR is 42kHz, and the full-width at half-maximum (FWHM) of the ToF histogram is 212ps. We made all our measurements at 1V excess voltage and at room temperature. The laser we used to characterize the sensor has a wavelength of 447nm and a 2.5MHz repetition rate. We set the equivalent irradiance to less than 10nW/mm2 to meet single-photon detection conditions. In our experiments, the time gate is about 400ns, the integration time is 20ms, and the time resolution of one TDC is 160ps. We measure a maximum deviation of 3.12 least significant bits (LSBs) across the array for a laser-to-sensor distance that corresponds to a time-resolved interval of 5.66ns, and 20% of the array has a maximum deviation of 0.2LSB (see Figure 3). We obtained all these results without any pixel-level calibration. We have also reconstructed a 3D view of the spot surface by focusing the laser spot on the array (see Figure 4).
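The reported TDC figures translate directly into depth. The conversion below is standard ToF arithmetic applied to the numbers quoted above (160ps LSB, 3.12 LSB worst-case spread, 0.2 LSB for the best 20% of pixels):

```python
# Converting TDC resolution and deviation figures into one-way depth:
# one LSB of 160 ps corresponds to c * 160 ps / 2 of distance.

C = 299_792_458.0  # speed of light, m/s
LSB_S = 160e-12    # TDC time resolution, from the text

depth_per_lsb_mm = C * LSB_S / 2.0 * 1e3   # ~24 mm per LSB
worst_case_mm = 3.12 * depth_per_lsb_mm    # worst-case array deviation
best_20pct_mm = 0.2 * depth_per_lsb_mm     # best 20% of the pixels

print(f"depth per LSB        = {depth_per_lsb_mm:.1f} mm")
print(f"worst-case deviation = {worst_case_mm:.1f} mm")
print(f"best 20% of pixels   = {best_20pct_mm:.2f} mm")
```

So the uncalibrated array-level spread amounts to roughly 75mm of depth in the worst case, and under 5mm for the best fifth of the pixels.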
We have designed and experimentally demonstrated a new SPAD-based 3D imager that can be integrated into cost-effective standard CMOS technologies (e.g., for medical imaging and 3D vision applications). We have shown that coarse 3D image reconstruction can be achieved by performing accurate ToF estimations, even with large levels of uncorrelated noise. This is possible thanks to the time-gating strategy that we have incorporated in the sensor front-end. Our architecture is robust and gives promising results in a standard process with no special high-voltage or low-noise features. We are currently developing on-chip circuitry for averaging ToF estimates. This will provide enhanced spatial accuracy and maintain an acceptable frame rate, even in strong background light conditions.
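The benefit of averaging ToF estimates can be previewed numerically: if the 212ps single-shot FWHM is dominated by random jitter, averaging N independent detections shrinks the spread by roughly a factor of sqrt(N). The sketch below assumes purely Gaussian jitter and an arbitrary true ToF, which are modeling assumptions rather than measured behavior:

```python
import random
import statistics

# Averaging sketch: a 212 ps FWHM corresponds to a Gaussian sigma of
# FWHM / 2.355 ~ 90 ps. Averaging N samples should reduce the spread
# of the estimate roughly as 1/sqrt(N).
random.seed(1)
SIGMA_PS = 212.0 / 2.355   # FWHM-to-sigma conversion for a Gaussian
TRUE_TOF_PS = 5000.0       # arbitrary illustrative true ToF

def averaged_tof(n: int) -> float:
    """Average n single-shot ToF samples with Gaussian jitter."""
    return statistics.fmean(random.gauss(TRUE_TOF_PS, SIGMA_PS)
                            for _ in range(n))

results = {}
for n in (1, 64):
    estimates = [averaged_tof(n) for _ in range(2000)]
    results[n] = statistics.stdev(estimates)
    print(f"N = {n:3d}: std of averaged estimate = {results[n]:.1f} ps")
```

With N = 64, the ~90ps single-shot spread drops to roughly 11ps, i.e., a few millimeters of depth, which is the motivation for moving the averaging on-chip rather than paying the frame-rate cost of off-chip accumulation.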
This work has been funded by the US Office of Naval Research (grant N000141410355), and through Spanish government projects TEC2012-38921-C02/MINECO (European Region Development Fund), IPT-2011-1625-4300/MINECO, IPC-20111009 CDTI, and Junta de Andalucía, Consejería de Economía, Innovación, Ciencia y Empleo TIC 2012-2338.
Spanish Council of Scientific Research/University of Seville
Ion Vornicu's current research interests include the design and testing of CMOS sensors that are based on single-photon avalanche diodes. These can be used for 2D or 3D vision, and in nuclear medicine imaging techniques such as positron emission tomography.
Ricardo Carmona-Galán's main research areas are vision chips, smart CMOS imagers for low-power vision applications (e.g., robotics), vehicle navigation, and vision-enabled wireless sensor networks. He is also interested in CMOS-compatible single-photon detection.