A multi-aperture computational ultra-high-speed camera with ultra-fast charge modulators
Image sensors that use time-resolving CMOS technology have recently drawn significant attention because they offer the pixel-level photogenerated charge modulation essential for biomedical fluorescence imaging1 and time-of-flight imaging (which measures the time of flight of a light signal between camera and subject for each point of an image).2 At our laboratory, we have been developing CMOS image sensor technologies with ultra-low-noise pixels (noise level of 0.27 electrons),3 ultra-fast charge modulator pixels (electron transfer response of 180ps),4 and low-noise, high-dynamic-range column-parallel analog-to-digital conversion (noise level of around one electron and dynamic range of more than 80dB).5 We have used lateral electric field charge modulator (LEFM) pixels6 to capture a moment in a time window a few nanoseconds wide with sub-nanosecond rise and fall times. Signals for multiple time windows can be accumulated in the charge domain, thus achieving high photosensitivity. Furthermore, a multi-tap implementation, in which multiple charge storage memories are prepared for one photodiode, allows photogenerated charges from different time windows with adjacent opening timings to be transferred to separate taps without loss.
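The multi-tap charge-domain accumulation described above can be sketched numerically. The following is a minimal toy model, not the sensor's actual circuit: the tap schedule, slot count, and photon rate are all hypothetical, chosen only to show that routing each time slot's charge to its own tap over many cycles loses no signal.

```python
import numpy as np

# Hypothetical sketch of a 2-tap charge-modulator pixel: photogenerated
# charge arriving in each time slot is routed to whichever storage tap
# is open, so adjacent time windows are captured without loss, and
# repeating the schedule over many cycles accumulates signal in the
# charge domain (high photosensitivity without intermediate readout).

rng = np.random.default_rng(0)

n_cycles = 1000                                    # accumulation cycles
tap_of_slot = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # slot -> open tap

taps = np.zeros(2)        # charge accumulated per tap
total_charge = 0          # all charge generated, for a conservation check
for _ in range(n_cycles):
    photons = rng.poisson(0.3, size=tap_of_slot.size)  # charge per slot
    total_charge += photons.sum()
    for slot, q in enumerate(photons):
        taps[tap_of_slot[slot]] += q   # lossless routing to the open tap

print(taps, total_charge)  # taps sum exactly to the generated charge
```

The conservation property (the two taps together hold every generated electron) is the point of the multi-tap scheme, as opposed to a single gated shutter that discards charge outside its window.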
Based on LEFM technology, we have developed the fastest silicon-based ultra-high-speed camera, which incorporates multi-aperture optics7 and compressive sensing8 (see Figure 1). For ultra-high-speed imaging, we use a burst readout scheme, in which the number of sequential images is predefined (for example, 100 frames), unlike the continuous readout scheme widely used in camcorders. The images are stored on the sensor chip during capture and read out later (since image readout takes far longer than image capture). Although LEFM enables an ultra-fast focal-plane shutter, it cannot capture more frames than taps for a single event: the pixel in Figure 1 has two taps, so it can capture only two sequential frames. Multi-aperture optics solves this issue. If we prepare an individual pixel array for each lens, and every pixel array blinks quickly in turn, we acquire as many sequential images as lenses. Furthermore, compressive sensing improves sampling efficiency. We observe an object with multiple temporally coded shutters, so that the number of obtained images is smaller than the number of observed frames, and all the frames are then reconstructed based on their sparsity. In other words, we sample more frames than lenses with high efficiency.
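The temporally coded measurement model can be illustrated with a toy forward model. This is a sketch under assumed dimensions (the 32-frame/15-measurement pair mirrors the experiment described later, but the pixel count, shutter codes, and test scene are invented), and it uses a plain minimum-norm pseudo-inverse in place of the sparsity-exploiting solver the real system uses:

```python
import numpy as np

# Toy forward model of temporally coded-shutter compression:
# 32 true frames are summed into 15 coded images, one per random
# binary shutter pattern, so fewer images than frames are stored.

rng = np.random.default_rng(1)
n_frames, n_meas, n_pix = 32, 15, 64            # frames, coded images, pixels

S = rng.integers(0, 2, size=(n_meas, n_frames)).astype(float)  # shutter codes
X = np.zeros((n_frames, n_pix))                 # true frames (flattened pixels)
X[10:14] += rng.random((4, n_pix))              # a brief event, sparse in time

Y = S @ X                                       # 15 compressed images

# Minimum-norm recovery via the pseudo-inverse; the actual camera
# reconstructs with a total-variation-regularized solver (TVAL3)
# to exploit sparsity, which this sketch does not attempt.
X_hat = np.linalg.pinv(S) @ Y

print(Y.shape, X_hat.shape)  # (15, 64) (32, 64)
```

Because the stored data `Y` has fewer rows than the frame sequence `X`, the recovery is underdetermined; sparsity of the scene in time is what makes the reconstruction well-posed in practice.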
The most important aspect of this optoelectrocomputational architecture is how the frame rate is determined. Figure 2 compares our approach with conventional methods. The frame rate of conventional ultra-high-speed image sensors is defined by a pause related to the signal transfer from the pixel array to the frame buffer, namely, the multi-stage charge transfer in a charge-coupled device9 or the voltage signal transfer from the pixel to the column frame memory in a CMOS sensor.10 In our scheme, however, the frame rate is determined only by the charge modulation speed in the pixels. Unlike conventional ultra-high-speed image sensors, ours has no dedicated frame memory. Instead, the pixels themselves work as a frame memory, and there is no pause caused by the signal transfer delay.
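The difference in what limits the frame interval can be put as simple arithmetic. In the sketch below, the transfer-pause figure is entirely hypothetical (actual transfer times vary by sensor design); only the 5ns modulation time corresponds to the 200Mframes/s operation reported below.

```python
# Illustrative comparison of frame-interval budgets. A conventional
# burst sensor pays a per-frame signal-transfer pause on top of the
# exposure; the pixels-as-memory scheme pays only the in-pixel
# charge-modulation time.

modulation_time_ns = 5.0     # 5 ns per frame <=> 200 Mframes/s
transfer_pause_ns = 100.0    # hypothetical pixel-to-buffer transfer

conventional_interval_ns = modulation_time_ns + transfer_pause_ns
proposed_interval_ns = modulation_time_ns

print(1e9 / conventional_interval_ns / 1e6, "Mframes/s (with transfer pause)")
print(1e9 / proposed_interval_ns / 1e6, "Mframes/s (pixels as frame memory)")
```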
Figure 3 shows our prototype sensor, camera, and experimental results.11 We fabricated a prototype CMOS image sensor with 5×3 apertures. The pixel count per aperture is 64×108, and the pixel size is 11.2×5.6μm. We built a prototype camera equipped with a lens array with a focal length of 3.0mm and a pitch of 0.72×1.19mm. As a preliminary demonstration of filming a single-event ultra-high-speed phenomenon, we observed air breakdown plasma at 200M frames/s. We obtained 15 images with 15 random shutter patterns, each composed of 32 bits. In the experiment, we focused a short-pulse laser beam (from a neodymium-doped yttrium aluminum garnet, or Nd:YAG, second-harmonic-generation laser, λ=532nm, pulse width of 8ns) in the air. Because the images are multiplexed only in time, the compressed images in Figure 3 are blurred in a way that reflects the temporal shutter patterns. We reconstructed 32 frames from the 15 images with the TVAL3 algorithm,12 corresponding to a compression ratio of 47% (15/32).
In summary, we have produced an optoelectrocomputational ultra-high-speed camera, and now aim to improve the frame rate, number of sequential images, pixel count, and photosensitivity, and to reduce reconstruction artifacts. Although the multi-aperture architecture is advantageous in terms of scalability, it requires a special lens array. For compatibility with conventional single-aperture optics, we are now developing a new image sensor with a sub-pixel structure, and we are also exploring applications of our sensor to long-distance time-of-flight range imaging13 and time-resolving 3D microscopy.
We are grateful for the support of S. Okihara, M. -w. Seo, T. Takasawa, K. Yasutomi, and M. Fukuda. This work is partially supported by Grants-in-Aid for Scientific Research (B) 15H03989 and (S) 25220905, and JSPS KAKENHI grant 15J10262. This work is also supported by the VLSI Design and Education Center (VDEC), University of Tokyo, with the collaboration of Cadence Corporation, Synopsys Corporation, and Mentor Graphics Corporation.
Keiichiro Kagawa received a PhD in engineering from Osaka University in 2001, and is currently an associate professor. His research interests include high-performance CMOS image sensors, imaging systems, and biomedical applications.
Futa Mochizuki received an ME from Shizuoka University in 2015, and is currently pursuing a PhD. He is a student member of the Institute of Image Information and Television Engineers (ITE) and IEEE. His current interest is in CMOS image sensors.
Shoji Kawahito received a PhD from Tohoku University, Sendai, in 1988. He is currently a professor and is chief technology officer of Brookman Technology Inc. He is a Fellow of the IEEE and ITE. His research interests are in analog circuits and pixel architecture designs for CMOS imagers.