A multi-aperture computational ultra-high-speed camera with ultra-fast charge modulators

A multi-aperture camera based on an optoelectrocomputational architecture enables filming at 200M frames/s, using a CMOS image sensor with ultra-fast charge modulators and compressive sensing.
15 February 2016
Keiichiro Kagawa, Futa Mochizuki and Shoji Kawahito

Image sensors that use time-resolving CMOS technology have recently drawn significant attention because they offer the pixel-level photogenerated charge modulation essential for biomedical fluorescence imaging1 and time-of-flight imaging (which measures the time of flight of a light signal between camera and subject for each point of an image).2 At our laboratory, we have been developing CMOS image sensor technologies with ultra-low-noise pixels (noise level of 0.27 electrons),3 ultra-fast charge modulator pixels (electron transfer response of 180ps),4 and low-noise, high-dynamic-range column-parallel analog-to-digital conversion (noise level of around 1 electron and dynamic range of more than 80dB).5 We have used lateral electric field charge modulator (LEFM) pixels6 to capture a moment in a time window a few nanoseconds wide, with sub-nanosecond rise and fall times. Signals from multiple time windows can be accumulated in the charge domain, achieving high photosensitivity. Furthermore, a multi-tap implementation, in which multiple charge storage memories are prepared for one photodiode, lets us steer photogenerated charges from time windows with adjacent opening timings into separate taps without any loss.
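The lossless multi-tap idea can be pictured with a toy sketch (the slice charges and window pattern below are hypothetical numbers, not the sensor's actual values): charge generated in each short time slice is steered to whichever storage tap's window is open at that moment, so no charge is discarded.

```python
import numpy as np

# Hypothetical 2-tap charge sorting over eight sub-nanosecond time slices.
photo_charge = np.array([3, 5, 2, 7, 4, 6, 1, 8])  # electrons generated per slice
window = np.array([0, 0, 1, 1, 0, 0, 1, 1])        # which tap is open in each slice

# Every slice's charge ends up in exactly one tap.
tap0 = photo_charge[window == 0].sum()
tap1 = photo_charge[window == 1].sum()

# Lossless transfer: the taps together hold all generated charge.
assert tap0 + tap1 == photo_charge.sum()
```

The window array plays the role of the adjacent opening timings in the text: complementary windows partition time, so the two taps integrate disjoint intervals of the same event.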

Based on LEFM technology, we have developed the fastest silicon-based ultra-high-speed camera, which incorporates multi-aperture optics7 and compressive sensing8 (see Figure 1). For ultra-high-speed imaging, we use a burst readout scheme, in which the number of sequential images is predefined (for example, 100 frames), unlike the continuous readout scheme widely used in camcorders. The images are stored on the sensor chip during capture and read out later (since image readout takes far longer than image capture). Although LEFM enables an ultra-fast focal-plane shutter, a pixel cannot capture more frames of a single event than it has taps. For example, the pixel in Figure 1 has two taps, so it can capture only two sequential frames. Multi-aperture optics solves this issue: if we prepare an individual pixel array for each lens, and every pixel array blinks quickly in turn, we acquire as many sequential images as there are lenses. Furthermore, compressive sensing improves sampling efficiency. We observe an object with multiple temporally coded shutters, so that the number of obtained images is smaller than the number of observed frames, and all the frames are then reconstructed based on sparsity. In other words, we achieve highly efficient sampling of more frames than lenses.
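The temporally coded shutter measurement can be sketched as a simple linear model. The frame count, measurement count, and per-aperture pixel count below are taken from the prototype described in this article; the scene and shutter patterns are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames = 32          # frames observed per event
n_measurements = 15    # coded images actually read out
n_pixels = 64 * 108    # pixel count per aperture

# Random binary shutter patterns: each row is one 32-bit temporal code.
patterns = rng.integers(0, 2, size=(n_measurements, n_frames))

# A placeholder high-speed scene: one flattened image per frame.
scene = rng.random((n_frames, n_pixels))

# Each coded image is the charge-domain sum of the frames whose shutter
# bit is open -- the multiplexing happens only in time.
coded_images = patterns @ scene

print(coded_images.shape)  # (15, 6912)
```

Because there are fewer rows than frames, the system is underdetermined; sparsity of the scene is what makes reconstruction of all 32 frames from 15 coded images possible.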

Figure 1. The fundamental components of the optoelectrocomputational ultra-high-speed camera. LEFM: Lateral electric field charge modulator.

The most important aspect of this optoelectrocomputational architecture is how the frame rate is determined. Figure 2 compares our approach with conventional methods. The frame rate of conventional ultra-high-speed image sensors is limited by a pause associated with signal transfer from the pixel array to the frame buffer: the multi-stage charge transfer in charge-coupled devices,9 or the voltage signal transfer from the pixel to the column frame memory in CMOS sensors.10 In our scheme, however, the frame rate is determined only by the charge modulation speed in the pixels. Unlike conventional ultra-high-speed image sensors, ours has no dedicated frame memory; instead, the pixels themselves act as the frame memory, so there is no pause caused by signal transfer delay.
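A back-of-the-envelope check, using only figures quoted in this article, shows why pixel-level modulation alone can sustain the rate: at 200M frames/s each frame window is 5ns, while the LEFM electron transfer response is 180ps.

```python
# Values from the text: 200 Mfps capture rate and 180 ps LEFM transfer response.
frame_rate = 200e6                 # frames per second
frame_interval = 1.0 / frame_rate  # seconds per frame window (5 ns)
charge_transfer = 180e-12          # LEFM electron transfer response (180 ps)

# The charge transfer occupies only a few percent of each frame window,
# so the in-pixel modulation speed, not a readout pause, sets the frame rate.
fraction = charge_transfer / frame_interval
```

Here `fraction` comes out at 0.036, i.e. the transfer response uses under 4% of the 5ns window.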

Figure 2. Comparison of conventional filming schemes with the ultra-high-speed approach.

Figure 3 shows our prototype sensor, camera, and experimental results.11 We fabricated a prototype CMOS image sensor with 5×3 apertures. The pixel count per aperture is 64×108, and the pixel size is 11.2×5.6μm. We built a prototype camera equipped with a lens array with a focal length of 3.0mm and a pitch of 0.72×1.19mm. As a preliminary demonstration of filming a single-event ultra-high-speed phenomenon, we observed air breakdown plasma at 200M frames/s, obtaining 15 images with 15 random shutter patterns, each composed of 32 bits. In the experiment, we focused a short-pulse laser beam (the second harmonic of a neodymium-doped yttrium aluminum garnet laser, λ=532nm, pulse width of 8ns) in the air. Because the images are multiplexed only in time, the compressed images in Figure 3 appear blurry, reflecting the temporal shutter patterns. We reconstructed 32 frames from the 15 images with the TVAL3 algorithm,12 corresponding to a compression ratio of 47% (15/32).
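A toy version of the temporal reconstruction makes the numbers concrete. TVAL3 itself solves a total-variation-regularized inverse problem; as a crude stand-in, this sketch applies a minimum-norm pseudo-inverse to a made-up one-pixel time series (the signal and patterns are illustrative, not experimental data).

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_meas = 32, 15
compression_ratio = n_meas / n_frames          # 15/32

# Random 32-bit binary shutter codes, one row per coded image.
patterns = rng.integers(0, 2, size=(n_meas, n_frames)).astype(float)

# Toy single-pixel time series standing in for the plasma emission:
# a short, sparse flash a third of the way into the sequence.
signal = np.zeros(n_frames)
signal[10:14] = [0.2, 1.0, 0.8, 0.3]

# Forward model: each measurement sums the frames its shutter opens on.
measurements = patterns @ signal

# Stand-in recovery: minimum-norm estimate via the pseudo-inverse
# (TVAL3 would instead exploit sparsity/total variation of the signal).
estimate = np.linalg.pinv(patterns) @ measurements

print(round(compression_ratio, 2))  # 0.47
```

The pseudo-inverse alone gives a blurred estimate of the flash; the sparsity prior used by TVAL3 is what sharpens the underdetermined solution into the 32 reconstructed frames.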

Figure 3. Prototype image sensor, camera, and experimental results for an air breakdown plasma emission captured at 200M frames/s.

In summary, we have produced an optoelectrocomputational ultra-high-speed camera, and now aim to improve frame rate, the number of sequential images, pixel count, photosensitivity, and artifact reduction. Although the multi-aperture architecture is advantageous in scalability, it requires a special lens array. For compatibility with conventional single-aperture optics, we are now developing a new image sensor with a sub-pixel structure, and we are also exploring applications of our sensor to long-distance time-of-flight range imaging13 and time-resolving 3D microscopy.

We are grateful for the support of S. Okihara, M.-W. Seo, T. Takasawa, K. Yasutomi, and M. Fukuda. This work was partially supported by Grants-in-Aid for Scientific Research (B) 15H03989 and (S) 25220905, and by JSPS KAKENHI grant 15J10262. It was also supported by the VLSI Design and Education Center (VDEC), University of Tokyo, in collaboration with Cadence Corporation, Synopsys Corporation, and Mentor Graphics Corporation.

Keiichiro Kagawa, Futa Mochizuki, Shoji Kawahito
Research Institute of Electronics
Shizuoka University
Hamamatsu, Japan

Keiichiro Kagawa received a PhD in engineering from Osaka University in 2001, and is currently an associate professor. His research interests include high-performance CMOS image sensors, imaging systems, and biomedical applications.

Futa Mochizuki received an ME from Shizuoka University in 2015, and is currently pursuing a PhD. He is a student member of the Institute of Image Information and Television Engineers (ITE) and IEEE. His current interest is in CMOS image sensors.

Shoji Kawahito received a PhD from Tohoku University, Sendai, in 1988. He is currently a professor and is chief technology officer of Brookman Technology Inc. He is a Fellow of the IEEE and ITE. His research interests are in analog circuits and pixel architecture designs for CMOS imagers.

1. A. Periasamy, R. Clegg, FLIM Microscopy in Biology and Medicine, CRC Press, Boca Raton, 2010.
2. A. Payne, A. Daniel, A. Mehta, B. Thompson, C. Bamji, D. Snow, H. Oshima, et al.,  A 523×424 CMOS 3D time-of-flight image sensor with multi-frequency photo-demodulation up to 130MHz and 2GS/s ADC, ISSCC Dig. Tech. Papers, p. 134-135, 2014.
3. M.-W. Seo, S. Kawahito, K. Kagawa, K. Yasutomi, A 0.27e-rms read noise 220μV/e- conversion gain reset-gate-less CMOS image sensor with 0.11μm CIS process, IEEE Electron Device Lett. 36, p. 1344-1347, 2015.
4. M.-W. Seo, K. Kagawa, K. Yasutomi, T. Takasawa, Y. Kawata, N. Teranishi, Z. Li, I. A. Halin, S. Kawahito, A 10.8ps-time-resolution 256×512 image sensor with 2-tap true-CDS lock-in pixels for fluorescence lifetime imaging, ISSCC Dig. Tech. Papers, p. 189-199, 2015.
5. M.-W. Seo, T. S. Suh, T. Iida, T. Takasawa, K. Isobe, T. Watanabe, S. Itoh, K. Yasutomi, S. Kawahito, A low-noise high intrascene dynamic range CMOS image sensor with a 13 to 19b variable-resolution column-parallel folding-integration/cyclic ADC, IEEE J. Solid-State Circuits 47, p. 272-283, 2012.
6. S. Kawahito, G. Baek, Z. Li, S.-M. Han, M.-W. Seo, K. Yasutomi, K. Kagawa, CMOS lock-in pixel image sensors with lateral electric field control for time-resolved imaging, Int'l Image Sens. Wrkshp., p. 1417-1429, 2013.
7. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, Y. Ichioka, Thin observation module by bound optics (TOMBO): concept and experimental verification, Appl. Opt. 40, p. 1806-1813, 2001.
8. E. J. Candès, M. B. Wakin, An introduction to compressive sampling, IEEE Signal Process. Mag. 25, p. 21-30, 2008.
9. T. Arai, J. Yonai, T. Hayashida, H. Ohtake, H. Kujik, T. G. Etoh, A 252-V/lux·s, 16.7-million-frames-per-second 312-kpixel back-side-illuminated ultra-high-speed charge-coupled device, IEEE Trans. Electron Devices 60, p. 3450-3458, 2013.
10. Y. Tochigi, K. Hanzawa, Y. Kato, R. Kuroda, H. Mutoh, R. Hirose, H. Tominaga, K. Takubo, Y. Kondo, S. Sugawa, A global-shutter CMOS image sensor with readout speed of 1-Tpixel/s burst and 780Mpixel/s continuous, IEEE J. Solid-State Circuits 48, p. 329-338, 2013.
11. F. Mochizuki, K. Kagawa, S. Okihara, M.-W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, S. Kawahito, Single-shot 200Mfps 5×3-aperture compressive CMOS imager, ISSCC Dig. Tech. Papers, p. 116-117, 2015.
12. C. Li, W. Yin, Y. Zhang, TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms. http://www.caam.rice.edu/~optimization/L1/TVAL3/ Accessed 15 January 2016.
13. F. Mochizuki, K. Kagawa, M.-W Seo, T. Takasawa, K. Yasutomi, S. Kawahito, A multi-aperture compressive time-of-flight CMOS imager for pixel-wise coarse histogram acquisition, Int'l Image Sens. Wrkshp., p. 178-181, 2015.