Compound-eye-based multidimensional imager

Compound-eye optics enable a simple and scalable 2D scanner to observe more than 2D optical signals without sacrificing spatial or temporal resolution.

11 December 2014
Ryoichi Horisaki, Tomoya Nakamura and Jun Tanida

In imaging applications, such as security or biomedical imaging, increasing the number of dimensions of acquirable optical signals could help identify and classify objects in the image. However, conventional approaches to gaining additional information, such as depth or spectral data, with a 2D image sensor sacrifice spatial or temporal resolution. We have developed two promising approaches using compound-eye optics for future imaging systems that capture and exploit the additional dimensions without sacrificing resolution.

Our first approach uses compressive sensing to observe multidimensional signals: the optical design modulates each dimension, and a sparsity-based reconstruction solves the resulting ill-posed inverse problem. The second approach uses a dimensionally invariant optical design and signal processing to observe a 2D image of a 3D object with a wide field of view (FOV) and an extended depth of field (DOF).
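As a rough illustration of the sparsity-based reconstruction step, the following Python sketch recovers a sparse vector from an underdetermined linear measurement y = Ax using iterative soft thresholding (ISTA). The random sensing matrix, signal sizes, and regularization weight are illustrative assumptions rather than the actual parameters of our systems.

```python
# Minimal ISTA sketch: recover a sparse signal x from underdetermined
# measurements y = A @ x (an ill-posed linear inverse problem).
# Generic illustration only, not the reconstruction code of our systems.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                          # signal length, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)  # assumed random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # compressive measurements (m << n)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                       # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```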

The visual organs of insects and crustaceans are compound eyes. They are composed of lenslets, partitions, and detectors as shown in Figure 1. In the apposition type, each detector optically connects to a single lenslet. In the superposition type, a detector connects to multiple lenslets. Both types have unique features and are used in various innovative imaging systems, notably for compact imaging hardware.1


Figure 1. (a) Apposition and (b) superposition compound eyes.

Optical signals have multiple dimensions, including 3D position (x, y, z), time (t), wavelength (λ), and polarization (p). Conventional approaches to observing these parameters with an image sensor compromise the lateral spatial resolution (x, y) or the temporal resolution. We have introduced an apposition compound eye for single-shot multidimensional imaging based on compressive sensing (CS).2 In this approach, each basic optical unit of the compound eye, consisting of a lenslet, a detector (sensor), and a partition, applies a different modulation along every physical dimension except the lateral spatial ones (x, y). Figure 2 shows schemes for multispectral imaging based on this approach. The object datacube (x, y, λ) is either sheared along the λ-axis, as in Figure 2(a), or multiplied by spectral transparencies, as in Figure 2(b). We then use a sparsity-based algorithm to reconstruct the full-sized datacube.
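The following Python sketch illustrates the forward model of the shearing scheme in Figure 2(a): each wavelength slice of the datacube is displaced by a wavelength-dependent amount before all slices sum on the 2D detector. The integer-pixel shear per band and the toy datacube are assumptions made for illustration; the real optics produce a continuous dispersion, and reconstruction then inverts this model with a sparsity prior.

```python
# Schematic forward model for the shearing scheme of Figure 2(a):
# each wavelength slice of the (x, y, lambda) datacube is shifted by a
# wavelength-dependent amount before being summed on the 2D detector.
# Integer-pixel shear per band is an assumption made for illustration.
import numpy as np

def shear_and_integrate(datacube, shear_px=1):
    """datacube: array of shape (n_bands, H, W). Returns a coded 2D measurement."""
    n_bands, H, W = datacube.shape
    sensor = np.zeros((H, W + shear_px * (n_bands - 1)))
    for b in range(n_bands):
        offset = b * shear_px                 # dispersion grows with wavelength
        sensor[:, offset:offset + W] += datacube[b]
    return sensor

# Toy datacube: 8 spectral bands of a 32x32 scene.
rng = np.random.default_rng(1)
cube = rng.random((8, 32, 32))
y = shear_and_integrate(cube)
print(y.shape)   # (32, 39): one coded 2D snapshot of the 3D datacube
```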


Figure 2. Multispectral compound-eye imaging based on (a) shear and (b) multiplication. λ: Wavelength.

We have applied this approach to depth imaging3 and multispectral polarimetric imaging.4 It is also applicable to wide-dynamic-range and large-FOV imaging.5 We previously proposed other CS-based methods for multidimensional imaging.6, 7 The approach described here reduces the size of the imaging hardware compared with those earlier methods.

Capturing a 2D image of an object spread over a large 3D space (x, y, z) is challenging. Wavefront coding is a well-known technique for increasing the DOF of an imaging system,8 and several wide-FOV imaging systems based on compound-eye optics have been proposed.9 We have proposed two approaches that simultaneously increase the DOF and FOV using a compound eye without any additional optical element.

The first approach uses superposition compound-eye optics.10 Spherical aberration arises from a spherical arrangement of the basic optical units or from distortion within them, as shown in Figure 3.11 The spherical aberration increases the DOF by acting as a pseudo-depth-invariant point spread function (PSF).12 Furthermore, the monocentric configuration of a spherically designed superposition compound eye increases the FOV. After the imaging optics capture an intermediate image with the 3D-invariant PSF, deconvolution with that PSF restores a sharp image with a large DOF and FOV. This method is also applicable to image projection,13 in which case a pre-deconvolved image is projected through the optics.
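The restoration step can be sketched as a standard Wiener deconvolution in the frequency domain. The Gaussian PSF below is a stand-in assumption for the pseudo-depth-invariant PSF of the superposition optics, which in practice would be measured or modeled.

```python
# Minimal Wiener-deconvolution sketch: restore a sharp image from an
# intermediate image blurred by a (roughly) depth-invariant PSF.
# The Gaussian PSF is an assumed stand-in for the actual optics' PSF.
import numpy as np

def gaussian_psf(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter; nsr is the noise-to-signal ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)      # transfer function of the PSF
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))        # restored estimate of the scene

# Toy example: blur a random "scene" with the PSF, then restore it.
rng = np.random.default_rng(2)
scene = rng.random((64, 64))
psf = gaussian_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
restored = wiener_deconvolve(blurred, psf)
```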


Figure 3. Superposition compound-eye imaging with an extended depth of field (DOF) based on (a) spherical arrangement and (b) distortion of erect basic optical units, which present uninverted images to the image sensor. z1, z2: Object distances.

The second approach is based on apposition compound-eye optics,14 in which the compound eye acts as a microcamera array and captures the light field.15 This enables us to computationally realize an arbitrary camera configuration, including a wavefront-coded camera. From the physically captured light field of a 3D object, we compute the intermediate image that a wavefront-coded camera would have formed. A sharp object image is then restored by deconvolution with the wavefront-coded PSF: see the experimental results in Figure 4. A single captured elemental image of a 3D object, Figure 4(a), has a large DOF but is noisy because of the high F-number. The intermediate image with wavefront coding, Figure 4(c), has a larger DOF than the one without wavefront coding, Figure 4(b). The final deconvolution result with wavefront coding, Figure 4(d), is sharp, low in noise, and retains the large DOF. This method is also applicable to image projection.14
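The light-field synthesis underlying this approach can be sketched with a basic shift-and-add operation over the elemental images, which computationally focuses the microcamera array at a chosen depth. The integer-pixel disparity model and the toy lenslet grid below are illustrative assumptions; synthesizing a wavefront-coded camera additionally applies the coded phase when combining the elemental images.

```python
# Shift-and-add sketch: synthesize an image focused at a chosen depth
# from the elemental images of an apposition compound eye (a microcamera
# array sampling the light field). The integer-pixel disparity model is
# an assumption made for illustration.
import numpy as np

def shift_and_add(elemental, positions, disparity):
    """elemental: (N, H, W) elemental images; positions: N lenslet grid
    coordinates (u, v); disparity: pixels of shift per unit lenslet offset,
    which selects the synthetic focal depth."""
    N, H, W = elemental.shape
    out = np.zeros((H, W))
    for img, (u, v) in zip(elemental, positions):
        dy, dx = int(round(u * disparity)), int(round(v * disparity))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / N

# Toy 3x3 lenslet array viewing a flat random scene with 1-pixel disparity.
rng = np.random.default_rng(3)
scene = rng.random((32, 32))
positions = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
elemental = np.stack([np.roll(np.roll(scene, -u, axis=0), -v, axis=1)
                      for u, v in positions])
refocused = shift_and_add(elemental, positions, disparity=1.0)
```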


Figure 4. Experimental results of extending the DOF with an apposition compound eye. (a) An image captured by a single basic optical unit. (b) The intermediate image from conventional reconstruction. (c) The intermediate image from wavefront-coded reconstruction. (d) Deconvolution of (c).

In summary, we have presented several approaches to observing multidimensional optical signals based on compound eyes. They are promising foundations for next-generation imaging systems. Future work will focus on integrating each of them into compact imaging hardware, for which state-of-the-art manufacturing technologies, such as wafer-level fabrication, will be key.


Ryoichi Horisaki, Tomoya Nakamura, Jun Tanida
Osaka University
Suita, Japan

Ryoichi Horisaki is an assistant professor, working mainly on computational imaging. He received a PhD from Osaka University in 2010.

Tomoya Nakamura is a PhD candidate whose research interests include computational imaging. He received an MS from Osaka University in 2012.

Jun Tanida is a professor and researches multi-aperture imaging and photonic DNA computation. He received a PhD from Osaka University in 1986.


References:
1. J. W. Duparré, F. C. Wippermann, Micro-optical artificial compound eyes, Bioinspir. Biomim. 1(1), p. R1-R16, 2006.
2. D. L. Donoho, Compressed sensing, IEEE Trans. Info. Theory 52(4), p. 1289-1306, 2006.
3. R. Horisaki, S. Irie, Y. Ogura, J. Tanida, Three-dimensional information acquisition using a compound imaging system, Opt. Rev. 14(5), p. 347-350, 2007.
4. R. Horisaki, X. Xiao, J. Tanida, B. Javidi, Feasibility study for compressive multi-dimensional integral imaging, Opt. Express 21(4), p. 4263-4279, 2013.
5. R. Horisaki, J. Tanida, Multidimensional TOMBO imaging and its applications, Proc. SPIE 8165, p. 816516, 2011. doi:10.1117/12.892432
6. R. Horisaki, J. Tanida, Multi-channel data acquisition using multiplexed imaging with spatial encoding, Opt. Express 18(22), p. 23041-23053, 2010.
7. R. Horisaki, J. Tanida, A. Stern, B. Javidi, Multidimensional imaging using compressive Fresnel holography, Opt. Lett. 37(11), p. 2013-2015, 2012.
8. E. R. Dowski Jr., W. T. Cathey, Extended depth of field through wave-front coding, Appl. Opt. 34(11), p. 1859-1866, 1995.
9. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, M. Fendler, Demonstration of an infrared microcamera inspired by Xenos peckii vision, Appl. Opt. 48(18), p. 3368-3374, 2009.
10. T. Nakamura, R. Horisaki, J. Tanida, Computational superposition compound eye imaging for extended depth-of-field and field-of-view, Opt. Express 20(25), p. 27482-27495, 2012.
11. R. Horisaki, T. Nakamura, J. Tanida, Superposition imaging for three-dimensionally space-invariant point spread functions, Appl. Phys. Express 4(11), p. 112501, 2011.
12. P. Mouroulis, Depth of field extension with spherical optics, Opt. Express 16(17), p. 12995-13004, 2008.
13. T. Nakamura, R. Horisaki, J. Tanida, Computational superposition projector for extended depth of field and field of view, Opt. Lett. 38(9), p. 1560-1562, 2013.
14. T. Nakamura, R. Horisaki, J. Tanida, Computational phase modulation in light field imaging, Opt. Express 21(24), p. 29523-29543, 2013.
15. M. Levoy, P. Hanrahan, Light field rendering, Proc. ACM SIGGRAPH, p. 43-54, 1996. https://graphics.stanford.edu/papers/light/light-lores-corrected.pdf