
Electronic Imaging & Signal Processing

Panoramic full-frame imaging with monocentric lenses and curved fiber bundles

3D curved waveguides, cut from tapered fiber faceplates, are combined with spherical optics to make a compact, 125 Mpixel/frame, 360° video camera that is useful for immersive virtual reality content.
15 December 2016, SPIE Newsroom. DOI: 10.1117/2.1201611.006666

Panoramic imaging is important for many different applications, including content capture for immersive virtual reality. Although compact 360° cameras can be made from an array of small-aperture ‘smartphone’ imagers, their small (typically 1.1μm) pixels provide low dynamic range. By contrast, digital single-lens-reflex and cinematographic imagers have larger 4–8μm pixels, but require correspondingly longer-focal-length lenses. Conventional ‘fisheye’ lenses are also problematic because they are bulky and have low light collection (typically F/2.8 to F/4, where the F-number is the focal length divided by the aperture diameter).
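The pixel-pitch/focal-length trade-off can be made concrete with the per-pixel instantaneous field of view (IFOV ≈ pixel pitch / focal length). A minimal sketch comparing a 1.1μm smartphone pixel with a 4.5μm full-frame pixel (the 4mm smartphone focal length is an assumed typical value, not taken from this article):

```python
def ifov_urad(pixel_pitch_um, focal_length_mm):
    """Instantaneous field of view per pixel, in microradians."""
    return pixel_pitch_um / (focal_length_mm * 1000.0) * 1e6

# A 1.1 um pixel behind an assumed 4 mm smartphone lens:
smartphone = ifov_urad(1.1, 4.0)   # 275 urad of scene per pixel

# Focal length a 4.5 um pixel needs to match that angular sampling
# (IFOV scales linearly with pitch, so f scales the same way):
f_match = 4.5 / 1.1 * 4.0          # ~16.4 mm
print(smartphone, f_match)
```

This is why large-pixel sensors force longer lenses, and hence bulkier panoramic rigs.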

An alternative path to panoramic imaging is ‘monocentric’ optics, where all surfaces (including the image surface) are concentric hemispheres.1 The symmetry of these lenses means that lateral color and off-axis aberrations (astigmatism and coma) are eliminated. In addition, the simple lens structures can be used to correct for spherical and axial color aberrations to yield extraordinarily wide angle resolution and light collection.2 The image that is produced can be coupled to a conventional focal plane, via a fiber bundle faceplate (with a curved input and flat output face).3 Fiber faceplates are solid glass elements made of small, high-index optical fibers separated by a thin, low-index cladding, used for nonimaging transfer of light between the input and output faces.

In our research within the Defense Advanced Research Projects Agency (DARPA) SCENICC (Soldier Centric Imaging via Computational Cameras) program, we have shown that fiber bundles can reach a spatial resolution of 2μm.4 We have also demonstrated a 30 Mpixel, 124° field-of-view ‘letterbox’-format imager, in which we use a closely fit linear array of six fiber bundles (each coupled to a 5 Mpixel color CMOS sensor with 1.75μm pixels).5 However, refraction at the angled input face of a fiber bundle generally limits the field of a single straight bundle to approximately 50°. Although a 2D array of fiber-coupled sensors can cover the full field of view, this comes at a significant cost in integration complexity. Instead, we have fabricated a full-field panoramic imager with a 3D fiber bundle structure, in which the internal fiber curves couple light, over a full 126° field of view, to a single full-frame sensor.
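The straight-bundle field limit follows from total internal reflection in the fiber cores: in a monocentric lens the chief rays strike a spherically polished bundle face at near-normal incidence, so a ray's angle to a straight fiber's axis is roughly the field angle itself, and guiding fails once that angle exceeds the complement of the core/cladding critical angle. A back-of-the-envelope check (the 1.48 cladding index is an assumed value; the article gives only the 1.81 core index):

```python
import math

n_core = 1.81   # high-index fiber core (from the article)
n_clad = 1.48   # assumed low-index cladding (not stated in the article)

# A meridional ray stays guided while its angle to the fiber axis is
# below the complement of the core/cladding critical angle:
max_axis_angle = math.degrees(math.acos(n_clad / n_core))
print(f"max guided field angle ~ +/-{max_axis_angle:.1f} deg")  # ~ +/-35.1
```

With these assumed indices the limit lands near the ±35° half-field shown for the straight bundle in Figure 2.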

Although the fabrication of such a 3D fiber bundle structure may seem formidable—especially given the high core/cladding index contrast required—imaging fiber bundle tapers (developed for demagnifying fiber faceplates) fortunately have nearly ideal internal structures, where the fiber curves to point toward the center of the lens. For our panoramic imager, we thus couple straight and curved fiber bundles (made from Schott 24AS fiber-optic material) to a monocentric lens (see Figure 1). In our design process for a 3D curved fiber bundle (with the required field of view), we define the taper ratio as well as the input/output surfaces (see Figure 2). Our experimental measurements for a bundle that couples light (over a 126° field of view) from a 12mm-focal-length lens into a 24mm-square full-frame image sensor are also shown in Figure 2.6 Losses were dominated by surface reflections and some absorption within the core of the high-index (n=1.81) fibers.
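The need for a taper can be seen from simple spherical geometry: the curved image surface has a radius equal to the focal length, so its arc across the full field is longer than the flat sensor it must map onto. A rough illustrative sketch (the actual 2.59:1 taper ratio was set by the full 3D bundle design, not by this one-dimensional estimate):

```python
import math

f_mm = 12.0            # monocentric lens focal length = image surface radius
full_field_deg = 126.0
sensor_mm = 24.0       # width of the square full-frame sensor

# Arc length of the spherical image surface across the full field:
arc_mm = f_mm * math.radians(full_field_deg)
print(arc_mm)   # ~26.4 mm of curved image surface per 24 mm of flat sensor
```

Even in one dimension the curved image exceeds the sensor, so the bundle must demagnify as well as flatten.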

Figure 1. Geometry of a monocentric lens (left) and the spherical image surface it forms (right) can be coupled to CMOS focal plane(s) by an array of straight fiber bundles (top) or a single curved fiber bundle (bottom). The F-number is the focal length (f) divided by the lens aperture.

Figure 2. The model of the internal curved fiber structure inside a fiber taper (left) is used to calculate the ray intercept angle of the image on the fiber bundle face (center). Rays incident on the spherically polished face of a single straight fiber bundle reach the critical angle at a ±35° horizontal field of view (HFOV), and are not guided to the sensor. The fiber taper is designed to allow the entire 124° full-angle image to couple to a single focal plane. This was confirmed by experimental measurements (right) of the light transfer through the spherically polished 2.59:1 fiber tapers used in the imager prototypes.

Our curved fiber bundle imager, shown in Figure 3, uses a 12mm-focal-length monocentric lens with an F/1.35 aperture, made from a 7.2mm-diameter core of S-LAL13 surrounded by a 3.6mm-thick layer of S-LAH79 (S-LAL13 and S-LAH79 are optical glasses from Ohara). We use a fixed meniscus to simplify mounting the sensors to the fiber bundles. This two-glass achromat provides a measured 300 line pairs (lp)/mm resolution over the 120° field.5 We use a UV-cure adhesive to attach the fiber bundle to the meniscus lens and to an ON Semiconductor VITA25K full-frame color CMOS sensor. This camera has an integrated field-programmable gate array controller and a PCIe (Peripheral Component Interconnect Express) data output that can support up to 53 frames per second at full resolution. The achievable sensor resolution is set by the coupling between the 2.86μm-pitch fiber cores and the 4.5μm sensor pixels across the adhesive (which ranged in thickness from 2 to ∼4μm along the sensor face). We process each acquired image with a flat-field calibration (to reduce sensor defects and color moiré effects), followed by color demosaicing, gamma correction, block-matching 3D noise reduction, and unsharp masking. The MTF50 and MTF10 (i.e., measures of sharpness) of the fully processed image (shown on the left of Figure 3) were 71 and 113lp/mm, respectively.
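Two of the steps in this processing chain, flat-field calibration and gamma correction, can be sketched in a few lines of pure Python (toy one-dimensional 'images' for illustration; the demosaic and block-matching 3D denoise steps are omitted):

```python
def flat_field(raw, dark, flat):
    """Standard flat-field correction: (raw - dark) / (flat - dark),
    rescaled by the mean gain so overall brightness is preserved."""
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    return [(r - d) / g * mean_gain for r, d, g in zip(raw, dark, gain)]

def gamma_correct(pixels, gamma=2.2):
    """Encode linear intensities in [0, 1] with a display gamma."""
    return [max(0.0, p) ** (1.0 / gamma) for p in pixels]

# Two pixels that saw the same scene but have 2:1 gain mismatch
# (e.g. from fiber-to-pixel coupling variation) are equalized:
print(flat_field([10.0, 5.0], [0.0, 0.0], [20.0, 10.0]))  # -> [7.5, 7.5]
```

In the real pipeline the flat and dark frames come from calibration captures, and the same per-pixel correction also suppresses the fixed moiré pattern between the fiber cores and the color filter array.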

Figure 3. Photograph (center) of the 126° field curved fiber bundle imager integrated with a 25 Mpixel sensor (left), next to a conventional lens on the same sensor (right). Example images obtained from both setups are shown (on the left and right, respectively). The smaller F/1.35 (i.e., curved fiber bundle) monocentric lens causes less image distortion and collects more light than the (F/4) conventional lens.

We used a Canon EF series 8–15mm F/4 USM lens as our performance benchmark, because it provides better angular resolution than the fixed-focal-length wide-angle lenses we have tested. For the example image in Figure 3, we set the fisheye lens to match the 12mm focal length at the center of the field. The resultant image shows the barrel distortion that is typical of reverse telephoto wide-angle lenses. Our monocentric imager has less distortion, since the curved bundle geometry partially corrects for the projection of a spherical image onto a planar sensor. The F/1.35 monocentric imager was brighter than the F/4 fisheye lens over the entire field, although its illumination was less uniform. This nonuniformity arises from the (lossless) fiber core expansion at the edges of the tapered fiber bundle and from the cosine aperture projection loss (50% at a 60° field angle). This loss may be entirely eliminated by an advanced monocentric lens structure7 that we are currently investigating.
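The two brightness figures quoted above are easy to verify: image-plane irradiance scales with the inverse square of the F-number, and the edge-of-field loss follows directly from the cosine projection. A quick numerical check:

```python
import math

# Relative light collection of the F/1.35 monocentric lens vs the F/4 fisheye:
gain = (4.0 / 1.35) ** 2
print(f"{gain:.1f}x more light")  # ~8.8x

# Cosine aperture projection at a 60 degree field angle:
print(f"{math.cos(math.radians(60.0)):.2f}")  # 0.50, i.e. the 50% loss cited
```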

The final outcome of our SCENICC program work has been the panoramic ‘photonic mast’ (PMast) video imager illustrated in Figure 4.8 This compact imager head has five identical 25 Mpixel imagers, connected by a multimode fiber-optic ribbon cable to a rack of five servers that process and store the video stream (each with 1TB of solid-state memory). The approximately 60W dissipated by the imagers when running at full resolution and 24fps is removed by water cooling. The lenses for this imager are slightly modified from those described previously,5 with an F/2.5 aperture, a fixed protective dome, and a removable IR/color/apodization filter that we use to obtain the image shown in Figure 4. Interactive panoramic PMast images, including fully processed video still frames and a brief reduced-resolution video clip, are posted on our website.9
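A rough raw data-rate estimate shows why five servers with fast solid-state storage are needed (the 10-bit pixel depth is an assumption for illustration; the article does not state the raw bit depth):

```python
sensors = 5
mpix = 25e6          # pixels per sensor
fps = 24
bits_per_pixel = 10  # assumed raw bit depth (not given in the article)

rate_gbs = sensors * mpix * fps * bits_per_pixel / 8 / 1e9
storage_tb = sensors * 1.0   # 1 TB of solid-state memory per server
minutes = storage_tb * 1e12 / (rate_gbs * 1e9) / 60
print(f"{rate_gbs:.2f} GB/s raw, ~{minutes:.0f} min of recording")
```

Under these assumptions the head produces 3.75 GB/s of raw video, filling the combined 5TB in roughly 22 minutes, which motivates both the per-sensor server split and the fiber-optic link.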

Figure 4. Photograph of the photonic mast (PMast) head (top left). Five discrete 25 Mpixel images from it are shown (top center), along with a zoomed-in portion of one image (top right) and the full stitched panorama (bottom).

In summary, we have used a monocentric optics approach and 3D curved fiber bundles to realize novel full-field panoramic imagers. We expect a panoramic video camera (such as our PMast video imager) to be the first commercial application of monocentric imaging, because it can capture cinematographic-grade immersive video content without the obtrusive bulk of more conventional multicamera rigs. Someday, we will likely be watching the latest action adventure blockbuster on a virtual reality headset. Until then, we are continuing to conduct research on related exotic panoramic imagers, exploring much longer focal lengths and single-aperture fields of view of more than 180°.

Joseph Ford, Salman Karbasi, Ilya Agurok, Igor Stamenov, Glenn Schuster, Nojan Motamedi, Ash Arianpour, William Mellette
University of California San Diego
La Jolla, CA

Joseph Ford is a professor of electrical and computer engineering at the University of California San Diego. He leads the Photonics Systems Integration Laboratory, which performs advanced free-space physical optical system design, prototyping, and characterization for a range of applications.

Adam Johnson, Ryan Tennill, Rick Morrison, Ron Stack
Distant Focus Corporation
Champaign-Urbana, IL

1. T. Sutton, Panoramic photography, Photograph. J. 6, p. 184-188, 1860.
2. I. Stamenov, I. Agurok, J. E. Ford, Optimization of high-performance monocentric lenses, Appl. Opt. 52, p. 8287-8304, 2013.
3. T. S. Axelrod, N. J. Colella, A. G. Ledebuhr, The wide-field-of-view camera, Energy Technol. Rev., 1988.
4. N. Motamedi, S. Karbasi, J. E. Ford, V. Lomakin, Analysis and characterization of high-resolution and high-aspect-ratio imaging fiber bundles, Appl. Opt. 54, p. 9422-9431, 2015.
5. I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison, J. E. Ford, Panoramic monocentric imaging using fiber-coupled focal planes, Opt. Express 22, p. 31708-31721, 2014.
6. S. Karbasi, I. Stamenov, N. Motamedi, A. Arianpour, A. R. Johnson, R. A. Stack, C. LaReau, Curved fiber bundles for monocentric lens imaging, Proc. SPIE 9579, p. 95790G, 2015. doi:10.1117/12.2188901
7. I. P. Agurok, J. E. Ford, Angle-invariant imaging using a total internal reflection virtual aperture, Appl. Opt. 55, p. 5345-5352, 2016.
8. J. E. Ford, Panoramic high-resolution imaging using spherically-symmetric optics. Presented at SPIE Defense + Commercial Sensing 2016.
9. http://psilab.ucsd.edu Photonic Systems Integration Laboratory, UC San Diego. Accessed 25 October 2016.