
Optical Design & Engineering

Innovative lens improves imaging for surveillance systems

By managing distortion and pixel coverage with the Panomorph lens, target detection and recognition improve significantly.
18 August 2006, SPIE Newsroom. DOI: 10.1117/2.1200607.0290

Historically, the army, border-security agencies, and transportation-security planners have recognized the advantages of panoramic imaging systems with fields of view of up to 180°. Such advantages include increased coverage with fewer cameras, simultaneous tracking of multiple targets, full-horizon detection, and immersion capability. However, existing panoramic imagers such as fisheye, mirror-based, and PAL (panoramic annular lens) designs use only a fraction of the effective pixels (the basic unit for video screen images) on their sensors. This inefficiency arises from blind zones and from the circular or annular (ring-like) image areas commonly called footprints.

These drawbacks seriously limit the usefulness of panoramic imagers for surveillance. To solve these problems, we designed a new imager, the Panomorph lens, which combines full-area coverage (with a 180° field of view) with the ability to increase resolution where it is needed. The Panomorph lens thus covers a larger zone and distributes its image more efficiently on the sensor.1,2

Two important parameters for the Panomorph lens are the amount and the location of the resolution. Introduced at the optical design stage, these parameters allow the Panomorph lens to provide higher resolution in a defined zone than any other standard panoramic imager. By comparison, a fisheye lens on a standard NTSC camera (i.e., one using the standard television format of North America and Japan) requires up to six times more pixels on its sensor to produce the same resolution.2

For the security and surveillance of buildings such as indoor parking garages, corporate offices, retail stores, shipping ports, storage facilities, airport terminals, and residential homes, a camera is mounted on the ceiling with its lens facing down (see Figure 1, left). The most significant objects are located in the zone at the periphery of the lens's field of view, also called the green zone or zone of interest. The green zone is the most important part of the image because it is where faces can be identified (facial recognition). Thus, to maximize optical performance, the lens is designed to increase the instantaneous field of view (in other words, the number of pixels per degree) in this green zone. Each pixel therefore contributes to the overall performance of the lens: even one pixel above or below the ideal number results in inefficient use of the sensor.

Figure 1. (left) A camera is mounted on the ceiling in an indoor surveillance application. (right) The graph shows the pixel-to-angle coverage, with the dashed line representing a linear pixel coverage.
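The pixel-to-angle trade-off plotted in Figure 1 (right) can be sketched numerically. In this hedged model, a fixed pixel budget along one image axis is split between a zone of interest and the rest of the field; the 480-pixel column, 60° zone width, and 2× density boost are illustrative assumptions, not ImmerVision's design data:

```python
# Illustrative model of pixel-per-degree budgeting along one image axis.
# Assumed numbers (480 px, 180 deg FOV, 60 deg zone, 2x boost) are
# hypothetical, chosen only to show the trade-off.

def uniform_density(pixels: int, fov_deg: float) -> float:
    """Pixels per degree for a linear (distortion-free) mapping."""
    return pixels / fov_deg

def zoned_density(pixels: int, fov_deg: float, zone_deg: float, boost: float):
    """Split the pixel budget so the zone of interest gets `boost` times
    the density of the remaining field. Returns (zone, elsewhere)."""
    # Solve: d_out * (fov - zone) + boost * d_out * zone = pixels
    d_out = pixels / ((fov_deg - zone_deg) + boost * zone_deg)
    return boost * d_out, d_out

uniform = uniform_density(480, 180)            # ~2.67 px/deg everywhere
zone, elsewhere = zoned_density(480, 180, 60, 2.0)
print(f"uniform: {uniform:.2f} px/deg")
print(f"zoned:   {zone:.2f} px/deg in zone, {elsewhere:.2f} px/deg outside")
```

The dashed line in Figure 1 (right) corresponds to the uniform case; the Panomorph curve steepens (more pixels per degree) inside the green zone at the expense of the less important directions.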

The circular footprint (see Figure 2, left) of a standard panoramic lens used with a CCD is inefficient because it under-utilizes the available pixels on the sensor. Introducing an anamorphic (image-distorting) element to produce an elliptically shaped footprint (see Figure 2, right) yields a significant improvement. (An elliptical footprint is produced by different focal lengths along the two perpendicular axes, and thus different image magnifications in each direction.) Consequently, on a standard 4:3-format sensor, we achieved a gain of 30% in the number of pixels along the horizontal axis. In addition, Figure 2 (right) shows the distortion correction that spreads the green zone over a larger number of pixels on the sensor: both anamorphic coverage and distortion correction are present. As shown, the Panomorph lens increases the height of the figures in the image.

Figure 2. (left) The fisheye lens produces a circular footprint image whereas (right) the Panomorph footprint is elliptical.

We calculated that the fisheye, mirror, and PAL panoramic imagers all use less than 60% of the sensor area to image the field of view. On the other hand, the Panomorph lens uses up to 80% of the sensor area, over 30% more than any other panoramic imager. By managing distortion, the number of pixels available in any defined green zone is controlled, which allows for better detection and recognition of the target. With this approach, the number of pixels used in the defined green zone is doubled.
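The utilization figures above follow from simple geometry. This sketch, assuming an idealized 4:3 sensor with a circular footprint spanning the sensor height and an elliptical footprint spanning the full sensor, reproduces the quoted numbers:

```python
import math

# Idealized 4:3 sensor; only the aspect ratio matters.
w, h = 4.0, 3.0

# Circular fisheye/PAL footprint: diameter equals the sensor height.
circle_area = math.pi * (h / 2) ** 2
circle_util = circle_area / (w * h)      # 3*pi/16 ~ 0.59 -> "less than 60%"

# Elliptical Panomorph footprint: semi-axes span the full sensor.
ellipse_area = math.pi * (w / 2) * (h / 2)
ellipse_util = ellipse_area / (w * h)    # pi/4 ~ 0.785 -> "up to 80%"

# Horizontal gain from the anamorphic stretch: full width vs. circle diameter.
horizontal_gain = w / h                  # 4/3 -> ~33%, the quoted ~30%

print(f"circle:  {circle_util:.1%}")
print(f"ellipse: {ellipse_util:.1%}")
print(f"horizontal gain: {horizontal_gain - 1:.0%}")
```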

In the example shown in Figure 1, the Panomorph lens provides a face-recognition range of between 2.35 and 3.10m using an NTSC 360-kilopixel (KPx) sensor (see Figure 3). With the Panomorph lens, the detection range differs between the two axes because of the anamorphosis. This range is about four to six times longer than the range provided by a fisheye lens on the same sensor. To produce a similar recognition range with a fisheye lens or PAL, a 2-megapixel (MPx) sensor is needed.
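The recognition range can be estimated with small-angle geometry. In this hedged sketch, only the 30 pixels/face criterion comes from Figure 3; the 0.16m face width and the per-axis pixel densities are illustrative assumptions, chosen to land near the quoted 2.35-3.10m range:

```python
import math

FACE_WIDTH_M = 0.16      # assumed face width (illustrative)
PIXELS_REQUIRED = 30     # recognition criterion from Figure 3

def recognition_range(px_per_deg: float) -> float:
    """Largest distance at which a face still spans PIXELS_REQUIRED pixels."""
    # Angle the face must subtend to cover the required pixels:
    min_angle_deg = PIXELS_REQUIRED / px_per_deg
    # Small-angle approximation: angle (rad) ~ face_width / distance.
    return FACE_WIDTH_M / math.radians(min_angle_deg)

# With anamorphosis the pixel density differs per axis, so the range does too.
# Densities below are illustrative, not measured values.
for axis, density in [("long axis", 10.0), ("short axis", 7.7)]:
    print(f"{axis}: {recognition_range(density):.2f} m")
```

The key point the sketch captures is that range scales linearly with pixels per degree, which is exactly the quantity the Panomorph design concentrates in the green zone.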

Figure 3. Face recognition range (30 pixels/face).

We designed the Panomorph lens by managing distortion and pixel coverage. Combined with a closed-circuit television (CCTV) camera on a 0.3MPx sensor, the lens provides the same range of detection as a fisheye lens on a 2MPx sensor. The Panomorph thus requires roughly seven times fewer pixels than the fisheye: fewer pixels to manage, transfer, and record.
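The savings in pixels to manage, transfer, and record can be illustrated with a back-of-envelope data-rate comparison, assuming uncompressed 8-bit monochrome video at 30 frames per second (an illustrative assumption, not a figure from the article):

```python
# Hypothetical uncompressed video data rates for the two sensor sizes.
FPS = 30
BYTES_PER_PIXEL = 1  # assumed 8-bit monochrome

def data_rate_mb_per_s(pixels: int) -> float:
    """Raw video data rate in MB/s for a sensor with `pixels` pixels."""
    return pixels * BYTES_PER_PIXEL * FPS / 1e6

panomorph = data_rate_mb_per_s(300_000)    # 0.3 MPx sensor
fisheye = data_rate_mb_per_s(2_000_000)    # 2 MPx sensor
print(f"Panomorph: {panomorph:.0f} MB/s, fisheye: {fisheye:.0f} MB/s "
      f"({fisheye / panomorph:.1f}x more data)")
```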

The next research step is to develop unique and efficient Panomorph lenses—each requiring a different pixel coverage—for such camera types as infrared, endoscopy, space, and projection.

Simon Thibault
Optical Division, ImmerVision
Montréal, Québec, Canada
Physics, Engineering Physics and Optics, Laval University
Québec, Canada
Simon Thibault is the principal optical designer at ImmerVision. Thibault joined the company in April 2005 after five years as head of the optical design department at the National Optics Institute (INO). He holds 12 patents and is the author/co-author of more than 70 journal articles. Thibault is also associate professor at Laval University, where he has taught optical design since 1999. In addition, he has been a committee member of SPIE's Optical System Design conference for the last three years and a committee member of the Current Developments in Lens Design and Optical Engineering conference at SPIE's Annual Meeting.