
Defense & Security

Creating a distortion-free omnidirectional alignment system

High-resolution (milliradian) laser pointing and acquisition systems, useful for free-space communication alignment, laser weapons, and lidar applications, can be built by using a perspective imaging system as the angular sensor.
6 March 2006, SPIE Newsroom. DOI: 10.1117/2.1200602.0051

To achieve an autonomous pointing capability, an optical alignment system requires a sensor to provide angular information with a high degree of accuracy. A perspective imaging system is an ideal solution because the lens is optimized as a first-order imaging device.

The mapping between world and image coordinates can be described by a pinhole model with two main elements. First is the extrinsic matrix: a perspective transformation from homogeneous world coordinates (R4) to camera coordinates (R3). This 3×4 matrix comprises a rotation matrix (R) and a translation vector (T). Second is the intrinsic matrix: an affine transformation from camera coordinates (R3) to image coordinates (R3). This is an upper-triangular matrix containing the focal length, the skew, the aspect ratio, and the image center. Typically, the incident angle (θ), defined as the angle between the optical axis and the incident ray, can be computed as θ = arctan(r/f), where r is the image radius and f is the focal length. The geometric model is shown in Figure 1(a).
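The two-matrix pipeline above can be sketched in a few lines of code. This is an illustrative example only: the matrix values, the `project` helper, and the sample point are hypothetical, not taken from the paper.

```python
import numpy as np

def project(point_w, K, R, T):
    """Project a 3D world point to pixel coordinates via the pinhole model."""
    extrinsic = np.hstack([R, T.reshape(3, 1)])   # 3x4 matrix [R | T]
    p_cam = extrinsic @ np.append(point_w, 1.0)   # world (R4) -> camera (R3)
    p_img = K @ p_cam                             # camera -> image coordinates
    return p_img[:2] / p_img[2]                   # perspective division

# Upper-triangular intrinsic matrix: focal length f, skew s,
# aspect ratio a, image center (cx, cy) -- all illustrative values.
f, s, a, cx, cy = 800.0, 0.0, 1.0, 320.0, 240.0
K = np.array([[f,     s, cx],
              [0.0, a*f, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)       # camera aligned with the world axes
T = np.zeros(3)
u, v = project(np.array([0.1, 0.0, 1.0]), K, R, T)

# Incident angle theta recovered from the image radius r: theta = arctan(r/f)
r = np.hypot(u - cx, v - cy)
theta = np.arctan(r / f)
```

For this on-axis camera the recovered θ equals arctan(0.1/1.0), the true angle of the sample ray.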

Figure 1. (a) Regular camera model and (b) imaging model for wide-angle lenses.

From this relation, as the incident angle (θ) increases, the image radius must increase correspondingly for a given focal length. The image radius is bounded by the size of the charge-coupled device (CCD) array, which is limited. A regular imaging system is therefore restricted to a narrow field of view (FOV), typically between 15° and 40°. Increasing the viewing angle inevitably introduces higher-order aberrations: particularly strong barrel distortion.1

Consequently, when a field of view larger than a perspective imaging system can provide is required, other sensors with omnidirectional capability are preferred. For example, Saw et al.2 are developing an optical alignment system that uses the Global Positioning System (GPS) to determine the location of a single node with respect to the Earth (e.g., in east-north-up, or ENU, coordinates). The coordinates of other nodes are acquired via a radio-frequency (RF) communication device to compute the pointing vector. However, this approach suffers from three problems: GPS does not work indoors; an RF signal does not require line of sight, which leads to false alarms in an optical alignment system; and, in mobile applications, GPS requires a coordinate transformer to convert the Earth coordinates into the local alignment coordinates. This transformer is typically another sensor (e.g., a gyroscope) whose error accumulates over time, leading to erroneous and unreliable pointing results. Another alternative is to measure the phase difference with an RF antenna array, an approach used widely in radar. The resulting system is usually bulky (depending on the operating frequency), can only retrieve angles when the target distance is large compared with the antenna size, and suffers from the same false-alarm problem. We therefore believe that an optical imaging system is the best choice for optical alignment, provided the higher-order distortions can be eliminated.

Our group has developed a wide-angle lens calibration algorithm that includes two steps: intrinsic-matrix estimation and barrel-distortion estimation. Our calibration object is L-shaped, and consists of two sets of orthogonal coordinates.

The first step recovers the transformation caused by the CCD array, including the skew, aspect ratio, and image center. Note that the focal length is excluded because the incident angle (θ) is not preserved in a wide-angle imaging system, as shown in Figure 1(b). Figure 2 shows that the center of the CCD array differs from the image center.3 This implies that (320,240) cannot be used directly as the image center of a 640×480 camera.

Figure 2. Distorted image taken with an Omnitech wide-angle lens. Shown in yellow: the star is the center of the CCD, the circle is the principal center estimated by our algorithm, and the 21 plus signs are the control points selected for our algorithm.

The geometric basis of our algorithm lies in the preservation of the angle in the X-Y plane (φ), defined as the angle between the incident ray's projection on the image plane and the X-axis. Figure 1(b) shows this geometric relationship.

The preservation of φ can be used to retrieve the distortion center. The two sets of orthogonal coordinates in the calibration target provide the estimates of the skew and aspect ratio. After intrinsic-matrix estimation, the extrinsic matrix can be recovered except for the depth of the calibration object, which we denote tz.
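The reason φ survives the distortion can be illustrated numerically: a purely radial (barrel) distortion about the true image center rescales each point's radius but leaves its in-plane angle unchanged, which is what makes φ usable for locating the distortion center. The distortion function and all values below are a hypothetical sketch, not the paper's model.

```python
import numpy as np

def barrel_distort(pts, center, k1=-1e-6):
    """Apply a simple radial distortion r' = r * (1 + k1 * r^2) about `center`."""
    d = pts - center
    r = np.hypot(d[:, 0], d[:, 1])
    scale = 1.0 + k1 * r**2          # radial rescaling only
    return center + d * scale[:, None]

center = np.array([310.0, 245.0])    # assumed distortion center (illustrative)
pts = np.array([[400.0, 300.0], [150.0, 100.0], [500.0, 245.0]])
dist = barrel_distort(pts, center)

# In-plane angle phi about the center, before and after distortion
phi_true = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
phi_dist = np.arctan2(dist[:, 1] - center[1], dist[:, 0] - center[0])
```

The two φ arrays agree to numerical precision; only when the wrong center is assumed does the apparent φ of a point change under distortion, and that discrepancy is what an estimator can minimize.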

The wide-angle distortion model falls into two categories: the R-R model, which relates the distorted image radius P to its corresponding perspective image radius P′, and the R-A model, which relates the distorted image radius P to the ray's incident angle θ.

We chose the R-R model for our implementation because it provides a simpler procedure for estimating the object depth (tz) along with the distortion coefficients. A detailed discussion of the distortion model and its performance relative to the R-A model can be found elsewhere.6 Once the R-R model is adopted, each corresponding point pair yields two linear equations containing the distortion coefficients and tz. We can thus obtain maximum-likelihood estimates of the four coefficients.
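Because each correspondence contributes linear equations in the unknowns, the coefficients can be recovered by linear least squares. The sketch below fits an R-R style odd polynomial relating the two radii; the polynomial form, the coefficient values, and the noise level are assumptions for illustration, and the coupling with the depth tz is omitted.

```python
import numpy as np

# Synthetic radii pairs: P = c1*P' + c2*P'^3 + c3*P'^5 (assumed model form)
rng = np.random.default_rng(0)
p_persp = np.linspace(10.0, 300.0, 21)          # 21 control-point radii (pixels)
c_true = np.array([1.0, -2.0e-6, -1.0e-12])
p_dist = c_true[0]*p_persp + c_true[1]*p_persp**3 + c_true[2]*p_persp**5
p_dist += rng.normal(0.0, 0.05, p_persp.shape)  # simulated measurement noise

# Each correspondence gives one linear equation in the coefficients.
A = np.column_stack([p_persp, p_persp**3, p_persp**5])
c_est, *_ = np.linalg.lstsq(A, p_dist, rcond=None)
```

With noise-free data this recovers `c_true` exactly; with noise, the residuals of the fit stay at the noise level, which is the behavior a maximum-likelihood estimate under Gaussian noise should exhibit.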

Figure 3 shows the results of a calibration. The wide-angle lens we used, from Omnitech, has a semi-FOV of up to 95° on a 1/3in CCD array with a resolution of 640×480. In this experiment, 21 control points were selected; the corrected image is shown in Figure 3.

Figure 3. A corrected scene (with FOV ± 70°) after our algorithm. The 21 plus signs (in black) indicate the control points after correction.

We also compared our results with the characteristic curve (incident angle versus projection radius) posted on the Omnitech website.4 The coefficients extracted from their curve using the R-R model were [164.84, -1.66e-3, -2.79e-8], versus our estimates of [167.26, -2.31e-3, -1.08e-8]. We then selected 190 angles uniformly distributed within ±70° and computed the difference between the angles given by the two sets of coefficients. The maximum angular error in this test was smaller than 1°, caused mainly by measurement noise. The error histogram is plotted in Figure 4.
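The comparison can be reproduced in outline. The paper does not spell out the parameterization of the characteristic curve, so the odd-polynomial form r(θ) = a1·θ + a2·θ³ + a3·θ⁵ (θ in radians) used below is an assumption; with it, the two coefficient sets disagree by roughly a degree at the edge of the field, the same order as the reported error.

```python
import numpy as np

datasheet = np.array([164.84, -1.66e-3, -2.79e-8])   # from the Omnitech curve
estimated = np.array([167.26, -2.31e-3, -1.08e-8])   # from the calibration

def radius(theta, a):
    """Assumed characteristic curve: radius as odd polynomial in theta (rad)."""
    return a[0]*theta + a[1]*theta**3 + a[2]*theta**5

# 190 incident angles uniformly distributed from 0 to 70 degrees
thetas = np.radians(np.linspace(0.0, 70.0, 190))
r_data = radius(thetas, datasheet)

# Invert the estimated model numerically via a dense monotone lookup table.
grid = np.radians(np.linspace(0.0, 75.0, 20001))
theta_back = np.interp(r_data, radius(grid, estimated), grid)

# Worst-case disagreement between the two coefficient sets, in degrees
max_err_deg = np.degrees(np.max(np.abs(theta_back - thetas)))
```

The interpolation inversion is valid because both assumed curves are monotonically increasing over the tested range.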

Figure 4. Angular error histogram with 190 different incident angles (from 0° to 70°) while using the estimated coefficients.

Once the sensor is capable of providing correct angular information, the mapping between the sensor coordinates and the local alignment coordinates is only a 3×3 linear transformation for targets at sufficiently large distances.7 For short-distance applications, two or more cameras must be used; their mapping can be described by a trifocal or quadrifocal tensor.5
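For the far-target case, that 3×3 transformation amounts to rotating the measured pointing direction into the alignment frame. The mounting rotation and angles below are illustrative placeholders; in practice the rotation would come from system calibration.

```python
import numpy as np

def direction_from_angles(theta, phi):
    """Unit pointing vector from incident angle theta and in-plane angle phi."""
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

# Example mounting rotation: sensor rotated 30 deg about the alignment z-axis.
ang = np.radians(30.0)
R_align = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                    [np.sin(ang),  np.cos(ang), 0.0],
                    [0.0,          0.0,         1.0]])

d_sensor = direction_from_angles(np.radians(20.0), np.radians(45.0))
d_align = R_align @ d_sensor      # pointing vector in the alignment frame
```

Because the transformation is a rotation, it preserves the length of the direction vector, so the result is still a unit pointing vector.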

The resolution of CCD arrays has increased dramatically in recent years, but camera FOV has not increased correspondingly because of the limits of perspective imaging. Here we have described a wide-angle-lens correction scheme that we hope will extend the measurement capability of cameras. The technique has been incorporated into free-space optical communication systems at the University of Maryland.

Tzung-Hsien Ho, John Rzasa, Christopher Davis
Department of Electrical and Computer Engineering, University of Maryland
College Park, MD
Tzung-Hsien is a PhD candidate in electrical engineering at the University of Maryland, College Park. He received his BS from National Taiwan University and his MS from the University of Maryland. He conducts research on pointing, acquisition, and tracking systems for optical wireless communication links. In addition, he has presented papers related to his research in pointing, acquisition, and tracking within both SPIE's Photonics West and Optics and Photonics symposia.
John Rzasa is a graduate student currently designing the next generation of gimbals to be used in adaptive-pointing and tracking research. He received his BSEE from the University of Maryland in 2005.
Christopher C. Davis is Professor of Electrical and Computer Engineering at the University of Maryland, College Park. He is a Fellow of both the IEEE and the Institute of Physics. His current research interests include optical wireless, atmospheric turbulence, nano-optics and plasmonics, and optical sensors. In addition, he is currently conference co-chair of SPIE's Free Space Laser Communications conference and has published many papers in SPIE proceedings.

1. M. Born and E. Wolf, Principles of Optics.
2. W. Saw, H. Rafai, and J. Sluss, Free-space optical alignment system: using GPS, Proc. SPIE 5712, 2005.
3. R. Wilson and S. Shafer, What is the center of the image?, J. Opt. Soc. Am. A 11 (11), pp. 2946-2955, 1994.
4. http://www.omnitech.com/fisheye.htm
5. R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision.
6. Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (11), pp. 1330-1334, 2000.