Defense & Security
Creating a distortion-free omnidirectional alignment system
Tzung-Hsien Ho, John Rzasa, and Christopher Davis
High-resolution (milliradian) laser pointing/acquisition systems, useful for free-space communication system alignment, laser weapons, and lidar applications, can be achieved by using a perspective imaging system as a sensor.
6 March 2006, SPIE Newsroom. DOI: 10.1117/2.1200602.0051
To achieve an autonomous pointing capability, an optical alignment system requires a sensor that provides angular information with a high degree of accuracy. A perspective imaging system is an ideal solution because the lens is optimized as a first-order imaging device.
The mapping between world and image coordinates can be described by a pinhole model with two main elements. First is the extrinsic matrix: a perspective transformation from homogeneous world coordinates (R^{4}) to camera coordinates (R^{3}). This 3×4 matrix comprises a rotation matrix (R) and a translation vector (T). Second is the intrinsic matrix: an affine transformation from camera (R^{3}) to image coordinates (R^{3}). This is an upper-triangular matrix containing the focal length, the skewness, the aspect ratio, and the image center. Typically, the incident angle (θ), defined as the angle between the optical axis and the incident ray, satisfies tan θ = r/f, where r is the image radius and f the focal length. The geometric model is shown in Figure 1(a).
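The pinhole model above can be sketched in a few lines of code. This is a minimal illustration with assumed numeric values (focal length, image center, pose), not the paper's calibration data:

```python
import numpy as np

# Minimal pinhole-projection sketch (illustrative values, not the paper's).
f, s, a = 800.0, 0.0, 1.0          # focal length (px), skew, aspect ratio
cx, cy = 320.0, 240.0              # image center
K = np.array([[f,     s, cx],      # intrinsic (upper-triangular) matrix
              [0.0, a*f, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                      # rotation: camera aligned with world
T = np.array([0.0, 0.0, 5.0])      # translation: world origin 5 units ahead
E = np.hstack([R, T[:, None]])     # 3x4 extrinsic matrix [R | T]

Xw = np.array([1.0, 0.5, 0.0, 1.0])  # homogeneous world point (R^4)
x = K @ E @ Xw                       # world -> camera -> image coordinates
u, v = x[:2] / x[2]                  # perspective divide

# Incident angle recovered from tan(theta) = r / f
r = np.hypot(u - cx, v - cy)
theta = np.arctan2(r, f)
```

The perspective divide is what makes the model first-order: straight world lines stay straight in the image, which is exactly the property barrel distortion later destroys.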
Figure 1. (a) Regular camera model and (b) imaging model for wide-angle lenses.
From this relation, as the incident angle (θ) increases, the image radius must increase correspondingly for a given focal length. The image radius is limited by the size of the charge-coupled device (CCD) array. As a result, the field of view (FOV) of a regular imaging system is restricted to roughly 15° to 40°. Increasing the viewing angle inevitably introduces higher-order aberrations: particularly strong barrel distortion.^{1}
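A quick calculation shows why the perspective FOV saturates: the required image radius r = f·tan θ diverges as θ approaches 90°, while the sensor half-width is fixed. The focal length and sensor size below are assumed round numbers for illustration:

```python
import math

# For a perspective lens, the image radius needed at incident angle theta
# is r = f * tan(theta); the CCD half-width caps the usable field of view.
f_mm = 8.0            # assumed focal length in mm
half_ccd_mm = 2.4     # assumed half-width of a small CCD (~4.8 mm across)

for theta_deg in (15, 40, 70, 85):
    r = f_mm * math.tan(math.radians(theta_deg))
    fits = r <= half_ccd_mm
    print(f"theta={theta_deg:2d} deg -> radius {r:7.2f} mm, fits sensor: {fits}")

# Maximum half-FOV this sensor/lens pair can image:
theta_max = math.degrees(math.atan(half_ccd_mm / f_mm))
```

For these assumed numbers the usable half-FOV is only about 17°; a wide-angle lens achieves more by deliberately compressing the radius, which is the barrel distortion the calibration must undo.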
Consequently, when a field of view larger than a perspective imaging system can deliver is required, other sensors with omnidirectional capability are preferred. For example, Saw^{2} is developing an optical alignment system that uses the Global Positioning System (GPS) to determine the location of a single node with respect to the Earth (e.g., in East-North-Up, or ENU, coordinates). The coordinates of other nodes are acquired via a radio-frequency (RF) communication device to compute the pointing vector. However, this approach suffers from three problems: GPS does not work indoors; an RF signal does not require line of sight, which leads to false alarms in optical alignment systems; and, in mobile applications, GPS requires a coordinate transformer to convert the Earth coordinates into the local alignment coordinates. This transformer is typically another sensor (e.g., a gyroscope) whose error accumulates over time, leading to erroneous and unreliable pointing results. Another alternative is to measure the phase difference across an RF antenna array, an approach that has been used widely in radar. The resulting system is usually bulky (depending on the operating frequency), can only retrieve angles when the target distance is large compared with the antenna size, and suffers from false-alarm problems as well. We therefore believe that an optical imaging system is the best choice for optical alignment, provided the higher-order distortions can be eliminated.
Our group has developed a wide-angle lens calibration algorithm with two steps: intrinsic-matrix estimation and barrel-distortion estimation. Our calibration object is L-shaped and consists of two sets of orthogonal coordinates.
The first step recovers the transformation caused by the CCD array, including skewness, aspect ratio, and image center. Note that the focal length is excluded because the incident angle (θ) is not preserved in a wide-angle imaging system, as shown in Figure 1(b). Figure 2 shows that the center of the CCD array differs from the image center.^{3} This implies that (320,240) cannot be used directly as the image center of a 640×480 camera.
Figure 2. Distorted image taken with an Omnitech wide-angle lens. Shown in yellow, the star is the center of the CCD, the circle is the principal center estimated by our algorithm, and the 21 plus signs are the selected control points for our algorithm.
The geometric basis of our algorithm lies in the preservation of the angle in the XY plane (φ), defined as the angle between the incident ray's projection onto the image plane and the X-axis. Figure 1(b) shows this geometric relationship.
The preservation of φ can be used to retrieve the distortion center. The two sets of orthogonal coordinates in the calibration target provide estimates of the skewness and aspect ratio. After intrinsic-matrix estimation, the extrinsic matrix can be recovered except for the depth of the calibration object, which we denote t_{z}.
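The property being exploited can be verified directly: a purely radial distortion rescales the radius about the distortion center but leaves the azimuth φ untouched. The center location and the toy distortion polynomial below are assumptions for illustration only:

```python
import numpy as np

# Radial (barrel) distortion changes only the radius about the true
# distortion center, so the azimuth phi about that center is preserved --
# the invariant the calibration uses to locate the center.
center = np.array([310.0, 236.0])           # assumed distortion center (px)
pt = np.array([480.0, 320.0])               # an undistorted image point

v = pt - center
r = np.hypot(v[0], v[1])
phi = np.arctan2(v[1], v[0])                # azimuth before distortion

r_d = r * (1.0 - 1e-6 * r**2)               # toy barrel-distortion curve
pt_d = center + r_d * np.array([np.cos(phi), np.sin(phi)])

v_d = pt_d - center
phi_d = np.arctan2(v_d[1], v_d[0])          # azimuth after distortion
```

Only when φ is measured about the *true* center does it match the perspective prediction; measuring about a wrong center (such as the CCD midpoint) breaks the invariant, which is what makes the center recoverable.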
Wide-angle distortion models fall into two categories: the R-R model, which relates the distorted image radius P to its corresponding perspective image radius P', and the R-A model, which relates the distorted image radius P to the ray's incident angle θ.
We chose the R-R model in our implementation because it provides a simpler procedure for jointly estimating the depth of the object (t_{z}) and the distortion coefficients. Detailed discussion of the distortion model and its performance relative to the R-A model can be found elsewhere.^{6} Once the R-R model is adopted, each corresponding point pair yields two linear equations in the distortion coefficients and t_{z}, so we can form a maximum-likelihood estimate of the four coefficients.
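The attraction of a model that is linear in its coefficients is that the fit reduces to least squares. The sketch below shows that structure with an assumed odd-polynomial R-R form; the paper's actual parameterization also carries the unknown depth t_{z}, which is omitted here for brevity:

```python
import numpy as np

# Linear least-squares fit of an assumed R-R distortion polynomial:
# distorted radius modeled as an odd polynomial of the perspective radius.
# (Simplified: the paper's version also estimates the object depth t_z.)
def fit_rr(rho_p, rho_d):
    A = np.column_stack([rho_p, rho_p**3, rho_p**5])  # design matrix
    coef, *_ = np.linalg.lstsq(A, rho_d, rcond=None)  # LS = ML under Gaussian noise
    return coef

def apply_rr(coef, rho_p):
    return coef[0]*rho_p + coef[1]*rho_p**3 + coef[2]*rho_p**5

# Synthetic check: recover known coefficients from noiseless control points.
true = np.array([1.0, -0.25, 0.01])
rho_p = np.linspace(0.05, 1.5, 21)      # 21 control points, as in the paper
rho_d = apply_rr(true, rho_p)
est = fit_rr(rho_p, rho_d)
```

With noiseless synthetic data the coefficients are recovered exactly; with real control points the least-squares solution is the maximum-likelihood estimate under Gaussian measurement noise.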
Figure 3 shows the results of a calibration. The wide-angle imaging lens we used, from Omnitech, has a semi-FOV of up to 95° on a 1/3in CCD array with a resolution of 640×480. In this experiment, 21 control points were selected. The corrected image is shown in Figure 3.
Figure 3. A corrected scene (with FOV ± 70°) after our algorithm. The 21 plus signs (in black) indicate the control points after correction.
We also compared our results with the characteristic curve (incident angle versus projection radius) posted on the Omnitech website.^{4} The coefficients we extracted from their curve using the R-R model were [164.84, 1.66×10^{-3}, 2.79×10^{-8}], versus our estimation results of [167.26, 2.31×10^{-3}, 1.08×10^{-8}]. We then selected 190 angles uniformly distributed within ±70° and computed the difference between the angles recovered from the two sets of coefficients. The maximum angular error in this test was smaller than 1°, caused mainly by measurement noise. The error histogram is plotted in Figure 4.
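An angular comparison of this kind can be sketched as follows: map each test angle to a radius with one coefficient set, numerically invert the other set's curve, and take the difference. The radius model below (an odd polynomial in θ) is an assumption for illustration; the paper's exact R-R parameterization may differ, so the error values produced here are not the paper's:

```python
import numpy as np

# Hedged sketch of the angular-error comparison between two coefficient
# sets.  radius() is an ASSUMED polynomial model, not the paper's exact one.
def radius(coef, theta):
    return coef[0]*theta + coef[1]*theta**3 + coef[2]*theta**5

web = np.array([164.84, 1.66e-3, 2.79e-8])   # from the vendor's curve
est = np.array([167.26, 2.31e-3, 1.08e-8])   # our calibration result

thetas = np.radians(np.linspace(0.0, 70.0, 190))   # 190 test angles

# Forward-map with one set, invert the other on a dense monotonic grid.
r_web = radius(web, thetas)
grid = np.radians(np.linspace(0.0, 75.0, 5000))
theta_back = np.interp(r_web, radius(est, grid), grid)

err_deg = np.degrees(np.abs(theta_back - thetas))
print(f"max angular error: {err_deg.max():.3f} deg")
```

The inversion via `np.interp` is valid because the radius curve is monotonic in θ over the tested range.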
Figure 4. Angular error histogram with 190 different incident angles (from 0° to 70°) while using the estimated coefficients.
Once the sensor provides correct angular information, the mapping between the sensor coordinates and the local alignment coordinates is only a 3×3 linear transformation for targets at sufficiently large distances.^{7} For short-distance applications, two or more cameras must be used; their mapping can easily be described by a trifocal or quadrifocal tensor.^{5}
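For the distant-target case, the whole coordinate hand-off is one matrix multiply: convert the calibrated (θ, φ) pair to a unit ray and rotate it into the alignment frame. The mounting rotation below is an assumed example, not a value from the paper:

```python
import numpy as np

# For distant targets, mapping a calibrated viewing direction from sensor
# coordinates to local alignment coordinates is a single 3x3 rotation.
def direction_from_angles(theta, phi):
    """Unit ray from incident angle theta and azimuth phi (radians)."""
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

# Assumed mounting rotation: sensor tilted 30 degrees about the X-axis.
a = np.radians(30.0)
M = np.array([[1.0, 0.0,        0.0       ],
              [0.0, np.cos(a), -np.sin(a)],
              [0.0, np.sin(a),  np.cos(a)]])

d_sensor = direction_from_angles(np.radians(20.0), np.radians(45.0))
d_align = M @ d_sensor          # pointing vector in alignment coordinates
```

Because the transformation is a pure rotation it preserves the ray's unit length, so the result can be fed directly to a gimbal pointing controller.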
The resolution of CCD arrays has increased dramatically in recent years, but camera FOV has not increased correspondingly because of the limits of perspective vision. Here we have described a wide-angle lens correction scheme that we hope will enhance the measurement capability of cameras. The technique has been incorporated into free-space optical communication systems at the University of Maryland.
Authors
Tzung-Hsien Ho, John Rzasa, Christopher Davis
Department of Electrical and Computer Engineering
University of Maryland
College Park, MD
Tzung-Hsien Ho is a PhD candidate in electrical engineering at the University of Maryland, College Park. He received his BS from National Taiwan University and his MS from the University of Maryland. He conducts research on pointing, acquisition, and tracking systems for optical wireless communication links. In addition, he has presented papers related to his research in pointing, acquisition, and tracking within both SPIE's Photonics West and Optics and Photonics symposia.
John Rzasa is a graduate student currently designing the next generation of gimbals to be used in adaptivepointing and tracking research. He received his BSEE from the University of Maryland in 2005.
Christopher C. Davis is Professor of Electrical and Computer Engineering at the University of Maryland, College Park. He is a Fellow of both the IEEE and the Institute of Physics. His current research interests include optical wireless, atmospheric turbulence, nano-optics and plasmonics, and optical sensors. In addition, he is currently conference cochair of SPIE's Free-Space Laser Communications conference and has published many papers in SPIE proceedings.


