Electronic Imaging & Signal Processing

Dealing with camera calibration in videogrammetry

A new high-precision camera calibration method that effectively compensates for lens distortion can easily be implemented.
7 February 2007, SPIE Newsroom. DOI: 10.1117/2.1200701.0509

Camera calibration is a fundamental issue in photogrammetry, affecting performance in applications as varied as videogrammetry, machine vision, industrial control, and object tracking. Several calibration methods have been proposed, including the direct linear transformation (DLT),1 the radial alignment constraint (RAC),2 and Zhang's method.3 All build on the pinhole model, which represents an ideal camera projection under the assumption that the object point, the perspective center of the lens, and the ideal image point all lie on a straight line. Lu et al.4 expressed this as a set of collinearity equations that relate the coordinates of a 3D point in the object frame to the camera frame.
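
The collinearity condition can be written as a minimal sketch in code. This is not the authors' implementation: the focal length `f` and principal point `(cx, cy)` are illustrative names for generic pinhole intrinsics, and the point is assumed to already be expressed in the camera frame.

```python
def pinhole_project(point_cam, f, cx, cy):
    """Project a 3D point in the camera frame onto the image plane
    under the ideal pinhole model: the object point, the perspective
    center, and the image point are collinear, so image coordinates
    follow by similar triangles."""
    X, Y, Z = point_cam
    u = f * X / Z + cx  # horizontal image coordinate
    v = f * Y / Z + cy  # vertical image coordinate
    return u, v
```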

The main purpose of camera calibration is to determine the projective model and then resolve the camera elements with the goal of achieving high precision and stability. However, imperfections in lens design and assembly prevent perfect agreement between the actual camera system and the ideal pinhole model, mainly as a result of lens distortions between ideal and actual image points. Thus, besides resolving the camera elements, the lens distortion problem must be addressed.

In our work, we consider radial and decentering distortion, the two main types of distortion that occur in a lens.5 Radial distortion is the dominant component; decentering distortion is often negligible.2 The approximate relationship between radial distortion and radial distance is illustrated in Figure 1. At small radial distances the radial distortion is so small that the distortion profile is effectively linear over the first few points. If more than five points with known object coordinates can be projected close to the principal point, an approximately ideal projective model can be computed. Moreover, when these object points are projected onto the image plane using this model, the resulting image points can be treated as ideal. Our approach thus yields not only the camera calibration parameters but also the extent of the lens distortions.
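
The article does not give its distortion equations, but the two distortion types it names are conventionally combined in the Brown-Conrady model, sketched below. The coefficient names (`k1`, `k2` radial; `p1`, `p2` decentering) are the standard ones, not necessarily those used by the authors.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering (p1, p2) distortion to
    ideal image coordinates (x, y) measured from the principal point,
    following the common Brown-Conrady formulation."""
    r2 = x * x + y * y                    # squared radial distance
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 # radial scale factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

At the principal point (r = 0) the distortion vanishes, consistent with the near-linear behavior of the profile at small radial distances noted above.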

Our method divides the camera calibration procedure into two steps. First, the camera parameters are calibrated using a linear projective model. Second, the nonlinear lens distortions are determined based on the model obtained in the first step.
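
For a planar grid, the linear projective model of the first step can be estimated with a DLT-style homography. The sketch below is a generic illustration of that step, not the authors' code; it solves the stacked linear system with an SVD.

```python
import numpy as np

def estimate_homography(obj_pts, img_pts):
    """Step 1 (linear projective model): estimate the 3x3 homography
    mapping planar object points (X, Y) to image points (u, v) by DLT,
    taking the null vector of the stacked system via SVD."""
    A = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale

def project(H, X, Y):
    """Project a planar object point with the linear model."""
    w = H @ np.array([X, Y, 1.0])
    return w[0] / w[2], w[1] / w[2]
```

The second step then compares observed image points against points projected through this model, as described below.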

Figure 1. Gaussian radial distortion profiles for a Videk Megaplus CCD camera with a 20mm lens.6

Based on this method, we designed a calibration grid for our experiments. The grid, shown in Figure 2(a), consists of 851 (23 rows by 37 columns) coplanar black-filled circular targets. These divide into two groups according to their role in the calibration: base points (near the principal point, or geometric center), used to construct the projective model, and correction points (the others), used to compute the distortion model. The camera is fixed, and the calibration grid is moved on a guide apparatus under precision control. We photograph the grid with a digital camera before and after each movement.

Figure 2(b) shows the image points extracted with the sub-pixel technique implemented in PhotoModeler (EOS Systems Inc.). The effect of lens distortion is obvious, especially at the image edges. The projective model and the camera elements are easily obtained; we then project the correction points onto the image plane as ideal points, shown in Figure 2(c), and compute the nonlinear distortion model with a least-squares method. Figure 2(d) shows the resulting map of deviation vectors in the image plane.
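
The second step, fitting the distortion model to the deviations between ideal and observed points, reduces to linear least squares when only the radial terms are kept. The sketch below assumes a two-coefficient radial model and pixel coordinates measured about the principal point `(cx, cy)`; it is an illustration of the technique, not the authors' code.

```python
import numpy as np

def fit_radial(ideal_pts, observed_pts, cx, cy):
    """Step 2 (distortion model): fit radial coefficients (k1, k2) by
    linear least squares from the deviations between ideal projected
    points and observed target centers."""
    A, b = [], []
    for (xi, yi), (xo, yo) in zip(ideal_pts, observed_pts):
        x, y = xi - cx, yi - cy        # ideal point about principal point
        r2 = x * x + y * y
        # Radial model: x_obs - x_ideal = x*(k1*r^2 + k2*r^4), same for y.
        A.append([x * r2, x * r2 * r2]); b.append(xo - xi)
        A.append([y * r2, y * r2 * r2]); b.append(yo - yi)
    (k1, k2), *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return k1, k2
```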

Figure 2. (a) Calibration grid (partial). (b) The center of the real observed targets extracted with sub-pixel technology. (c) The ideal projective image points. (d) The deviation vector map in the image plane.

This new calibration method is theoretically sound and, in our experiments, efficient, accurate, and easy to implement. In further work we plan to expand its scope to a wide variety of photogrammetric applications.

Jun Wang, Naiguang Lu
School of Electronic Engineering, Beijing University of Posts and Telecommunications
Department of Electronic Information Engineering, Beijing Institute of Machinery
Beijing, China

Mingli Dong
Department of Electronic Information Engineering, Beijing Institute of Machinery
Department of Optic-electronic Engineering, Beijing Institute of Technology
Beijing, China

Chunhui Niu
Department of Electronic Information Engineering, Beijing Institute of Machinery

Jun Wang is a PhD student at the Beijing University of Posts and Telecommunications.

Prof. Naiguang Lu is the director of the Department of Electronic Information Engineering at the Beijing Institute of Machinery.

Mingli Dong is a professor at the Department of Electronic Information Engineering at the Beijing Institute of Machinery.

Dr. Chunhui Niu is a teacher in the Department of Electronic Information Engineering at the Beijing Institute of Machinery.