Remote Sensing

Restoration of atmospheric-turbulence-degraded video

A new algorithm allows optical distortions caused by atmospheric turbulence to be suppressed in long-range video.
28 March 2006, SPIE Newsroom. DOI: 10.1117/2.1200602.0158

Image degradation associated with atmospheric turbulence often occurs when viewing remote scenes: the objects of interest will appear blurred, and the severity of this blurring will typically change over time. In addition, the stationary scene may appear to waver spatially. A classic example of this kind of distortion is the jet on the tarmac at an airport on a hot day: although the jet is not moving, we see dynamic optical distortions that are generally most visible behind the running engines.

The goal of the work we present here is to compensate for optical distortion and to produce a clear and accurate restoration of the scene. Building on the classical distortion models employed in the image-restoration literature, we have elected to model the degradation as having two components: a dispersive component (blurring) and a time-varying distortion component (geometric distortion). This designation fits the types of distortions we are attempting to address and enables us to construct a reasonably robust algorithm.

To address the dispersive component, we employ a model based on the optical transfer function (OTF) suggested by Hufnagel and Stanley.1 The Hufnagel-Stanley OTF resembles a Gaussian blurring function with a parameter λ in the exponent that controls the blur severity. As λ increases in value, so does the degree of the blur. In the physical world, a number of factors affect the blurring distortion that we observe, such as temperature, humidity, elevation, and wind speed. In most cases these atmospheric conditions are not known, nor is there generally any external information available to help specify the blur function. Thus, the restoration is blind in that regard and we must find a way to approximate λ without additional information.
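A minimal sketch of the Hufnagel-Stanley OTF described above, written as a frequency-domain function H(u, v) = exp(-λ(u² + v²)^(5/6)); the function name and the use of normalized FFT frequencies are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hufnagel_stanley_otf(shape, lam):
    """Hufnagel-Stanley OTF: H(u, v) = exp(-lam * (u^2 + v^2)^(5/6)).

    `shape` is the (rows, cols) size of the frequency grid; `lam`
    controls blur severity (larger lam -> stronger blur).
    """
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]   # vertical spatial frequencies
    v = np.fft.fftfreq(cols)[None, :]   # horizontal spatial frequencies
    return np.exp(-lam * (u ** 2 + v ** 2) ** (5.0 / 6.0))
```

At zero frequency the OTF equals one (no attenuation of the image mean), and every nonzero frequency is attenuated more strongly as λ grows, which matches the Gaussian-like blur behavior described above.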

Once λ is estimated, we use a Wiener restoration filter to compensate for the blur. That is to say, we model the degraded image as the linear convolution of the original image and the point spread function associated with the Hufnagel-Stanley OTF plus noise. The restored image is obtained by convolving the degraded image with a Wiener restoration filter. Since the latter is a function of λ, the first order of business is to determine its value.
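The Wiener compensation step can be sketched as follows, assuming a frequency-domain OTF `H` (such as the Hufnagel-Stanley function above) and a noise-to-signal ratio `nsr`; both parameter names and the default value are hypothetical:

```python
import numpy as np

def wiener_restore(degraded, H, nsr=0.01):
    """Compensate blur with a Wiener restoration filter.

    `H` is the frequency-domain OTF assumed in the degradation model;
    `nsr` is an assumed noise-to-signal power ratio (hypothetical value).
    """
    G = np.fft.fft2(degraded)
    # Wiener filter: conj(H) / (|H|^2 + NSR); reduces to inverse
    # filtering 1/H as the noise term goes to zero.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

With low noise, applying this filter to an image blurred by the same `H` recovers the original almost exactly; the `nsr` term prevents the division from amplifying noise at frequencies where `H` is small.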

To do this, we employ kurtosis as our criterion. Generally speaking, the kurtosis measures how prone to outliers a given distribution is. This is indirectly related to smoothness or blur. Consequently, an image with low kurtosis tends to indicate sharpness or, in our case, the restored image closest to the original. To use this for restoration, we first compute a set of restored images, each using a different value of λ, and then choose the value that gives the image with the minimum kurtosis. Further discussion of the kurtosis measure for restoration can be found in a paper we published last year.2
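The λ search described above can be sketched as a loop over candidate values, scoring each restoration by its sample kurtosis; the function names and the choice of Pearson's kurtosis definition are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis of the pixel intensities (Pearson's definition)."""
    x = np.asarray(x, dtype=float).ravel()
    m = x.mean()
    s2 = np.mean((x - m) ** 2)
    return np.mean((x - m) ** 4) / (s2 ** 2)

def select_lambda(degraded, restore, candidates):
    """Restore with each candidate lambda and keep the value whose
    restoration has minimum kurtosis.

    `restore(image, lam)` is any blur-compensation routine, e.g. a
    Wiener filter parameterized by the Hufnagel-Stanley lambda.
    """
    scores = [kurtosis(restore(degraded, lam)) for lam in candidates]
    return candidates[int(np.argmin(scores))]
```

Passing the restoration routine as a function keeps the λ search independent of the particular blur model, which is convenient when experimenting with different OTFs.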

To address the geometric distortion component, we consider the turbulence-induced motion between consecutive frames of the video. Using an adaptive control-grid-interpolation warping algorithm,3 we track the motion of each pixel in the video and compute the displacement centroid along the trajectories.4 The centroid location is then used as the displacement vector to compensate for motion distortion. Since turbulence typically introduces low amplitude quasi-random or quasi-periodic fluctuations in the motion vectors, the centroid calculation serves as a filter to mitigate these effects, while at the same time preserving true motion that may occur naturally.
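The centroid filtering idea can be sketched as a temporal average over each pixel's trajectory of displacement vectors; the array layout and function name are assumptions for illustration, and the actual control-grid-interpolation tracking is outside this snippet's scope:

```python
import numpy as np

def centroid_compensate(disp_stack):
    """`disp_stack` holds per-pixel displacement vectors for each frame,
    shape (n_frames, rows, cols, 2).

    The centroid (temporal mean) of each trajectory estimates the
    stable pixel position: turbulence adds quasi-random, roughly
    zero-mean jitter, which averages out, while consistent true motion
    survives the mean. The per-frame deviation from the centroid is
    the jitter to be removed when warping each frame.
    """
    centroid = disp_stack.mean(axis=0)          # (rows, cols, 2)
    jitter = disp_stack - centroid[None, ...]   # per-frame deviation
    return centroid, jitter
```

Because the turbulence-induced fluctuations are quasi-random about the true trajectory, their temporal mean is near zero, so the centroid acts as the low-pass filter described above.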

Combining the restoration processes for the dispersive and geometric distortion components, we obtain an improved algorithm for suppressing atmospheric turbulence relative to earlier work.4 To illustrate the improvement, Figure 1 shows an original frame of a sequence that was degraded (via simulation) by atmospheric turbulence. The small truck in the video is in motion, which poses a significant challenge to many algorithms of this type. Figure 2 shows the image frame restored using the method described above. The results are far more dramatic when viewing the video sequence. Nonetheless, the comparison enables one to see both the suppression of blur and the compensation of the geometric distortion, most visible in the area outlined in red. Additional information about the proposed method may be found in our recent SPIE paper.5

Figure 1. A frame of a parking-lot video sequence degraded by simulated atmospheric turbulence.

Figure 2. Restored version of the frame shown in Figure 1 using the new algorithm.

Dalong Li
Center for Signal and Image Processing, Georgia Institute of Technology
Atlanta, GA
Dalong Li is currently working toward his PhD at Georgia Tech. He received his master's degree from the Chinese Academy of Sciences in Beijing. His major research interests are in image/video restoration, pattern recognition, and machine learning. Over the years, Dalong has completed internships at Hewlett-Packard Laboratories, Eastman Kodak Research Lab, The MathWorks, and Philips Research.
Mark Smith
Purdue University
West Lafayette, IN
Russell Mersereau
Center for Signal and Image Processing, Georgia Institute of Technology
Atlanta, GA