Extracting useful information from distorted images with multiframe blind deconvolution

Satellite details were obtained from distorted telescope images restored with a multiframe deconvolution method using a maximum-likelihood object spectrum.
5 April 2007, SPIE Newsroom. DOI: 10.1117/2.1200702.0518

Telescopic viewing through the atmosphere yields blurred, distorted images that render objects such as stars and satellites unrecognizable. This distortion is evident in Figures 1 through 3, which show images of three different satellites obtained in incoherent solar light by a telescope from Astrophysika in Moscow, Russia.

We know that the image from a telescope is a convolution of the satellite's true image and the optical system's impulse, or point, response. For a telescope observing through the atmosphere, neither function is known—the satellite is unfamiliar, and there is atmospheric distortion.
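This forward model can be illustrated with a small numpy sketch. The code below forms a blurred image as the convolution of a true object with a point-spread function (PSF) via the convolution theorem; the point-like "satellite" and the Gaussian PSF are toy assumptions for illustration, not data from the article.

```python
import numpy as np

def blurred_image(obj, psf):
    """Observed image = convolution of the true object with the
    optical system's point (impulse) response, computed in the
    Fourier domain via the convolution theorem."""
    O = np.fft.fft2(obj)
    # ifftshift moves the PSF's center to the array origin,
    # as the FFT convention requires.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(O * H))

# Toy example: a point-like object blurred by a Gaussian PSF.
n = 64
obj = np.zeros((n, n))
obj[n // 2, n // 2] = 1.0
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()                 # normalize so flux is conserved
img = blurred_image(obj, psf)
```

Because the PSF is normalized, the blurred image conserves the object's total flux; the point source simply spreads into a copy of the PSF.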

Different approaches to restoring such images have been tried using real—as opposed to simulated—data from telescopes. One of the most common is blind deconvolution, in which the goal is to restore both functions from their convolution—that is, from the telescopic image.

An early iterative deconvolution method developed by Ayers and Dainty1 using a single-frame approach cannot resolve complex objects such as satellites. Many other methods give very satisfactory results if the main features of objects are recognizable even in blurred frames.2 Still other approaches use a priori distortion models,3 although with actual images we cannot define all possible distortions. When no such prior knowledge is available, an empirical approach, such as the one we describe below, seems the most appropriate.

Figure 1. Image of the Lacrosse-2 satellite obtained by telescope.

Figure 2. Image of the Nadezhda-6 satellite.

Figure 3. Image of the KH-12 satellite.

Our method4 is a multiframe, blind-deconvolution algorithm that incorporates a maximum-likelihood approximation of the object spectrum into the Ayers-Dainty1 algorithm. It processes several distorted frames simultaneously to obtain independent estimates of blurred transfer functions of the optical system. The method makes no assumptions about the object or the impulse responses other than presuming they are positive. Moreover, the algorithm includes no cost functions, input parameters, probability distributions, or analytically specified transfer functions, and it can be used with any type of distortion. Exemplary results of restoration using this approach are detailed below.
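A minimal sketch of the multiframe idea is given below. It is not the authors' published algorithm: it alternates a Wiener-like per-frame PSF estimate with a positivity constraint and pools a least-squares object spectrum across frames, and unlike the published method it does use a small regularization constant (`eps`) and an iteration count, both of which are my assumptions for illustration.

```python
import numpy as np

def multiframe_blind_deconv(frames, n_iter=50, eps=1e-8):
    """Illustrative multiframe blind deconvolution in the spirit of
    the Ayers-Dainty iteration: each pass estimates one PSF per
    frame, enforces positivity, then pools an object-spectrum
    estimate over all frames."""
    frames = [np.asarray(f, float) for f in frames]
    G = [np.fft.fft2(f) for f in frames]        # frame spectra
    obj = np.mean(frames, axis=0)               # initial object guess
    for _ in range(n_iter):
        O = np.fft.fft2(obj)
        # Per-frame transfer-function estimate (regularized division).
        H = [g * np.conj(O) / (np.abs(O)**2 + eps) for g in G]
        # Enforce positivity of each PSF in the spatial domain.
        h = [np.maximum(np.real(np.fft.ifft2(Hk)), 0.0) for Hk in H]
        H = [np.fft.fft2(hk) for hk in h]
        # Pool a least-squares object spectrum over all frames.
        num = sum(g * np.conj(Hk) for g, Hk in zip(G, H))
        den = sum(np.abs(Hk)**2 for Hk in H) + eps
        # Positivity of the object estimate.
        obj = np.maximum(np.real(np.fft.ifft2(num / den)), 0.0)
    return obj
```

The pooled update is what distinguishes the multiframe setting from single-frame blind deconvolution: each frame contributes an independent blur estimate, and averaging over frames in the Fourier domain suppresses frequencies that any one distortion happened to destroy.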


Fourteen frames of 128×128 pixels were used to recover the image of the Lacrosse-2 satellite (see Figures 1 and 4). The restored image (Figure 4), shown at twice the original size to aid visualization, reveals four construction elements: at right, a large, round transmitting antenna in the form of a bowl; beneath it, a long, straight antenna for synthesizing the aperture in time; the satellite body, with a large antenna to its right (possibly for communications); and a flat solar panel directly above the satellite, attached to the body by a vertical truss.

Figure 4. Restored image of Lacrosse-2, enlarged by a factor of 2.

Nine frames of 256×256 pixels were used to reconstruct the Nadezhda-6 satellite, part of a seven-element Russian navigation network used to monitor natural disasters. The original image is shown in Figure 2, and the restored image in Figure 5, which is enlarged by a factor of two in Figure 6. Satellite elements evident in the restored image include a conical body; an antenna for Earth sounding; a second antenna, possibly for aperture synthesis; and a small solar panel under the body.

Figure 5. Restored image of Nadezhda-6, 43rd iteration.

Figure 6. Restored image of Nadezhda-6, enlarged by a factor of 2.

As with Nadezhda-6, the reconstruction of KH-12 used nine frames of 256×256 pixels. The original image is shown in Figure 3, its restored version in Figure 7, and its two-times enlargement in Figure 8. The enlargement reveals a satellite body that is narrow at its nose and round at its bottom; a telescope extending out of the body, perpendicular to it; and a solar panel (one of two) that folds out, also perpendicular to the body, during operation. This panel simultaneously screens the telescope from direct sunlight and captures energy from the Sun. The vector normal to the telescope lens is kept perpendicular to the direction of the Sun's rays, which is optimal for photographing and prevents overexposure of the lens.

Figure 7. Restored image of KH-12, 722nd iteration.

Figure 8. Restored image of KH-12, enlarged by a factor of 2.

Our results confirm that telescope images blurred by passage through the atmosphere can be restored. The technique has application where an optical system on Earth must provide information for making a correction to a satellite located far away. The method could also be used to observe other astronomical objects, such as double stars that appear to be single. Closer to home, our restoration method could be used to image moving objects on the Earth's surface through rain, snow, smoke, or even very thick fog. We intend to test the algorithm on objects other than satellites.

Yulia Zhulina
Radiolocation, Vympel Corporation
Moscow, Russia