Real-time processing for long-range imaging

A hybrid system that incorporates advanced adaptive optics and algorithm acceleration techniques can compensate for wavefront aberrations caused by turbulence and improve image quality.
29 May 2015
Jony Liu, Gary W. Carhart, Leonid A. Beresnev, John McElhenny, Christopher Jackson, Garrett Ejzak, Tyler Browning, Furkan Cayci and Fouad Kiamilev

For tactical long-range imaging and reconnaissance systems, atmospheric turbulence constantly causes image distortion and degradation. These problems present a significant challenge to target identification, recognition, and combat operation tracking activities. The conventional solution is to acquire the necessary images and videos, and then conduct the subsequent processing or analysis. In modern warfare situations, however, it is critical to obtain useful processed images and video data in real time to support decision-making requirements.

Our team at the Army Research Laboratory (ARL) previously developed an advanced adaptive optics (AO) system to increase the speed of image processing.1 This system contained a set of high-performance deformable mirrors (DMs) and a fast stochastic parallel gradient descent (SPGD) control system. The AO system enables the compensation of turbulence-induced wavefront aberrations and results in a significant improvement to image and video quality.
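To illustrate the control principle, the Python sketch below performs one two-sided SPGD iteration: all DM channels are perturbed in parallel, the image-quality metric is measured for the positive and negative perturbations, and the command vector is nudged along the estimated gradient. The metric function, gain, and perturbation amplitude are illustrative placeholders, not the parameters of our controller.

    # Minimal sketch of one two-sided SPGD update. `measure_metric(u)` is a
    # placeholder for applying the command vector u to the DM and reading back
    # a scalar image-quality metric (for example, a beacon intensity).
    import numpy as np

    def spgd_step(u, measure_metric, gain=0.5, delta=0.02, rng=np.random.default_rng()):
        """One SPGD iteration over all actuator channels in parallel."""
        # Random +/-delta perturbation applied to every channel simultaneously.
        perturbation = delta * rng.choice([-1.0, 1.0], size=u.shape)
        j_plus = measure_metric(u + perturbation)
        j_minus = measure_metric(u - perturbation)
        # Gradient estimate from the metric difference; step uphill to maximize it.
        return u + gain * (j_plus - j_minus) * perturbation

    # Example: iterate on a 31-channel DM command vector.
    # u = np.zeros(31)
    # for _ in range(1000):
    #     u = spgd_step(u, measure_metric)

In practice, the gain and perturbation amplitude must be tuned to balance convergence speed against metric noise.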

We have recently further developed the AO system by applying a sophisticated digital synthetic imaging and processing technique. We use this ‘lucky-region’ fusion (LRF) approach to mitigate image degradation over a large field of view.2 With our LRF algorithm, we extract sharp regions from each image (obtained from a series of short-exposure frames) and then fuse them back into a final improved image. The LRF approach requires substantial processing power to generate fused images and videos. As such, it has typically been implemented on a PC and used as a post-detection method. A regular processor (or even a graphics processing unit), however, may not be sufficient for the required real-time image extraction, processing, and reconstruction. We therefore use a ‘hardware acceleration’ approach, for which we have developed hardware electronics and implemented the LRF algorithm on a field-programmable gate array (FPGA).
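As a simplified illustration of the fusion step, the Python sketch below keeps, within each local region, the content of whichever short-exposure frame scored highest on a sharpness measure. The locally averaged gradient magnitude and the 16-pixel window are stand-in assumptions; the published LRF algorithm2 defines its own sharpness metric and fusion weighting.

    # Minimal sketch of lucky-region fusion on a stream of grayscale frames.
    # The sharpness measure and window size are illustrative choices.
    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def sharpness_map(frame, window=16):
        """Locally averaged gradient magnitude as a per-pixel sharpness score."""
        gx, gy = sobel(frame, axis=0), sobel(frame, axis=1)
        return uniform_filter(np.hypot(gx, gy), size=window)

    def fuse_stream(frames, window=16):
        """Yield a running fused image that keeps the locally sharpest content seen so far."""
        frames = iter(frames)
        fused = next(frames).astype(float)
        best = sharpness_map(fused, window)
        for frame in frames:
            frame = frame.astype(float)
            score = sharpness_map(frame, window)
            lucky = score > best                    # regions where the new frame wins
            fused = np.where(lucky, frame, fused)   # splice the 'lucky' regions in
            best = np.maximum(score, best)
            yield fused                             # synthetic (fused) output frame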

At the ARL, we have developed the complete technology required to fabricate the necessary DMs (see Figure 1), together with the associated control algorithms. We have also successfully developed and experimentally tested a delayed-SPGD control program. The setup of our AO imaging laboratory for these tests is shown in Figure 2. We used live images and videos, taken from 2.3km away on an approximately horizontal path, for our experiments, as well as a far-field laser beacon as the metric signal for the SPGD control mechanism. Using the SPGD algorithm and two asynchronous DMs, we performed successful wavefront compensation for the received images and videos.
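The ‘delayed’ variant accounts for the latency between sending a DM command and seeing its effect in the metric. One common way to handle this, sketched below in Python, is to keep recent perturbations in a queue and credit each metric change to the perturbation that actually produced it several iterations earlier. The delay length, gain, and one-sided update shown here are assumptions for illustration, not a description of our control program.

    # Minimal sketch of a delayed-metric SPGD loop. `apply_and_measure(u)` is a
    # placeholder that sends the command u to the DM and returns the (lagged)
    # image-quality metric; `delay` is the assumed loop latency in iterations.
    from collections import deque
    import numpy as np

    def delayed_spgd(u, apply_and_measure, steps=1000, gain=0.5, delta=0.02,
                     delay=2, rng=np.random.default_rng()):
        pending = deque(maxlen=delay + 1)   # perturbations still 'in flight'
        j_prev = None
        for _ in range(steps):
            perturbation = delta * rng.choice([-1.0, 1.0], size=u.shape)
            j = apply_and_measure(u + perturbation)   # metric lags the command
            pending.append(perturbation)
            if j_prev is not None and len(pending) > delay:
                # Credit the metric change to the perturbation from `delay` steps ago.
                u = u + gain * (j - j_prev) * pending[0]
            j_prev = j
        return u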


Figure 1. Photograph (bottom) and pixellation patterns (top and middle) of deformable mirrors (DMs) fabricated at the US Army Research Laboratory (ARL). Each DM has a diameter of 33mm, and either 31 or 63 channels.

Figure 2. The adaptive optics imaging laboratory setup at the ARL. Red and pink lines indicate optical pathways. PM: Primary mirror.

We have developed two different versions of our latest hardware acceleration system, one at the ARL and one at the University of Delaware.3 For these latest iterations of our system, we have adopted a ‘black-box’ approach: the LRF algorithm is implemented on a separate FPGA, and the experimental platform that contains the FPGA interacts with an independent camera. Our experimental black-box setup is composed of a Xilinx VC709 connectivity kit with a Virtex-7 FPGA. We also use a Toyon Boccaccio FPGA Mezzanine Card to connect our system with any camera that uses a Camera Link interface. The Virtex-7 FPGA on the VC709 has 52,920kb of block RAM. The VC709 is also equipped with two 4GB double-data-rate (DDR3) small-outline dual in-line memory modules rated at 1,866 mega-transfers per second. We conducted our experiments by connecting the LRF system to a Basler CA 340km high-speed camera, with 512 × 512 resolution, at 100 frames per second (maximum 444 frames per second). Our data was delivered from the LRF platform to an Imperx FrameLink Express frame grabber on a PC.
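A rough budget, assuming 8-bit monochrome pixels and three on-chip full-frame buffers (incoming frame, fused frame, and sharpness map), shows that a single 512 × 512 frame sits comfortably in block RAM, while buffering many synthetic frames is what motivates the DDR3 interface. The Python arithmetic below is only this back-of-the-envelope check.

    # Back-of-the-envelope memory and throughput check for the black-box platform,
    # assuming 8-bit monochrome pixels and three full-frame on-chip buffers.
    BLOCK_RAM_KB = 52_920                  # kilobits of block RAM on the Virtex-7
    WIDTH = HEIGHT = 512
    BITS_PER_PIXEL = 8
    FRAME_RATE = 100                       # frames per second

    frame_kb = WIDTH * HEIGHT * BITS_PER_PIXEL / 1024     # 2,048kb per frame
    buffers_kb = 3 * frame_kb                             # ~6,144kb, well under 52,920kb
    pixel_rate = WIDTH * HEIGHT * FRAME_RATE              # ~26.2Mpixel/s over Camera Link

    print(f"on-chip buffers: {buffers_kb:.0f}kb of {BLOCK_RAM_KB}kb "
          f"({100 * buffers_kb / BLOCK_RAM_KB:.1f}%)")
    print(f"pixel rate: {pixel_rate / 1e6:.1f} Mpixel/s at {FRAME_RATE} frames per second")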

We performed our FPGA hardware-accelerated LRF-processing system experiments on images affected by artificial atmospheric turbulence in the laboratory, and on real video data collected through turbulence at a distance of 2.3km. Our real-time atmospheric image and video data was collected using the laboratory periscope imaging system at various times throughout the day. We acquired an Engineering Design Team Camera Link Simulator and installed it on a PC. We operated this simulator as a ‘reverse frame grabber,’ which enabled the recorded turbulence video to be passed through the VC709 LRF platform as if it were data from a Camera Link camera (see Figure 3).


Figure 3. Real-time (left) and lucky-region fusion (LRF)-processed (right) video images that were obtained through atmospheric turbulence. These images were fed through the Camera Link simulator using the second-generation LRF system black box. The frame rate for the LRF-processed video reached 100 frames per second, which is the same rate as for the live video.

We have developed a new hybrid imaging and processing system that incorporates AO and an FPGA-accelerated LRF algorithm. Our experiments have demonstrated that this approach is an effective and advanced technique for future real-time atmospheric imaging applications. Our use of high-performance DMs and a fast SPGD controller allows wavefront aberrations to be compensated and image quality to be restored. We now plan to further develop our AO system by producing high-resolution DMs. We will also conduct additional work to complete the DDR3 interface with the existing LRF system. This will create a reliable frame buffer that can store an adjustable number of synthetic frames simultaneously. In the future, we would also like to improve the system so that it can be operated with a color camera.

We thank members of the partnering research groups at the University of Delaware and the University of Maryland for providing technical support.


Jony Liu, Gary W. Carhart, Leonid A. Beresnev, John McElhenny
US Army Research Laboratory
Adelphi, MD
Christopher Jackson, Garrett Ejzak, Tyler Browning, Furkan Cayci, Fouad Kiamilev
Department of Electrical and Computer Engineering
University of Delaware
Newark, DE

References:
1. G. W. Carhart, M. A. Vorontsov, Synthetic imaging: nonadaptive anisoplanatic image correction in atmospheric turbulence, Opt. Lett. 23, p. 745-747, 1998.
2. M. Aubailly, M. A. Vorontsov, G. W. Carhart, M. T. Valley, Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach, Proc. SPIE 7463, p. 74630C, 2009. doi:10.1117/12.828332
3. C. R. Jackson, G. A. Ejzak, M. Aubailly, G. W. Carhart, J. J. Liu, F. Kiamilev, Hardware acceleration of lucky-region fusion (LRF) algorithm for imaging, Proc. SPIE 9070, p. 90703C, 2014. doi:10.1117/12.2053898