A next-generation system enables persistent surveillance of wide areas

DARPA's planned ubiquitous imaging system is designed to cover a 60-degree field of view at high resolution and at video rates greater than 12Hz.
09 April 2008
Brian Leininger

Finding, tracking, and monitoring events and activities of interest on a continuous basis are critically important for intelligence, surveillance, and reconnaissance (ISR). In particular, we want to monitor large areas (tens of square kilometers) with sufficiently high resolution to distinguish and track dismounts (persons) and vehicles. This is a challenge because imagery for a square kilometer at 15cm ground sample distance (i.e., the size of the pixels on the ground) requires more than 44 million pixels. We also want to monitor these areas on a persistent basis using unmanned autonomous systems (UASs).
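To make the scale concrete, the following back-of-envelope calculation (a minimal Python sketch using only the figures quoted above) shows why wide-area coverage at this resolution quickly reaches gigapixel counts.

```python
# Back-of-envelope pixel count for wide-area imaging at a given
# ground sample distance (GSD), using the figures from the text.

GSD_M = 0.15     # ground sample distance: 15cm per pixel
AREA_KM2 = 40    # "tens of square kilometers" (illustrative value)

pixels_per_km_side = 1000.0 / GSD_M          # ~6,667 pixels per km
pixels_per_km2 = pixels_per_km_side ** 2     # ~44.4 million pixels

print(f"Pixels per square kilometer: {pixels_per_km2 / 1e6:.1f} million")
print(f"{AREA_KM2} km^2: {AREA_KM2 * pixels_per_km2 / 1e9:.2f} gigapixels")
```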

Aeronautical systems such as the Predator UAS1 have demonstrated their value for ISR in Afghanistan during Operation Enduring Freedom and in Iraq during Operation Iraqi Freedom. However, the Predator provides either high-resolution images over a narrow field of view or low-resolution images over a wide field of view. Our Autonomous Real-time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS) will improve on this paradigm by providing high-resolution images over a wide field of view. This improvement is possible because of dramatic increases in the performance of the field-programmable gate arrays (FPGAs) and small-pixel CMOS imaging sensors available today.

Our systems approach for developing the ARGUS-IS is shown in Figure 1. Each of the three main subsystems—gigapixel sensor, airborne sensor processing, and ground processing—presents unique challenges. The gigapixel sensor comprises four focal plane mosaics with 92 five-megapixel imagers in each, for a total of 368 focal plane arrays using four sets of optics. The raw pixel data from the sensor is transmitted to the airborne sensor processing subsystem via a set of 16 twelve-fiber fiber-optic ribbon cables. This subsystem is composed of 16 processing modules. Each module processes the image data from 23 of the five-megapixel focal plane arrays. The airborne sensor and processing systems will be packaged in a pod flown on an A-160 Hummingbird UAS. The ground processing subsystem provides the interface for the user and determines what will be downlinked in real time to the ground.
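The sketch below tallies the sensor's aggregate pixel count from the subsystem figures above and estimates the raw data rate that the fiber-optic ribbons and processing modules must absorb. The 12-bit pixel depth used in the data-rate estimate is an illustrative assumption, not a published specification.

```python
# Aggregate pixel count and raw data rate for the ARGUS-IS sensor,
# using the subsystem figures given in the text.

MOSAICS = 4                # focal plane mosaics, one per set of optics
IMAGERS_PER_MOSAIC = 92    # five-megapixel imagers per mosaic
PIXELS_PER_IMAGER = 5e6

FRAME_RATE_HZ = 12         # video rate from the text
BITS_PER_PIXEL = 12        # assumed raw ADC bit depth (illustrative)

total_imagers = MOSAICS * IMAGERS_PER_MOSAIC        # 368 focal plane arrays
total_pixels = total_imagers * PIXELS_PER_IMAGER    # 1.84 gigapixels

raw_gbps = total_pixels * FRAME_RATE_HZ * BITS_PER_PIXEL / 1e9
print(f"{total_imagers} imagers, {total_pixels / 1e9:.2f} gigapixels")
print(f"Raw sensor data rate: ~{raw_gbps:.0f} Gbit/s")  # ~265 Gbit/s
```

A raw rate of this magnitude is why the pixel data is split across 16 fiber-optic ribbon cables and processed onboard rather than downlinked in full.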


Figure 1. The ARGUS-IS consists of three major subsystems: gigapixel sensor, airborne sensor processing, and ground processing. The A-160 pod is carried on the Hummingbird, an unmanned autonomous system. LAN/WAN: Local or wide area network.

Upon completion, ARGUS-IS will provide video window capability (see Figure 2) and a vehicle moving target indicator. ARGUS-IS uses a common data link operating at a raw bit rate of 274Mbps.2 Downlinked imagery is specified by the ARGUS-IS ground system. Our system provides for a minimum of 65 color VGA (640×480) windows at a video rate of 12Hz. The amount of imagery that can be downlinked is determined principally by the amount of bandwidth available. The size of a video window, video rate, and level of image compression are determined by users based on their requirements. Video windows are electronically steerable, and resolution can be reduced to provide windows with a wider field of view. The moving target indicator functionality provides image chips of vehicle-sized moving objects across the entire field of view. This functionality is available regardless of the altitude of the UAS.
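As a rough illustration of the downlink budget, the sketch below estimates the compression ratio needed to fit the minimum 65 color VGA windows at 12Hz into the 274Mbps link. The 24 bits per pixel for uncompressed color and the neglect of link overhead are simplifying assumptions.

```python
# Downlink budget sketch: compression needed to fit 65 color VGA
# windows at 12Hz into a 274Mbps common data link.

WINDOWS = 65
WIDTH, HEIGHT = 640, 480
FRAME_RATE_HZ = 12
BITS_PER_PIXEL = 24        # assumed uncompressed RGB (illustrative)
LINK_MBPS = 274

uncompressed_mbps = (WINDOWS * WIDTH * HEIGHT
                     * FRAME_RATE_HZ * BITS_PER_PIXEL / 1e6)
ratio = uncompressed_mbps / LINK_MBPS

print(f"Uncompressed: {uncompressed_mbps / 1000:.2f} Gbit/s")  # ~5.75 Gbit/s
print(f"Required compression ratio: ~{ratio:.0f}:1")           # ~21:1
```

A ratio on the order of 21:1 is well within the range JPEG 2000 handles at good visual quality, which is consistent with the compression choice described below.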

The airborne sensor processing subsystem performs image preprocessing, such as intensity and uniformity corrections and demosaicking. All pixels produced by the gigapixel sensor pass through image preprocessing as they are received by the airborne processing system. The intensity and uniformity correction parameters are determined from radiometric calibration of the camera. A sensor model derived from overlapping pixels among the individual focal plane arrays is employed for seamless image mosaicking, illumination equalization across the entire field of view, and stabilization of the virtual video windows. The airborne system performs all processing associated with video window formation and all preparation of imagery for downlink, including compression. We use JPEG 2000 for image compression, which provides extensive flexibility in image frame rates and compression ratios.
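To illustrate the kind of intensity and uniformity correction described above, here is a minimal flat-field correction sketch in Python/NumPy. The two-point (dark frame plus flat frame) calibration model and the frame geometry are illustrative assumptions, not the actual ARGUS-IS calibration procedure.

```python
import numpy as np

def flat_field_correct(raw: np.ndarray,
                       dark: np.ndarray,
                       flat: np.ndarray) -> np.ndarray:
    """Apply per-pixel offset and gain corrections to one raw frame."""
    response = np.clip(flat - dark, 1e-6, None)  # per-pixel responsivity
    gain = response.mean() / response            # normalize to unit mean
    return (raw - dark) * gain                   # offset- and gain-corrected

# Synthetic example for a single ~5-megapixel imager (assumed geometry):
rng = np.random.default_rng(0)
shape = (1944, 2592)
dark = rng.normal(100.0, 2.0, shape)             # fixed-pattern offset
flat = dark + rng.normal(3000.0, 150.0, shape)   # uneven pixel response
raw = dark + 0.5 * (flat - dark)                 # flat scene at half scale
corrected = flat_field_correct(raw, dark, flat)
print(corrected.std() / corrected.mean())        # ~0 after correction
```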


Figure 2. The ARGUS-IS will image a wide field of view at video rates greater than 12Hz. The system will downlink a large number of distinct windows into the field of view.

The ground processing subsystem enables users to interact with the ARGUS-IS airborne systems. The user interface, based on NASA World Wind3 software, facilitates specification of areas where imagery is desired throughout the entire ARGUS-IS field of view. In addition, users can request a video window for a particular target of interest or additional video windows for following targets in a given region. The ground processing subsystem generates a request for these services to the airborne sensor processing subsystem, which responds in turn by creating the requested video windows or by keeping a specified target within a video window.
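A hypothetical sketch of what such a video window request might look like as a data structure follows. The field names and message layout are illustrative assumptions, not the actual ARGUS-IS ground-to-air interface.

```python
from dataclasses import dataclass

@dataclass
class VideoWindowRequest:
    """Illustrative request from the ground subsystem to the airborne
    processor; all fields here are assumptions for illustration."""
    window_id: int
    center_lat: float                 # degrees, from the map display
    center_lon: float
    width_px: int = 640               # VGA window by default
    height_px: int = 480
    frame_rate_hz: float = 12.0
    compression_ratio: float = 20.0   # JPEG 2000 target
    track_target: bool = False        # keep a moving target centered

# Example: request a tracking window on a target of interest.
request = VideoWindowRequest(window_id=1,
                             center_lat=38.8977,
                             center_lon=-77.0365,
                             track_target=True)
```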

In sum, ARGUS-IS will improve ISR by providing an unprecedented level of persistent video surveillance. This next-generation system will help find small events in large areas in enough time to respond.

We would like to thank BAE Systems, ObjectVideo, and the Air Force Research Laboratory Sensors Directorate for their contributions to the ARGUS-IS program. This article is approved for public release, distribution unlimited.


Brian Leininger
Defense Advanced Research Projects Agency (DARPA)
Arlington, VA
