High-quality situational awareness is critical to the continued existence of a medium-weight force that cannot depend on extensive armor for survival. Current reconnaissance, surveillance, and target-acquisition (RSTA) capabilities are not sufficient to cover intelligence gaps or provide the beyond-line-of-sight (BLOS) targeting and ambush avoidance that are necessary for combat forces operating in complex terrain and urban areas near enemy forces. Networked Sensors for the Future Force (NSfFF) is a U.S. Army program designed to develop and demonstrate a new generation of networked, low-cost, distributed unmanned sensor systems. The goal of the program is to extend the eyes and ears of the RSTA element to provide BLOS situational understanding and targeting information.
The NSfFF project is developing, integrating, and demonstrating sensor suites for three types of unmanned platforms: unmanned ground vehicles (UGVs; see figure 1), small unmanned aerial vehicles (SUAVs), and unattended ground sensors (UGS). The battlefield sensors under development for these platforms include uncooled-IR and gated short-wave IR (SWIR) cameras, eye-safe flash lasers, and acoustic and seismic technologies augmented by sensor cueing and target-prediction algorithms. These sensor systems will be networked with a secure, stealthy, low-power communication system. The communication system will be self-forming and self-healing, with anti-jam capability, a low probability of detection and interception, unique waveforms, and the ability to rapidly interface with the tactical Internet and tactical command-and-control (C2) systems.
Fig. 1 Mobile platforms can include (insets, from top) uncooled forward-looking IR (FLIR) systems, SWIR systems, aided target recognition (ATR) algorithms, and display panoramas with region-of-interest (ROI) and target image segmentation capabilities.
The NSfFF project will also develop a suite of C2 software tools for sensor planning, placement, and control, and it will demonstrate a system-of-systems capability by fusing information from these various unmanned sensor systems. Data fusion and smart sensor management will enable the information collected by these sensors to be processed, communicated, and presented to human decision makers in real time, thereby replicating human "eyes on target" without exposing an individual to the hazards of a given situation or limiting coverage to his or her location.
The SUAV sensor package will provide day/night, all-weather, tactical imagery of non-line-of-sight areas (see figure 2). Sensor payloads include either an electro-optic IR imager or a high-resolution CCD camera. IR cameras are 640 × 480- or 320 × 240-pixel vanadium oxide uncooled arrays. While the SUAV has autonomous flight capability, target search and detection are conducted by a man in the loop because of platform payload limitations. The sensor package will transmit full-resolution imagery to the ground station, where the operator uses a touch screen for rapid target image segmentation. Target images are geo-registered, and target-location data are generated for each image. The SUAV operates through a C2 system used for mission planning and supports dynamic target retasking while in flight.
Fig. 2 SUAVs fitted with sensors (inset, lower right) can capture reconnaissance imagery and targeting data (inset, upper right).
A dual-color countermine sensor currently under developmental testing uses a low-cost approach to significantly improve the search for and detection of buried mines in roadways. The passive long-wave IR (LWIR) system scans the road from an altitude of 100 ft, looking for thermal signatures that may have been generated by soil disturbances due to mine emplacements. Any areas that show disturbances are classified as possible targets for further investigation. Detection algorithms run in near real time on a ground station in a vehicle following the SUAV.
The LWIR bands from this system are processed to take advantage of a spectral anomaly associated with disturbed soil. A near-IR imager boresighted in the mine-detection system improves the clutter rejection of the detection algorithm by enabling discrimination between vegetation and the normal road surface. The algorithms in the ground station process all of the spectral bands together in order to identify areas of interest. Although we are developing this technology as part of the NSfFF project, it will not fly on an SUAV during the life of the project; for testing, we will mount it on a manned aircraft.
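Processing multiple spectral bands together to flag anomalous areas can be illustrated with a simple per-pixel anomaly detector. The sketch below is hypothetical — the actual NSfFF detection algorithms are not described here — and assumes co-registered band images; it scores each pixel by its Mahalanobis distance from the scene's background statistics (a basic RX-style detector) and uses the near-IR channel only as a crude vegetation mask:

```python
import numpy as np

def disturbed_soil_candidates(bands, nir, veg_thresh=0.6, anomaly_sigma=3.0):
    """Flag possible soil-disturbance pixels in co-registered spectral imagery.

    bands : (n_bands, H, W) float array of LWIR radiance images
    nir   : (H, W) float array from the boresighted near-IR imager,
            used here only to reject vegetation clutter
    Returns a boolean (H, W) mask of candidate areas of interest.
    """
    n, h, w = bands.shape
    x = bands.reshape(n, -1)                       # pixels as n-band vectors
    mu = x.mean(axis=1, keepdims=True)             # background mean per band
    cov_inv = np.linalg.pinv(np.atleast_2d(np.cov(x)))
    d = x - mu
    # RX anomaly score: Mahalanobis distance of each pixel from background
    score = np.einsum('ip,ij,jp->p', d, cov_inv, d).reshape(h, w)
    veg_mask = nir > veg_thresh * nir.max()        # bright in NIR ~ vegetation
    thresh = score.mean() + anomaly_sigma * score.std()
    return (score > thresh) & ~veg_mask
```

In a fielded system the background statistics would be estimated adaptively along the road rather than from the whole frame, but the principle — jointly exploiting all bands so that disturbed soil stands out as a spectral outlier — is the same.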
On the Ground
The UGS effort is concentrated on developing key UGS component technologies for integration and demonstration under the NSfFF program. One initial program goal was to develop a more affordable UGS node to allow the creation of large UGS clusters. In an effort to reduce cost, we sought to relax some of the technical requirements, but some critical challenges still remained. Maintaining network connectivity over a large UGS cluster requires ad-hoc communications networks, but funneling traffic through nodes near the gateway, or through other critical sites within the UGS field, concentrates power consumption at those nodes.
To overcome this problem, we developed distributed ground-based sensing and data fusion capabilities to reduce network loading and improve reliability with respect to a comparable centralized or gateway-based UGS approach. This capability requires fusing acoustic line-of-bearing data from multiple nodes into target-track reports at the node level. Combining data from several nodes establishes a track; only then is the track data sent to the gateway for transmission over the long-haul communications link. This approach eliminates the need for all line-of-bearing reports to be sent to the gateway for fusing. The capability to produce target-track reports at the node level reduces overall network traffic of the UGS cluster and, combined with ad-hoc communications, offers the capability to create larger UGS clusters.
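The core geometric step in fusing acoustic line-of-bearing reports can be sketched as a least-squares intersection of bearing lines. This is an illustrative simplification — the NSfFF tracking algorithms themselves are not published — assuming each node knows its own (east, north) position and reports a bearing in degrees clockwise from north:

```python
import math
import numpy as np

def fuse_bearings(nodes, bearings_deg):
    """Estimate a target position from multiple line-of-bearing reports.

    nodes        : list of (east, north) node positions, meters
    bearings_deg : bearing from each node to the target,
                   degrees clockwise from north
    Returns the least-squares intersection of the bearing lines,
    i.e. a single fused (east, north) position fix.
    """
    A, b = [], []
    for (e, n), brg in zip(nodes, bearings_deg):
        t = math.radians(brg)
        # Unit normal to the bearing line; the target satisfies
        # normal . target == normal . node for each report.
        nx, ny = math.cos(t), -math.sin(t)
        A.append([nx, ny])
        b.append(nx * e + ny * n)
    pos, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return pos
```

A real tracker would weight each bearing by its estimated accuracy and filter fixes over time into tracks, but even this bare version shows why node-level fusion pays off: one fused position report replaces many raw line-of-bearing messages on the network.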
The cost-effective targeting systems (CETS) project integrates low-cost technologies such as uncooled IR focal plane arrays, SWIR arrays, and laser-flash illuminators into an affordable, highly capable targeting system intended as an RSTA mission package for manned platforms or UGVs. The system will consist of a 640 × 480-pixel, electron-bombarded complementary metal-oxide semiconductor (CMOS) SWIR chip and a 640 × 480-pixel, low-power uncooled IR (LPUIR) camera. The LPUIR camera is designed for target detection, while the SWIR chip is designed for high-resolution target identification at detection ranges by means of range-gated flash-laser illumination. The goal is to produce two fully operational systems, one of which will be integrated and demonstrated on a UGV platform. The system is capable of both autonomous and manual RSTA modes of operation.
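The timing behind range gating is simple to sketch: the camera opens its gate only after the laser pulse's round-trip time to the target range, so returns from intervening obscurants never reach the sensor. The numbers below are back-of-the-envelope illustrations, not published CETS parameters:

```python
C = 299_792_458.0  # speed of light, m/s

def gate_timing(range_m, depth_m):
    """Gate delay and gate width for range-gated flash-laser imaging.

    range_m : distance to the near edge of the range slice, meters
    depth_m : depth of the slice to be imaged, meters
    Returns (gate_delay_s, gate_width_s): the gate opens one round-trip
    time after the laser fires and stays open for the round-trip time
    across the slice.
    """
    delay_s = 2.0 * range_m / C   # round trip to the target range
    width_s = 2.0 * depth_m / C   # round trip across the range slice
    return delay_s, width_s
```

For example, imaging a 30-m-deep slice around a target 1 km away requires a gate delay of roughly 6.7 µs and a gate width of roughly 200 ns — which is why the approach demands nanosecond-class laser pulses and camera gating.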
Fig. 3 Massively deployed UGSs use ad-hoc networking and distributed fusion to return battlefield intelligence.
The first demonstration was conducted in July 2004 on a U.S. Army test range. Troops operated the reconnaissance and surveillance vehicle (RSV) and technical personnel deployed and operated the sensor systems. Since this demonstration occurred during the development cycle, surrogate sensors were used in place of some items that were not yet field-ready. These included CETS and massively deployable UGSs (see figure 3). The overall objective of the testing was to demonstrate connectivity between the various sensors and the sensor management tools in the RSV, prior to sensor performance testing. Overall, the individual sensors performed well.
Recent testing has concentrated on evaluating the technical performance of each of the individual sensors. We conducted technical testing at a U.S. Army proving ground, pitting the objective sensor systems against the vehicle targets. The sessions produced sufficient data to perform a statistical analysis for documentation in a final report. That analysis is ongoing.
The program is currently participating in an operational evaluation at a U.S. Army base. Troops have been trained in RSV operation and on the capabilities of the sensor systems. U.S. Army personnel will assess the technical performance of the individual systems and evaluate the operational value of both the individual and combined system of systems. The operational exercise will include an integrated live/virtual environment that places the NSfFF systems in a realistic operational context.
As described above, the program is meant to demonstrate that distributed unmanned systems and networked communications can enable small unit elements to conduct BLOS reconnaissance and targeting. In addition to unmanned sensors, other key technological developments needed to support this capability include ad-hoc networked communications and battle command applications used in sensor planning and placement, mission execution and monitoring, and data fusion to reduce operator workload. Maturation of these technologies is ongoing; however, most of that work will take place under separate programs.
In the case of CETS, continued optimization of the SWIR image-acquisition process is needed to produce consistent, high-quality imagery under all environmental and target-background conditions. Distributed fusion for UGSs must be improved to optimize tracking performance and node-cluster configurations. Wireless networked communications must be optimized to enhance peer-to-peer networking ability and improve power efficiency. Finally, sensor planning and placement must be integrated more tightly into future battle command applications.
The NSfFF program concludes at the end of 2005, culminating in a final technical report detailing results of the technical and operational tests. Results to date show much promise in meeting technical and operational goals and demonstrating the potential value of networking sensors on the battlefield. oe
Gene Klager is chief (A) of the Networked Sensors Applications Branch, Air and Netted Sensors Division at the U.S. Army CERDEC Night Vision and Electronic Sensors Directorate, Ft. Belvoir, VA.