Defense & Security

Multi-agent modeling and analysis for space situation awareness

Innovative algorithms and software tools developed within a game-theoretic framework aid automatic decision making and enhance the effectiveness and robustness of space surveillance.
10 September 2009, SPIE Newsroom. DOI: 10.1117/2.1200909.1739

Over the past decade, the space environment has become more complex because of a significant increase in space debris and a greater density of spacecraft. This poses difficulties for efficient and reliable space operations. In this context, ‘predictive situation awareness’ is important if a commander is to respond to conflicts adequately and in a timely manner. To obtain the common operating picture (COP: the relevant operational information shared by more than one command, enabling collaborative planning) and to facilitate the transition from the existing, distributed space-based target observers to a layered-sensing hierarchical architecture, the space community needs a real-time, decentralized decision-aiding system equipped with data-mining and information-fusion capabilities. Such a ‘space situational awareness’ (SSA) collaborative tool has to meet a number of key challenges:
  • decision-making options based on fully decentralized target detection and tracking results;
  • high-level data fusion performed on demand to predict the intent (adversary strategies and tactics) of threats, compensating for noncooperative strategies and asymmetrical information;
  • sensor data processing and management for optimal tasking and scheduling under real-time requirements;
  • optimal response through a series of executable actions, with associated resources, for the decision maker; and
  • a translation capability for converting meta-tasks into collections of executable tasks.

Figure 1. Game-theoretic collaborative situation awareness. GMAT: General mission analysis tool. COA: Course of action. HEA: Hierarchical entity aggregation. EO(SBV): Earth observation (space-based visible).

We have developed—and successfully demonstrated—a collection of innovative algorithms and software tools built upon space-based target tracking within a game-theoretic framework. Our main aim was to automate decision aiding in a collaborative environment and to enhance the effectiveness and robustness of space surveillance. Our SSA tool provides a COP and predictive battlespace awareness as part of a real-time operation module that is useful in complex situations. The framework is shown schematically in Figure 1, which illustrates the major system modules. We will discuss each of these in turn from the bottom up.

The data-mining module extracts relevant information from numerous data sets, discovers new patterns and/or features, and evaluates and generalizes new patterns. It performs data extraction and association, fuses measurements, signals, and information from heterogeneous sensors to form a consistent set of information, compares relevant data with existing patterns and/or features, and determines whether the observational data can be explained by existing patterns or features.
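The core decision in this module, whether an observation is explained by an existing pattern or constitutes a new one, can be sketched in a few lines. This is a minimal illustration only; the feature vectors, distance metric, and threshold below are hypothetical stand-ins for the system's actual data-mining algorithms.

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def explain_or_discover(observation, patterns, threshold=1.0):
    """Return the name of an existing pattern that explains the
    observation, or register the observation as a new pattern."""
    for name, prototype in patterns.items():
        if euclidean(observation, prototype) <= threshold:
            return name                       # explained by existing pattern
    new_name = f"pattern-{len(patterns) + 1}"
    patterns[new_name] = observation          # discover and store a new pattern
    return new_name
```

An observation close to a stored prototype is attributed to that pattern; anything beyond the threshold is generalized into a new pattern, mirroring the discover/evaluate/generalize cycle described above.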

The level 1 data-fusion module applies signal-processing techniques to integrate relevant space-time data and builds up objects for targets or events. Subsequently, the level 2/3 data-fusion component (situation and impact refinement) identifies the objects' organization, draws inferences about object intents, evaluates situations, generates the objects' structure and/or organization, estimates relationships, and predicts the capabilities and/or intents of the objects. Level 3 data fusion predicts likely future courses of action (COAs), evaluates them, and advises possible reactive strategies. Such COAs may be associated with friendly forces, enemies, or the civilian environment (i.e., a civilian satellite or vehicle, space debris, meteorites, and so on). The level 4 data-fusion stage (process refinement) then optimizes the collaborative sensing and analysis procedures, routing, and scheduling. It is responsible for simultaneous search, classification, tracking, and prediction, and it determines the routing and scheduling of fixed and/or moving platforms. Level 4 data fusion also evaluates the performance and cost of the entire operational pathway, which is fed back to the system and used to refine performance at subsequent time steps.
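As a rough illustration, the level 1 through level 4 chain can be viewed as a pipeline of stages, each consuming the previous level's output. The sketch below uses invented placeholder data structures and decision rules; it shows only the flow between fusion levels, not the authors' implementation.

```python
def level1_object_refinement(measurements):
    """Level 1: integrate space-time measurements into track objects."""
    return [{"id": i, "meas": m} for i, m in enumerate(measurements)]

def level23_situation_impact(objects):
    """Level 2/3: estimate relationships and infer intent (toy rule)."""
    for obj in objects:
        obj["intent"] = "benign" if obj["meas"] < 0.5 else "suspect"
    return objects

def level4_process_refinement(objects):
    """Level 4: evaluate the pathway and feed back a retasking decision."""
    suspects = sum(1 for o in objects if o["intent"] == "suspect")
    return {"objects": objects, "retask_sensors": suspects > 0}

def fuse(measurements):
    """Run the full level 1 -> 2/3 -> 4 chain."""
    return level4_process_refinement(
        level23_situation_impact(level1_object_refinement(measurements)))
```

The feedback flag from level 4 is what would drive sensor retasking and scheduling in the next time step.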

Next, the meta-task decomposition module decomposes commands at different levels (meta-capability, meta-function, or meta-task, or directly the executable-task level), issued by level 5 human commanders or by level 4 sensor management and data fusion, into tasks that the objects can execute directly. In other words, it transforms commands and orders into executable tasks. Specific sensor-action control units are implemented directly on the heterogeneous sensors, which are evaluated by the level 2/3 module. The task-decomposition module can accept commands or action inputs at any level, from the highest meta-capability down to atomic executable tasks, for sensing and/or analysis operations in later steps.
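The decomposition from meta-level commands down to executable tasks is naturally recursive: a command expands into sub-commands until only directly executable tasks remain. A minimal sketch, with an invented command hierarchy standing in for the real meta-capability/meta-function/meta-task levels:

```python
# Hypothetical command hierarchy: keys are meta-level commands, values are
# their sub-commands; anything absent from the table is directly executable.
DECOMPOSITION = {
    "surveil-region": ["track-object", "classify-object"],  # meta-capability
    "track-object": ["point-sensor", "run-filter"],         # meta-task
}

def decompose(command):
    """Expand a command into the flat, ordered list of executable tasks."""
    subtasks = DECOMPOSITION.get(command)
    if subtasks is None:          # already at the atomic-executable level
        return [command]
    tasks = []
    for sub in subtasks:
        tasks.extend(decompose(sub))
    return tasks
```

An executable task passed in at the lowest level is returned unchanged, matching the module's ability to accept inputs at any level of the hierarchy.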

The knowledge base fuses offline knowledge (from initialization), learns online knowledge (from human operators and processes), and supports the operational process. It stores available data (for example, numbers, identities, classes, kinematics, features, capabilities, or intents of targets) and knowledge (such as prior information, online experience, and lessons learned). The module accepts knowledge inputs from the operator and learns knowledge from the system modules through data mining. It also supports the operation of other modules for level 5 fusion.1

Finally, the general mission analysis tool (GMAT) module provides a visual user interface for human operators. In addition, the GMAT presentation provides performance analysis of SSA metrics. The user interface is implemented within GMAT, an analysis and optimization platform developed by NASA.2 Users can select which display to access. System states such as kinematics, identity, and/or status can be illustrated by displaying object images (or icons) on screen. Relevant process information, including important statistical data, newly learned, discarded, or updated knowledge, resource readiness, and sensing or maneuvering commands, is reported to the operator through GMAT.

A typical multisensor multi-object tracking scenario is shown in the bottom right-hand corner of Figure 2. We selected four satellites as observers (Ariane 44L, OPS 856, Vanguard 1, and Echo 1) and two geostationary-orbit satellites as objects (Echostar 10 and Cosmos 2350). We used GMAT as the system controller as well as the visualization tool in our simulation study. Discussions on filter design can be found in our companion papers.3,4 The GMAT visualization of the tracking results is shown in Figure 2.
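To give a concrete flavor of the sensor-assignment problem in this scenario, the toy sketch below enumerates every observer-to-target assignment and keeps the one with the highest total utility. The utility scores are invented numbers, and brute-force enumeration stands in for the game-theoretic scheduling the system actually performs; it is tractable here only because the scenario is small.

```python
from itertools import product

observers = ["Ariane 44L", "OPS 856", "Vanguard 1", "Echo 1"]
targets = ["Echostar 10", "Cosmos 2350"]

# utility[observer][target]: hypothetical visibility/geometry scores
utility = {
    "Ariane 44L": {"Echostar 10": 0.9, "Cosmos 2350": 0.4},
    "OPS 856":    {"Echostar 10": 0.3, "Cosmos 2350": 0.8},
    "Vanguard 1": {"Echostar 10": 0.6, "Cosmos 2350": 0.5},
    "Echo 1":     {"Echostar 10": 0.2, "Cosmos 2350": 0.7},
}

def best_assignment():
    """Enumerate all observer->target assignments and return the
    (assignment, score) pair maximizing total utility."""
    best, best_score = None, float("-inf")
    for choice in product(targets, repeat=len(observers)):
        score = sum(utility[o][t] for o, t in zip(observers, choice))
        if score > best_score:
            best, best_score = dict(zip(observers, choice)), score
    return best, best_score
```

A real scheduler would add coupling constraints (each target needs coverage, sensors have slew and timing limits) and anticipate an adversary's responses, which is where the game-theoretic formulation enters.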

Figure 2. Snapshot of sensor-assignment results for a multi-object tracking scenario. The left-hand, middle, and right-hand panels show the file inputs, track results, and the game-theoretic scheduling of assets (top) and satellite location (bottom), respectively.

We have implemented game-theoretic data fusion and a decision-aiding framework for collaborative-system SSA. We performed extensive simulations on low-earth-orbit (LEO) to LEO and LEO to geostationary-orbit multisensor multi-target tracking. The automated decision-aid tool can enhance situation awareness in hostile and uncertain environments, significantly compress decision timelines, reduce manpower requirements, and increase mission effectiveness to achieve a common understanding of the readiness of space resources in real time. Our future directions include development of novel fusion-performance metrics, implementation of distributed-control theoretical planning, and experimentation with possible threat-event predictions for operator scenario assessment.

Genshe Chen
DCM Research Resources LLC
Germantown, MD

Genshe Chen is chief technology officer. He provides strategic direction for government services and commercial solutions, and leads the company's research and development activities. He has contributed to aspects of automatic target recognition, target tracking, information fusion, cooperative control, sensor and network management, and game theory.

Erik P. Blasch
US Air Force Research Laboratory (AFRL)
Wright Patterson Air Force Base, OH

Erik Blasch is a fusion evaluation program manager at the AFRL, professor of electrical engineering at Wright State University (where he teaches classes in fusion, communications theory, and stochastic processes), and a reserve officer with the AFRL/Air Force Office of Scientific Research (AFOSR). His basic research interests include fusion, tracking, and automatic target recognition. He is a SPIE Fellow. He has been a board member of the International Society of Information Fusion (ISIF), as well as its treasurer, 2007 president, and website manager. In addition, he has given fusion tutorials, led the sponsors program, coordinated board elections, drafted an initial set of bylaws, and served on the ISIF editorial board.

Huimin Chen
University of New Orleans
New Orleans, LA

Huimin Chen received BE and ME degrees in electrical engineering from the Department of Automation of Tsinghua University in Beijing, China, in 1996 and 1998, respectively, and a PhD in electrical engineering from the Department of Electrical and Computer Engineering at the University of Connecticut in 2002. He has been a postdoctoral research associate in the Physics and Astronomy Department of the University of California at Los Angeles, and a visiting researcher with the Department of Electrical and Computer Engineering of Carnegie Mellon University from July 2002, where his research focus was on weak-signal detection for single-electron-spin microscopy. He joined the Department of Electrical Engineering of the University of New Orleans in January 2003 as an assistant professor. His research interests are in the general areas of signal processing, estimation theory, and information theory with applications to target detection and tracking.

Khanh Pham
Kirtland Air Force Base, NM

Khanh Pham is an aerospace engineer with the AFRL/Space Vehicles Directorate and technical advisor to the Decision Support Systems group. He earned BS and MS degrees in electrical engineering from the University of Nebraska (Lincoln) and a PhD from the Electrical Engineering Department at the University of Notre Dame, based on research focusing on control optimization and structural vibration suppression. He was the recipient of the Best Paper Presentation awards of the American Control Conference in 2000 and 2004. In 2008, he was the recipient of the 2008 Air Force Engineer of the Year award for his incorporation of statistical-control game theories into control systems and astrodynamic models to show how satellites can theoretically prevent hostile spacecraft from performing proximity operations. His recent research interests in optimal statistical control and estimation, uncertainty analysis, multilevel decision making, and game and team theory are being developed at the Decision Support Systems group in the Space Vehicles Directorate of the AFRL. He is also principal investigator of a number of AFOSR projects. He is a member of IEEE, SPIE, the Society for Industrial and Applied Mathematics, and Sigma Xi, as well as a senior member of the American Institute of Aeronautics and Astronautics.