Biologically inspired computation for intelligent autonomous exploration

Processing massive and complex data with brainlike neural computation is critical to achieving onboard decision-making systems for robotic missions on Earth and in space.
30 April 2008, SPIE Newsroom. DOI: 10.1117/2.1200804.1114

Autonomous operation of unmanned spacecraft, aircraft, and other robotic vehicles has long been a goal for the exploration of faraway, hostile frontiers of space and hard-to-access or dangerous environments on Earth. Robotic capabilities already exist that allow hazard avoidance by smart navigation systems using fast, fault-tolerant, and reliable onboard computing devices that can withstand harsh environments.1,2

However, no system yet exists with a sophisticated enough understanding of scientific data to support trustworthy autonomous decision making based on information learned in situ. For example, NASA's Mars Exploration Rovers have excellent hazard avoidance based on perceived terrain properties, but they have no onboard understanding of scientific data that would let them recognize scientifically interesting surface features. The rovers thus cannot autonomously decide to examine promising science opportunities; instead, following preprogrammed navigation commands, they pass them by.

Autonomous robotic operations depend on the information extracted from data collected on board and made available for decision making. For scientific (as well as surveillance and other) applications, this often means extracting relevant information, in sufficient detail, from a mass of complicated high-dimensional (multivariate) data. A prime example in space and Earth applications is the analysis of hyperspectral imagery, which employs far more than the standard three to eight channels and is acquired in many missions for the wealth of information it contains. Detailed analysis of this and other similarly complex data, however, has proved difficult with conventional approaches.

Figure 1. A simple simulated six-band image with five classes for concept demonstration. Each pixel is a 6D stack vector (a ‘spectrum’). The color blocks at left show how the spectral types, plotted right, are distributed in the image. Class U contains only one pixel. The spectra are offset for clarity.

Figure 2. (top) The weight vectors in a 10×10 self-organizing map (SOM) after learning the data set described in Figure 1. (bottom) The same SOM, with the vector distances between neighbor weights shown as ‘fences’ on a black-to-white gray scale. The fences delineate groups of prototypes, each of which collectively represents one of the classes in Figure 1.3

Intelligent data interpretation is a core challenge, which in turn requires complex algorithms that can be computationally expensive. In centralized environments on Earth, supercomputing clusters can be employed, but in onboard, embedded-system scenarios this is not possible. Fortunately, long before supercomputers were invented, nature engineered a solution that combines speed and intelligence: brains, which are compact, light, power-efficient, fault-tolerant, robust, adaptive, and fast. A brain effectively detects targets of predefined character, recognizes unknown surprises, and resolves relevant information, all of which contribute toward optimal decision making based on immediate environmental stimuli.

Neural computing architectures strive to mimic the intelligent information processing of brains and nervous systems through characteristics such as massive parallelism (many operations carried out simultaneously), the dense interconnectivity of many simple processing units analogous to individual neurons, and other observed properties of brains. Massive parallelism makes neural architectures well suited to compact hardware, which can then be embedded in onboard processing and decision-making systems.

Our group uses a neural architecture called a self-organizing map (SOM) to capture some of the ways that the cerebral cortex is believed to organize sensory data and derive detailed knowledge of the environment.

Discovery through self-organized learning

The neurons in SOMs4 learn to collectively represent the a priori unknown structure of a data set through simultaneous competition and collaboration among the locally acting neural units, in an unsupervised iterative procedure. This involves finding an optimal distribution of prototype vectors (the neural weights) in the data set, which is an adaptive vector quantization process, while simultaneously organizing the prototypes on a rigid lattice according to their similarity relations.

For example, consider the simple synthetic spectral image described in Figure 1. The SOM is given the spectral signatures (the 6D stack vectors at each image pixel) but not the class labels. Figure 2 reveals the knowledge acquired by a SOM after learning these signatures. At the top, the prototype vectors of the neurons are plotted in the corresponding grid cells. The prototypes have molded themselves to look like the signatures of the spectral classes and have organized themselves into five distinct regions of the lattice. These regions, color-coded at the bottom, manifest through the differences (vector distances) of adjacent prototypes, visualized as ‘fences’ on a black-to-white gray scale: black means no difference, white a large difference. The groups of prototypes that represent similar data vectors—the clusters in the data—are thus separated. The 1-pixel class U gained representation by one SOM prototype, while the other classes each occupy a roughly equal area (discounting border effects caused by the finite size of the SOM).

In noisier real data with many clusters, the representation of rare species can be suppressed in a quantization, by SOM or other means, or not resolved at all. Nature's way of ensuring that important rare signals are noticed is a ‘perceptual magnet’ effect,5 which preferentially magnifies the area of the cortex that represents the rare stimuli. Such magnification can be induced in a SOM-based system using the theory of Bauer et al.,6 without prior knowledge of the data distribution. We showed in systematic studies that this theory—formally proved only for 1D data and 2D data with uncorrelated dimensions—can be applied to broader classes of data.3
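To make the learning procedure concrete, the following is a minimal NumPy sketch of a SOM training loop and of the neighbor-distance ‘fences’ of Figure 2. All function names and parameter values here are illustrative choices for demonstration, not the implementation used in the work described.

```python
import numpy as np

def train_som(data, rows=10, cols=10, epochs=50, lr0=0.5, sigma0=None, seed=0):
    """Minimal SOM: competitive learning with a Gaussian neighborhood
    that shrinks over time (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0
    # One prototype (weight) vector per lattice node
    weights = rng.uniform(data.min(), data.max(), (rows, cols, dim))
    # Lattice coordinates, for measuring neighborhood distances
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing='ij'), axis=-1)
    t, t_max = 0, epochs * n
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            # Competition: find the best-matching unit (BMU)
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)
            # Collaboration: Gaussian neighborhood around the BMU,
            # with learning rate and radius decaying linearly
            frac = 1.0 - t / t_max
            lr, sigma = lr0 * frac, sigma0 * frac + 0.5
            dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))
            # Adaptation: pull prototypes toward the input
            weights += lr * h[..., None] * (x - weights)
            t += 1
    return weights

def fences(weights):
    """Mean distance of each prototype to its lattice neighbors:
    the black-to-white 'fences' of Figure 2."""
    rows, cols, _ = weights.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            nbrs = [(i + a, j + b)
                    for a, b in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + a < rows and 0 <= j + b < cols]
            u[i, j] = np.mean([np.linalg.norm(weights[i, j] - weights[p, q])
                               for p, q in nbrs])
    return u
```

For the image of Figure 1, the input would be the 6D pixel spectra, i.e., data of shape (number of pixels, 6); thresholding the fences then delineates the groups of prototypes that represent the five classes.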

Figure 3. The simulated rare-event ‘alert’ caused a magnification of the area in the artificial brain that represents the one-pixel class, now covered by the 10 shaded prototypes.3 The sensitivity to this class, and thus the chance of discovering it, greatly increased. The location of the same signatures differs between the two SOMs because of random starting conditions.

We have demonstrated that this approach is effective for extracting detailed information from intricate volumes of real data. Successful examples include the detection of rare mineralogy in Mars Pathfinder's multispectral imagery7 and the discovery of very small, 3–10-pixel spatial objects (such as those in Figure 4) in a hyperspectral urban image.8 Importantly, the magnification requires no prior knowledge of the data distribution, nor of whether rare clusters exist in the data at all. Hence, SOM magnification is a genuine tool of discovery.
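One crude way to induce such a magnet effect in code is to scale the learning rate by a running estimate of how often a map region has been activated, in the spirit of, but far simpler than, the magnification-control theory of Bauer et al.6 The sketch below modifies one adaptation step of the training loop shown earlier; the hit-count density estimate and the exponent m are assumptions made purely for illustration.

```python
import numpy as np

def magnified_step(weights, grid, x, lr0, sigma, hits, m=-0.3):
    """One SOM adaptation step with a density-dependent learning rate:
    rarely activated map regions receive larger updates, so rare
    stimuli claim more lattice area. A caricature of magnification
    control, not the published algorithm."""
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)
    hits[bmu] += 1
    density = hits[bmu] / hits.sum()      # rough local-density estimate
    lr = min(lr0 * density ** m, 1.0)     # negative m boosts rare inputs
    dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2.0 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
    return weights
```

Because the learning rate grows as the local density estimate shrinks, a one-pixel class like U in Figure 1 recruits more prototypes than its frequency alone would warrant, which is the effect visualized in Figure 3.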

Figure 4. A real example of discovering small, unique urban objects from a hyperspectral image of Ocean City, MD.8

Autonomous robotic discovery is one of the most promising applications of self-organized learning by neural computing systems. We also use self-organized neural learning to achieve faithful, detailed segmentation of a data space and to aid precise supervised classification of complex data sets into predefined classes. These capabilities, augmented by neural feature extraction, can be packaged together to produce systems capable of highly intelligent data understanding.7 This approach, implemented in massively parallel hardware on board autonomous vehicles, will enable unexpected discoveries as well as detection of targets with known signatures within massive and complex data sets. While fabrication of the necessary neural chips with appropriate scaling properties is still a challenge, nanotechnology is expected to provide that capability soon. This approach promises to combine the intelligence of neural computing algorithms with the speed needed for real-time exploration, decision making, and operations on board vehicles on Earth and in space.
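As a final illustration of how a trained map could support such segmentation, every pixel spectrum can be assigned the cluster label of its best-matching prototype. The function below is a hypothetical sketch; the proto_labels array is assumed to come from grouping the prototypes beforehand, for example along the fences of Figure 2.

```python
import numpy as np

def segment(cube, weights, proto_labels):
    """Label every pixel of a spectral image cube with the cluster
    label of its best-matching SOM prototype (illustrative only).
    For large images, process the pixels in chunks to limit memory."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)             # (h*w, bands)
    protos = weights.reshape(-1, bands)          # (rows*cols, bands)
    labels = np.asarray(proto_labels).reshape(-1)
    # Distance from every pixel to every prototype, then the BMU index
    d = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=-1)
    return labels[d.argmin(axis=1)].reshape(h, w)
```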

This article samples joint work with students and collaborators. Support by the Applied Information Systems Research and Mars Data Analysis Programs of NASA's Science Mission Directorate is greatly appreciated.

Erzsébet Merényi
Electrical and Computer Engineering
Rice University
Houston, TX