
Optoelectronics & Communications

Neural network with distributed nodes provides fault tolerance

A system of simple, redundant, interconnected processing nodes forms a neural net that is able to recover when one or more nodes fail.
6 March 2006, SPIE Newsroom. DOI: 10.1117/2.1200602.0015

Long space missions and hazardous operations require circuits that can suffer damage and yet recover and preserve at least partial operation. To achieve this, redundancy must be built in at several levels: there must be multiple, redundant inputs [1], redundancy within the processing circuit itself, and possibly redundant or multiple outputs. In this way, if one component or sub-circuit in the processing chain fails, other units can adjust and the circuit can continue operating. Thus, there must be multiple processors, or at least multiple ways of processing information. One possibility is simply to duplicate the processing circuit, but this doubles its complexity, size, cost, and power consumption. Making the processing circuit modular is a better approach.

An example of a modular processing system is a neural network: an interconnection of simple processing elements based on a model of biological nervous systems. The computational elements, or nodes, used in neural nets are nonlinear, typically analog, and arranged in a series of layers. Each node (except at the input) sums the weighted inputs from nodes in the previous layer and passes the result through a nonlinear function. The interconnection weights are usually determined via an iterative training process.
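The per-node computation described above can be sketched as follows. This is a minimal Python illustration, not the authors' mote firmware; the sigmoid activation and the names `node_output`, `weights`, and `threshold` are assumptions for the sketch:

```python
import math

def node_output(inputs, weights, threshold):
    """One neural-net node: a weighted sum of the inputs from the previous
    layer, offset by a threshold, passed through a nonlinear (sigmoid)
    activation function."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid nonlinearity
```

For example, `node_output([0, 0], [1.0, 1.0], 0.0)` returns 0.5, the sigmoid's midpoint; training adjusts the weights and threshold so the node's output matches the desired response.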

Most hardware implementations of neural networks have all of the nodes located on the same integrated circuit. However, a network located on a single chip is susceptible to failure caused by a single incident such as an external impact/collision, power failure, thermal overload, etc. Our approach to fault tolerance is to isolate or separate the hardware elements of the neural network. This separation provides a buffer against catastrophic failure, since the nodes are completely independent: even the power for each node is local. Thus, we built the nodes out of small, self-contained devices called motes [2] that are able to sense, process, and wirelessly transmit data. This is the first use of motes to build a neural network: normally they are used merely to sense and transmit data.

We built and trained several different neural network architectures [3]. The first had two layers (an input and an output) with two or more input nodes and one output node. We trained the two-layer neural network architecture using two embedded training algorithms. Embedded means that the training algorithm was part of the neural network structure: there was no external computer or controller to monitor the process. We compared two different algorithms of this type, a back-propagation algorithm [4] and a particle-swarm-optimization [5] algorithm, and both successfully trained the network. We then simulated a failure by turning off one (or more) input motes to test fault tolerance. The network was able to recover from multiple node failures and still perform the desired operation.

The second neural-net architecture we built had three layers (input, hidden, and output) with two input nodes, three hidden-layer nodes, and one output node (see Figure 1). We developed an embedded back-propagation training algorithm to determine the weights of the network. The algorithm was distributed among all of the hidden-layer nodes and the output node: essentially, each node calculated its own weights and threshold based on the current output value of the network. We successfully trained the network to perform an AND operation (training took approximately 10 seconds) and an XOR operation (approximately two minutes of training). After training, the response time of the network was approximately 200 ms.
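The distributed back-propagation just described can be illustrated, in centralized form, by a textbook 2-3-1 back-propagation sketch [4] trained on the AND operation. This is not the authors' node-local implementation: here one program holds all the weights, and the learning rate, epoch count, and random seed are illustrative:

```python
import math
import random

random.seed(1)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# 2-3-1 network: three hidden nodes, each with 2 input weights + a bias,
# and one output node with 3 hidden weights + a bias.
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    o = sig(sum(w_out[j] * h[j] for j in range(3)) + w_out[3])
    return h, o

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
lr = 0.5
for _ in range(5000):
    for x, t in AND:
        h, o = forward(x)
        delta_o = (t - o) * o * (1 - o)          # output-node error term
        for j in range(3):
            # Hidden-node error term, computed from the (pre-update)
            # output weight, then used to adjust that node's weights.
            delta_h = delta_o * w_out[j] * h[j] * (1 - h[j])
            w_hid[j][0] += lr * delta_h * x[0]
            w_hid[j][1] += lr * delta_h * x[1]
            w_hid[j][2] += lr * delta_h          # bias
        for j in range(3):
            w_out[j] += lr * delta_o * h[j]
        w_out[3] += lr * delta_o                 # output bias
```

In the authors' embedded version, each hidden and output node performed only its own slice of these updates, using the network's current output value received wirelessly.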

Figure 1. Motes arranged in neural-network configuration. Input nodes are on the left and arrows show (wireless) communications between nodes.

Our approach makes circuits robust with respect to component failure by using a neural network in which the individual 'neurons', or nodes, are physically separate. Commercial devices (motes) are used to implement the individual nodes. Our literature search indicates that this is the first use of motes to build a neural network, and the first time that a neural network has been built and programmed using multiple, independent processing elements. We were able to introduce failures into the network at specific nodes and have the network recover and continue to function. In the future, we hope to make the nodes small enough that they can be embedded on the side of a structure ('smart skin'), or even inside a person, and to sense and process data from there.

James Hereford and Tuze Kuyucu
Department of Physics and Engineering, Murray State University
Murray, KY
Dr. James Hereford is an assistant professor at Murray State University. He received his BS from Stanford University and his MSEE and PhD degrees from Georgia Tech.

1. J. Hereford, C. Pruitt, Robust sensor systems using evolvable hardware, 2005 NASA/DoD Conference on Evolvable Hardware, pp. 161-168, 2004.
2. D. Culler, H. Mulder, Smart sensors to network the world, Scientific American, pp. 84-91, 2004.
3. J. Hereford, T. Kuyucu, Robust neural networks using motes, 2005 NASA/DoD Conference on Evolvable Hardware, pp. 117-124, 2005.
4. R. Lippmann, An introduction to computing with neural nets, IEEE ASSP Magazine, pp. 4-22, 1987.
5. J. Kennedy, R. Eberhart, Particle swarm optimization, Proc. IEEE Int'l Conf. on Neural Networks IV, pp. 1942-1948, 1995.