Proceedings Paper
Autonomous Reconfiguration of Sensor Systems Using Neural Nets
Neural networks are ideally suited for sensing images and waveforms, processing them into intermediate levels of representation, and outputting the identification and/or characteristics of the sensed object. These networks can solve problems that conventional algorithms have not, and in several cases this new technology has already outperformed humans (e.g., sonar signal classification). A brief review of where autonomous agents may use neural networks and their learning algorithms is presented. A high-yield area is seen in the self-repair of damaged or faulted components. Architectures are proposed for implementing self-repairing sensor and identification systems aboard autonomous agents. One example is presented for a system that identifies visual objects. This system has four layers of massively connected simple parallel processors. Each connection has a weight attribute, and the collective assignment of weights in a layer determines what function the layer performs. The first layer (the input layer) is simply the pixel detector layer. The second layer has eight sublayers, each sensitive to short line segments in one of eight orientations. The third layer detects elementary combinations of these lines, such as oriented corners or curve segments. The fourth layer has one sublayer for each macroscopic object to be identified, which may be fused with a pinpoint location sensor. The crux of using reconfiguration in this type of sensor is that when one (or several) of the units or detectors becomes inoperative, neighboring detectors in that layer may be used, by reprogramming the weights connecting surviving units, to restore functionality. This strategy takes advantage of the redundancy of parallel processors present in most types of neural networks.
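The repair strategy described above can be sketched in a minimal NumPy example. This is a hypothetical illustration, not the paper's implementation: it assumes a layer containing redundant feature detectors (two units with identical incoming weights), simulates one unit failing, and restores the layer's output by reprogramming the outgoing weights so the surviving duplicate carries the dead unit's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: a hidden layer with redundant detectors.
# Units 0 and 1 share incoming weights, so they respond to the
# same oriented-line feature (the redundancy the repair exploits).
W_in = rng.normal(size=(4, 9))      # 4 detectors over a 3x3 pixel patch
W_in[1] = W_in[0]                   # unit 1 duplicates unit 0
W_out = rng.normal(size=(2, 4))     # 2 object-identification outputs

def forward(x, alive):
    # ReLU detectors; inoperative units are forced to output 0.
    h = np.maximum(W_in @ x, 0.0) * alive
    return W_out @ h

x = rng.normal(size=9)              # one input patch
alive = np.ones(4)
y_healthy = forward(x, alive)

# Fault: detector 0 becomes inoperative.
alive[0] = 0.0

# Repair: reroute detector 0's outgoing weights through its
# surviving duplicate, then zero the dead connections.
W_out[:, 1] += W_out[:, 0]
W_out[:, 0] = 0.0

y_repaired = forward(x, alive)
# Functionality is restored exactly because unit 1 computes the
# same feature that unit 0 did.
assert np.allclose(y_healthy, y_repaired)
```

In a real network the surviving detectors would only approximate the lost unit's feature, so the weight reprogramming would be done by retraining rather than by this exact reassignment.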
Alternatively, a properly functioning agent may teach the injured agent, or competitive learning may be used to repair the middle processing layers when an operative after-the-fact sensor is available for teaching the output layer.
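The agent-teaches-agent idea can also be sketched briefly. In this hypothetical example (the network sizes, learning rate, and delta-rule update are illustrative assumptions, not from the paper), a healthy agent's outputs on shared stimuli serve as training targets for re-learning the injured agent's damaged output-layer weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: both agents share the same middle-layer
# feature responses H; only the injured agent's output weights
# are damaged and must be re-learned from the teacher's outputs.
n_feat, n_out, n_samples = 8, 3, 200
W_teacher = rng.normal(size=(n_out, n_feat))   # healthy agent
W_injured = rng.normal(size=(n_out, n_feat))   # damaged weights

H = rng.normal(size=(n_samples, n_feat))       # shared stimuli features
Y_teacher = H @ W_teacher.T                    # teacher outputs = labels

# Simple delta-rule (gradient descent on squared error).
lr = 0.05
for _ in range(1000):
    err = H @ W_injured.T - Y_teacher
    W_injured -= lr * (err.T @ H) / n_samples

# The injured agent's output layer converges to the teacher's.
assert np.allclose(W_injured, W_teacher, atol=1e-3)
```

The same scheme applies when the teacher is an after-the-fact sensor rather than another agent: its readings supply the target outputs used to retrain the surviving layers.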