Proceedings Paper
A Neural Network Architecture For Evidence Combination
Neural network models have attracted the attention of many researchers in the pattern recognition domain. These models possess many interesting computational properties, including content-addressable memory, automatic generalization, and the ability to modify their processing (learn) based on their input data. They also promise extremely fast implementations if they can be realized in special-purpose hardware. Such special-purpose implementations, due to the limits of integration, imply a finite number of neurons for any one system. Under such constraints, the construction of large neural network systems implies parallelism among sub-modules. This paper presents an architecture based on fusing the outputs of several independent neural network systems in order to define a single aggregate system. The system presented here recognizes handwritten ZIP code digits taken from pieces of United States Postal Service (USPS) mail. The overall system is composed of several sub-modules, each of which could be realized in a neural network of reasonable size. Parts of the system have been shown to achieve up to 75% accuracy processing digits at a rate of about one digit per second (real time). Currently, the neural network paradigm on which the system is based is being simulated on a serial machine (a Symbolics 3600 series Lisp machine). In order to keep the total time of the system within a reasonable limit of ~100 time steps, each network module has been limited to a few (<10) time steps. Current work involves the definition of other modules whose evidence will be combined with that of the described module. The gross system architecture is designed to integrate multiple evidence sources. The overall goal is to have both neural network and symbol processing paradigms in a single system.
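The abstract describes fusing the outputs of several independent sub-modules into a single aggregate decision. A minimal sketch of one such fusion scheme is shown below; the summing rule, function names, and score vectors are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of evidence combination across independent digit
# recognizers. Each sub-module returns a length-10 confidence vector
# (one entry per digit 0-9); the aggregate system sums the vectors and
# reports the digit with the greatest combined support. The additive
# rule is an assumption for illustration only.

def combine_evidence(module_scores):
    """Sum per-digit confidence vectors from several sub-modules and
    return the digit class with the highest combined confidence."""
    combined = [0.0] * 10
    for scores in module_scores:
        for digit, score in enumerate(scores):
            combined[digit] += score
    return max(range(10), key=lambda d: combined[d])

# Two hypothetical sub-modules: both lend most support to digit 3.
m1 = [0.0, 0.1, 0.0, 0.7, 0.0, 0.1, 0.0, 0.0, 0.1, 0.0]
m2 = [0.1, 0.0, 0.1, 0.4, 0.1, 0.1, 0.1, 0.0, 0.1, 0.0]
print(combine_evidence([m1, m2]))  # prints 3
```

In a deployed version of such an architecture, each sub-module's vector could come from a separate hardware network, so the fusion step is the only point of coordination among otherwise parallel recognizers.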