Proceedings Volume 1469

Applications of Artificial Neural Networks II


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 1 August 1991
Contents: 3 Sessions, 15 Papers, 0 Presentations
Conference: Orlando '91, 1991
Volume Number: 1469

Table of Contents

  • Session 10
  • Session 11
  • Session 3
Session 10
Modeling of local neural networks of the visual cortex and applications to image processing
Ilya A. Rybak, Natalia A. Shevtsova, Lubov N. Podladchikova
A model of an iso-orientation domain in the visual cortex is developed. The iso-orientation domain is represented as a neural network with retinotopically organized afferent inputs and anisotropic lateral inhibition formed by feedback connections via inhibitory interneurons. The temporal dynamics of neuron responses to oriented stimuli are studied. The results of computer simulations are compared with those of neurophysiological experiments. It is shown that the later phase of a neuron response has sharper orientation tuning than the initial one. It is suggested that the initial phase of a neuron response encodes the intensity parameters of visual stimuli, whereas the later phase encodes their orientation. The design of the neural network preprocessor and the architecture of the system for visual information processing, based on the idea of parallel-sequential processing, are proposed. An example of test-image processing is presented.
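The sharpening effect described above can be illustrated with a toy model (an illustrative sketch with assumed parameters, not the authors' network): a ring of orientation-tuned units whose lateral inhibition, fed back over a few time steps, narrows an initially broad tuning curve, mirroring the reported early-phase / late-phase difference.

```python
import numpy as np

def sharpen(afferent, strength=0.5, offset=5, steps=5):
    """Iterate inhibitory feedback from units at +/- offset ring positions."""
    r = afferent.copy()
    for _ in range(steps):
        inhibition = strength * (np.roll(r, offset) + np.roll(r, -offset)) / 2.0
        r = np.maximum(afferent - inhibition, 0.0)   # rectified response
    return r

theta = np.linspace(-90.0, 90.0, 37)          # preferred orientations, deg
afferent = np.exp(-(theta / 40.0) ** 2)       # broad initial tuning
late = sharpen(afferent)                      # sharper late-phase tuning
```

Because the inhibition comes from units tuned to neighboring orientations, the flanks of the curve are suppressed relatively more than the peak, so the late response is more narrowly tuned while the preferred orientation is preserved.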
Relaxation properties and learning paradigms in complex systems
With respect to the three paradigms of neural networks generally studied (convergent, oscillatory, chaotic), a fourth is proposed that, in a general sense, subsumes the preceding three as particular cases. It is defined as a nonstationary model of a spin-glass-like neural net. It has dynamics both on the spins and on the weights, so as to grant the net a continuous redefinition of its phase space on a purely dynamic basis. The system thus displays different behaviors (noisy, chaotic, stable) as a function of its finite temporal order parameter, i.e., as a function of a finite correlation among the spins acting on the weight dynamics. A first analysis of this model, which makes the probability distribution function on the spins nonstationary, is developed in comparison with several paradigms of relaxation neural nets developed in the classical framework of statistical mechanics. The nonstationary, analytically unpredictable, but deterministic and hence computable behavior of such a model is useful for making a neural net able to handle recognition of nonsteady inputs and semantic problems.
Session 11
Generation of exploratory schedules in closed loop for enhanced machine learning
Allon Guez, Ziauddin Ahmad
The work presented here is an extension of previous work, where estimation of the parameters of a plant was incorporated through exploratory schedules (ES), which are reference input trajectories designed to enhance the learning of system parameters. ESes were earlier generated off-line and used in an open-loop fashion. Moreover, these ESes were used between actual control tasks, therefore limiting the process of estimation to idle time. Here the authors attempt to generate ESes in a closed-loop manner. Such trajectories in general may not be the desired trajectories, resulting in larger tracking errors. However, ESes offer faster convergence to the system parameters and therefore yield smaller long-term tracking errors. Automating the design of ESes requires on-line modification of the desired trajectory to enhance learning at the expense of poorer initial tracking.
Video-image-based neural network guidance system with adaptive view-angles for autonomous vehicles
Paul G. Luebbers, Abhijit S. Pandya
This paper describes the guidance function of an autonomous vehicle based on a neural network controller using video images with adaptive view angles for sensory input. The guidance function for an autonomous vehicle provides the low-level control required for maintaining the autonomous vehicle on a prescribed trajectory. Neural networks possess unique properties, such as the ability to perform sensor fusion, the ability to learn, and fault-tolerant architectures, qualities which are desirable for autonomous vehicle applications. To demonstrate the feasibility of using neural networks in this type of application, an Intelledex 405 robot fitted with a video camera and vision system was used to model an autonomous vehicle with a limited range of motion. In addition to fixed-angle video images, a set of images with adaptively varied view angles based on speed is used as the input to the neural network controller. It was shown that the neural network was able to control the autonomous vehicle model along a path composed of path segments unlike the exemplars with which it was trained. This system was designed to assess only the guidance system; it was assumed that the other functions employed in autonomous vehicle control systems (mission planning, navigation, and obstacle avoidance) are implemented separately and provide a desired path to the guidance system. The desired path trajectory is presented to the robot in the form of a two-dimensional path, with centerline, that is to be followed. A video camera and associated vision system provide video image data as control feedback to the guidance system. The neural network controller uses Gaussian curves for the output vector to facilitate interpolation and generalization of the output space.
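The Gaussian output coding mentioned at the end of the abstract can be sketched as follows (the function names and parameter values are assumptions for illustration, not the paper's): instead of one output neuron per discrete command, the target vector is a Gaussian bump centered on the desired command, so neighboring outputs are partially active and commands between neuron centers can be decoded by interpolation.

```python
import numpy as np

def encode(command, centers, sigma=0.15):
    """Target output vector: a Gaussian bump over the output neurons."""
    return np.exp(-((centers - command) ** 2) / (2 * sigma ** 2))

def decode(activations, centers):
    """Activation-weighted centroid interpolates between neuron centers."""
    return float(np.sum(activations * centers) / np.sum(activations))

centers = np.linspace(-1.0, 1.0, 11)      # 11 output neurons
target = encode(0.37, centers)            # 0.37 lies between two centers
recovered = decode(target, centers)       # decodes close to 0.37
```

The smooth target shape is what facilitates generalization: nearby commands produce overlapping output patterns rather than disjoint one-hot vectors.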
Parameter estimation for process control with neural networks
Tariq Samad, Anoop Mathur
An application of neural networks to the problem of parameter estimation for process systems is described. Neural network parameter estimators for a given parametrized model structure can be developed by supervised learning. Training examples can be dynamically generated using a process simulation, resulting in trained networks that are capable of high generalization. This approach can be used for a variety of parameter estimation applications. A proof-of-concept open-loop delay estimator is described, and extensive simulation results are detailed. Some results of other parameter estimation networks are also given. Extensions to recursive and closed-loop identification and application to higher-order processes are discussed.
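The training-data-generation idea can be sketched in a few lines (assumed details, not the authors' setup): a first-order process is simulated for many parameter values, and an estimator is fitted by supervised learning to map the sampled response back to the parameter. A linear least-squares read-out stands in here for the neural network.

```python
import numpy as np

t = np.array([1.0, 2.0, 4.0, 8.0])         # sampling instants (assumed)

def simulate(tau):
    """Step response of a first-order process with time constant tau."""
    return 1.0 - np.exp(-t / tau)

taus = np.linspace(1.0, 5.0, 81)           # dynamically generated examples
X = np.array([simulate(tau) for tau in taus])
X = np.hstack([X, np.ones((len(taus), 1))])   # bias column
w, *_ = np.linalg.lstsq(X, taus, rcond=None)

# estimate the parameter of a response not seen during training
y = np.append(simulate(2.73), 1.0)
tau_hat = float(y @ w)
```

Because examples are generated on demand from the simulation, the training set can cover the whole parameter range densely, which is what gives the trained estimator its generalization.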
Knowledge representation by dynamic competitive learning techniques
Janos Racz, Tamas Klotz
The competitive learning technique is a well-known neural network algorithm that classifies input vectors so that vectors (samples) belonging to the same class have similar characteristics. Each class is represented by one unit. Dynamic competitive learning is an unsupervised learning technique that adds two parts to conventional competitive learning: a method for generating new units within a cluster and a method for generating new clusters. Unlike in conventional multilayered neural networks, the number of clusters, their connections, and the generation of new units are determined dynamically during learning. The model is capable of high-level storage of complex data structures and their classification, including exception handling.
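A minimal sketch of the dynamic-unit-generation idea (thresholds and rates are assumptions, not the paper's): plain competitive learning moves the winning unit toward each input, and a new unit is spawned whenever no existing unit matches well enough.

```python
import numpy as np

def dynamic_competitive(samples, threshold=1.0, rate=0.2):
    units = [samples[0].copy()]
    for x in samples[1:]:
        d = [np.linalg.norm(x - u) for u in units]
        best = int(np.argmin(d))
        if d[best] > threshold:
            units.append(x.copy())                    # generate a new unit
        else:
            units[best] += rate * (x - units[best])   # conventional update
    return np.array(units)

rng = np.random.default_rng(0)
# two well-separated clouds should end up represented by two units
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
units = dynamic_competitive(data)
```

The threshold controls the trade-off between creating new clusters and refining existing ones; a real system would also need a rule for retiring rarely winning units.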
Controller implemented by recording the fuzzy rules by backpropagation neural networks
Xingren Ying, Nan Zeng
A more natural way of using experience than fuzzy reasoning is provided in this paper. An abstract "concept" is expressed by a set of neurons with different degrees of excitation. The abstract experience rules are thus transformed into input-output samples of a multilayer neural network, and these samples are recorded in the network by the back-propagation algorithm. The controller utilizes these experiences through associative memory. The design, simulation results, features, and further development of this controller are discussed.
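The encoding step can be sketched as follows (the membership functions and labels are assumptions for illustration): a linguistic concept such as "temperature is cool" is represented not by one neuron but by a set of neurons excited to different degrees. Triangular membership functions turn a crisp value into such an activation pattern, which can then serve as one side of an input-output training sample for a back-propagation net.

```python
import numpy as np

def fuzzify(x, centers, width=1.0):
    """Triangular membership degrees of x over a set of concept neurons."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)

centers = np.array([0.0, 1.0, 2.0, 3.0])   # cold, cool, warm, hot (assumed)
pattern = fuzzify(1.25, centers)           # mostly "cool", a little "warm"
```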
Ways of the high-speed increasing of magneto-optical spatial light modulators
Vladimir V. Randoshkin
The problem of realizing high speed in magneto-optical spatial light modulators (MO SLM), including the diffusion-annealed MO SLM as well as one with full cell decoupling, is discussed. Factors influencing the speed increase include film composition, orientation, and structure variations, as well as different cell-switching mechanisms. For the usual current control, there are three possibilities for increasing domain-wall velocity: bismuth-substituted iron garnet (BiIG) films with a high gyromagnetic ratio; (210)-oriented BiIG films with orthorhombic anisotropy; and in-plane magnetic field application. One possibility for increasing the cell-switching speed under the usual current control is the use of BiIG films with bistable magnetic bubble domains.
Invariant pattern recognition via higher order preprocessing and backprop
Jon P. Davis, William A. Schmidt
Higher-order neural networks are a variation of the standard back-propagation neural network, using geometrically motivated nonlinear combinations of scene pixel values as a feature space. The effects of varying feature size (in number of pixels), scene size, number of features, summation-over-scene versus maximum-over-scene, and number of hidden layers are examined.
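A minimal sketch of one such geometrically motivated feature (assumed details): the product of pixel values at a fixed relative offset, summed over the scene. Because the sum runs over all positions, the feature value is unchanged when the pattern translates within the scene, which is the geometric motivation behind higher-order preprocessing.

```python
import numpy as np

def second_order_feature(img, dr, dc):
    """Sum over the scene of products of pixel pairs at offset (dr, dc)."""
    a = img[:img.shape[0] - dr, :img.shape[1] - dc]
    b = img[dr:, dc:]
    return float(np.sum(a * b))

# the same 2x2 pattern at two different positions gives the same feature
img1 = np.zeros((8, 8)); img1[1:3, 1:3] = 1.0
img2 = np.zeros((8, 8)); img2[4:6, 3:5] = 1.0
f1 = second_order_feature(img1, 0, 1)
f2 = second_order_feature(img2, 0, 1)
```

Feeding such features to a back-propagation network, rather than raw pixels, is what gives the combined system its built-in translation invariance.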
Multisensor object segmentation using a neural network
Patrick T. Gaughan, Gerald M. Flachs, Jay B. Jordan
A neural network architecture is presented to segment objects using multiple sensor/feature images. The neural architecture consists of a region growing net to separate an object from the surrounding background based upon local statistical properties. The region growing net consists of a lattice of neural processing elements for propagating a similarity activity between image pixels. A potential function approach is presented to define the neural weights by measuring pixel similarity in multisensor/feature images. The neural segmenter is evaluated by comparing its performance to that of an architecture using a statistical decision-theoretic technique.
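The similarity-propagation idea is close in spirit to classical region growing, which can be sketched as follows (assumed details, and a single feature image rather than multisensor input): starting from a seed pixel, neighbors are absorbed into the object region whenever their value is close enough to the seed's, and membership propagates outward.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.2):
    """Grow a region from seed, absorbing similar 4-connected neighbors."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(img[nr, nc] - img[seed]) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.zeros((6, 6)); img[2:4, 2:4] = 1.0   # bright object, dark background
mask = region_grow(img, (2, 2))               # recovers the 2x2 object
```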
Texture classification and neural network methods
Some neural network based methods for texture classification and segmentation have been published. The motivation for this kind of work might be questioned, because there are many traditional methods that work well. In this paper, a neural network based method for stochastic texture classification and segmentation suggested by Visa is compared with the traditional K-means and k-nearest neighbor classification methods. Both simulated and real data are used. The complexity of the considered methods is also analyzed. The conclusion is that the K-means method is the least successful of the three tested methods. The developed method is slightly more powerful than the k-nearest neighbor method for map sizes 9 × 9 and 10 × 10. The differences are, however, quite small. This means that the choice of classification method depends more on other aspects, like computational complexity and learning capability, than on classification capability.
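The k-nearest-neighbor baseline can be sketched in a few lines (the feature choice here, patch mean and variance, is an assumption for illustration): each texture patch is summarized by simple local statistics, and a new patch is assigned the majority label among its k closest training patches in feature space.

```python
import numpy as np

def knn_classify(train_x, train_y, x, k=3):
    """Majority label among the k nearest training feature vectors."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

def features(patch):
    return np.array([patch.mean(), patch.var()])

rng = np.random.default_rng(1)
# two synthetic "textures": low-variance vs high-variance noise
smooth = [features(rng.normal(0, 0.1, (8, 8))) for _ in range(20)]
rough  = [features(rng.normal(0, 1.0, (8, 8))) for _ in range(20)]
train_x = np.array(smooth + rough)
train_y = np.array([0] * 20 + [1] * 20)
label = knn_classify(train_x, train_y, features(rng.normal(0, 1.0, (8, 8))))
```

The abstract's point is visible even in this toy: once the features separate the classes, any reasonable classifier works, and the choice hinges on cost and learning behavior rather than raw accuracy.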
Feature extractor giving distortion invariant hierarchical feature space
Jouko Lampinen
A block-structured neural feature extraction system is proposed whose distortion tolerance is built up gradually by successive blocks in a pipeline architecture. The system consists only of feedforward neural networks, allowing efficient parallel implementation. The feature extraction is based on distortion-tolerant Gabor transformation and minimum-distortion clustering by hierarchical self-organizing feature maps (SOFM). Due to the unsupervised learning strategy, there is no need for preclassified training samples or other explicit selection of training patterns during training. A subspace classifier implemented on top of the feature extractor is demonstrated. The current experiments indicate that the feature space has sufficient resolving power for a small number of classes with rather strong distortions. The amount of supervised training required is very small, due to the many unsupervised stages refining the data to be suitable for classification.
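The first stage can be sketched as follows (parameter values are assumptions): a Gabor filter is a complex sinusoid under a Gaussian envelope, and the magnitude of its response varies only smoothly under small translations of the input, which is what makes Gabor features tolerant to mild distortions.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Complex Gabor kernel: Gaussian envelope times oriented sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(2j * np.pi * xr / wavelength)
    return envelope * carrier

g = gabor_kernel()
```

A bank of such kernels at several orientations and wavelengths, convolved with the image, would form the input to the SOFM clustering stages described above.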
Self-organized criticality in neural networks
Vladimir I. Makarenkov, A. B. Kirillov
Possible mechanisms for creating different types of persistent states for information processing are considered. Two origins of criticality are presented: self-organization and phase transition. A comparative analysis of their behavior is given. It is demonstrated that, despite their similarity, there are important differences. These differences can play a significant role in explaining the physical basis of such higher brain functions as short-term memory and attention.
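Self-organized criticality itself is most easily seen in the textbook Bak-Tang-Wiesenfeld sandpile (an illustration of the concept, not the authors' network model): grains are added at random sites, any site holding four or more grains topples and sheds one grain to each neighbor, and grains falling off the edge leave the system. Without any parameter tuning, the system settles into a critical state with avalanches of all sizes.

```python
import numpy as np

def topple(grid):
    """Relax the grid until every site holds fewer than 4 grains."""
    while True:
        rows, cols = np.where(grid >= 4)
        if len(rows) == 0:
            return grid
        for r, c in zip(rows, cols):
            grid[r, c] -= 4
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]:
                    grid[nr, nc] += 1    # edge grains are simply lost

rng = np.random.default_rng(0)
grid = np.zeros((10, 10), dtype=int)
for _ in range(2000):
    r, c = rng.integers(0, 10, size=2)
    grid[r, c] += 1
    topple(grid)
```

The contrast with an ordinary phase transition is that here no control parameter (temperature, coupling strength) is tuned to the critical point; the dynamics drive the system there by itself.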
Session 3
Artificial neural net learns the Sobel operators (and more)
Scott W. Weller
Well-known techniques for image segmentation and edge detection involve the Sobel operators. A single-slab functional-link extended neural net, trained using the back-propagation delta rule to perform edge detection without a priori knowledge of any particular detection algorithm, is described.
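For reference, the Sobel operators the network is compared against can be applied directly (a standard formulation, not the paper's code):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via valid 3x3 correlation with the Sobel kernels."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            patch = img[r:r+3, c:c+3]
            gx = np.sum(patch * SOBEL_X)
            gy = np.sum(patch * SOBEL_Y)
            out[r, c] = np.hypot(gx, gy)
    return out

img = np.zeros((6, 6)); img[:, 3:] = 1.0    # vertical step edge
edges = sobel_magnitude(img)                # responds along the step
```

A network trained on (patch, edge-strength) pairs can recover exactly this mapping, which is the sense in which the net "learns the Sobel operators".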
Fast optoelectronic neurocomputer for character recognition
Optical implementations of neural networks utilize the inherent parallelism of optics to form the large number of interconnections required by neural networks. By carrying out computations in parallel, the processing speed of such systems can be substantial, despite the relatively slow response times of the optical devices. In this paper, a single-layer neural network is presented which uses ferroelectric liquid crystal (FLC) spatial light modulators (SLM) to represent input patterns and weighted interconnections. The learning example for the network is handwritten character recognition. The experiment shows that this network successfully recognizes 58 of the handwritten patterns from the training set when the synaptic weights have five grey levels and a dynamic range from -1 to +1. Computer simulations of networks indicate that by increasing the number of grey levels to eleven and the dynamic range to -12.5 to +12.5, this net easily learns to recognize all the handwritten patterns in the training set. It also correctly recognizes 60 of the test patterns.
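The weight-quantization constraint can be sketched as follows (assumed details): because the SLM can only display a few grey levels, trained synaptic weights must be snapped to the nearest of, e.g., five levels spanning -1 to +1 before being written to the device, and it is this quantized network whose recognition rate the experiment measures.

```python
import numpy as np

def quantize(weights, levels=5, lo=-1.0, hi=1.0):
    """Snap each weight to the nearest of `levels` evenly spaced grey levels."""
    grid = np.linspace(lo, hi, levels)             # e.g. [-1, -0.5, 0, 0.5, 1]
    idx = np.argmin(np.abs(weights[..., None] - grid), axis=-1)
    return grid[idx]

w = np.array([0.93, -0.31, 0.10, -0.77])           # hypothetical trained weights
wq = quantize(w)                                   # device-displayable weights
```

Increasing `levels` and the range `[lo, hi]` reduces the quantization error, which is consistent with the simulated improvement reported for eleven grey levels.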