Proceedings Volume 3812

Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation II

Bruno Bosacchi, David B. Fogel, James C. Bezdek
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 November 1999
Contents: 7 Sessions, 24 Papers, 0 Presentations
Conference: SPIE's International Symposium on Optical Science, Engineering, and Instrumentation 1999
Volume Number: 3812

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Evolutionary Computation and Applications
  • Session 6
  • Computational Intelligence in Communications
  • Session 3
  • Session 4
  • Session 5
  • Session 6
  • Poster Session
Evolutionary Computation and Applications
Theoretical developments in evolutionary computation
Recent developments in the theory of evolutionary computation offer evidence and proofs that overturn several conventionally held beliefs. In particular, the no free lunch theorem and other related theorems show that there can be no best evolutionary algorithm, and that no particular variation operator or selection mechanism provides a general advantage over another choice. Furthermore, the fundamental notion of schema processing is called into question by recent theory showing that the schema theorem does not hold when schema fitness is stochastic. Moreover, the analysis that underlies schema theory, namely the k-armed bandit analysis, does not generate a sampling plan that yields an optimal allocation of trials, as has been suggested in the literature for almost 25 years. The importance of these new findings is discussed in the context of future progress in the field of evolutionary computation.
Medical image segmentation using genetic snakes
In this paper an approach is described for segmenting medical images. We use active contour models, also known as snakes, and we propose an energy minimization procedure based on Genetic Algorithms (GA). The widely recognized power of deformable models stems from their ability to segment anatomic structures by exploiting constraints derived from the image data together with a priori knowledge about the location, size, and shape of these structures. The application of snakes to extract regions of interest is, however, not without limitations. As is well known, this approach suffers from a number of problems, such as sensitivity to initialization, the existence of multiple minima, and the selection of elasticity parameters. We propose the use of GAs to overcome these limits. GAs offer a global search procedure that has shown its robustness in many tasks, and they are not limited by restrictive assumptions such as the existence of derivatives of the goal function. GAs operate on a coding of the parameters (the positions and the total number of snake points), and their fitness function is the total snake energy. We employ a modified version of the image energy which considers both the magnitude and the direction of the gradient and the Laplacian of Gaussian. Experimental results on synthetic images as well as on medical images are reported. The images used in this work are ocular fundus images, in which snakes prove very useful for segmenting the Foveal Avascular Zone. The experiments performed with ocular fundus images show that the proposed method is promising for the early detection of diabetic retinopathy.
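The idea of a GA minimizing a total snake energy can be sketched as follows. This is a minimal illustration under invented assumptions: the external energy is a synthetic circular "edge" of radius 10 rather than real image gradients, and all GA parameters are arbitrary choices, not the authors' settings.

```python
import random

def image_energy(x, y):
    """Toy external energy: minimal on a circular 'edge' of radius 10
    centred at the origin (a synthetic stand-in for real image gradients)."""
    r = (x * x + y * y) ** 0.5
    return (r - 10.0) ** 2

def snake_energy(points, alpha=0.1):
    """Total snake energy: elastic internal term plus the toy image term."""
    n = len(points)
    total = 0.0
    for i, (x, y) in enumerate(points):
        xn, yn = points[(i + 1) % n]
        total += alpha * ((x - xn) ** 2 + (y - yn) ** 2)  # elasticity
        total += image_energy(x, y)                       # image attraction
    return total

def ga_minimize(n_points=8, pop_size=40, generations=200, seed=1):
    """Genetic search over snake configurations: a chromosome encodes the
    control-point coordinates, and the fitness is the total snake energy."""
    rng = random.Random(seed)
    pop = [[(rng.uniform(-15, 15), rng.uniform(-15, 15)) for _ in range(n_points)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=snake_energy)          # rank by energy (lower is fitter)
        survivors = pop[:pop_size // 2]     # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_points)                 # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.5:                           # Gaussian mutation
                i = rng.randrange(n_points)
                x, y = child[i]
                child[i] = (x + rng.gauss(0, 1), y + rng.gauss(0, 1))
            children.append(child)
        pop = survivors + children
    return min(pop, key=snake_energy)
```

Because a GA only evaluates the energy, never its derivatives, the same loop works for non-differentiable image terms, which is the advantage the abstract points to.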
Investigation of image feature extraction by a genetic algorithm
We describe the implementation and performance of a genetic algorithm which generates image feature extraction algorithms for remote sensing applications. We describe our basis set of primitive image operators and present our chromosomal representation of a complete algorithm. Our initial application has been geospatial feature extraction using publicly available multi-spectral aerial-photography data sets. We present the preliminary results of our analysis of the efficiency of the classic genetic operations of crossover and mutation for our application, and discuss our choice of evolutionary control parameters. We exhibit some of our evolved algorithms, and discuss possible avenues for future progress.
Investigation of new operators for a diploid genetic algorithm
Sima Etaner Uyar, A. Emre Harmanci
This study involves diploid genetic algorithms, in which a diploid representation of individuals is used. This type of representation allows characteristics that may not be visible in the current population to be preserved in the structure of the individuals and then be expressed in a later generation. Thus it prevents traits that may be useful from being lost, and it also helps add diversity to the genetic pool of the population. In conformance with the diploid representation of individuals, a reproductive scheme is employed which models the meiotic cell division for gamete formation in diploid organisms in nature. A domination strategy is applied for mapping an individual's genotype onto its phenotype. The domination factor of each allele at each location is determined by way of a statistical scan of the population in the previous generation. Classical operators such as crossover and mutation are also used in the new reproductive routine. The next generation of individuals is chosen via a fitness-proportional method from among the parents and the offspring combined. To prevent early convergence and the takeover of the population by certain individuals over generations, an age counter is added. The effectiveness of this algorithm is shown by comparing it with the simple genetic algorithm on various test functions.
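The two pieces specific to the diploid scheme, the statistical scan and the genotype-to-phenotype mapping, can be sketched as below. The rule that the rarer allele dominates is one plausible reading of frequency-based domination factors; the paper's exact rule may differ.

```python
def allele_frequencies(population):
    """Statistical scan of a generation: the fraction of 1-alleles at each
    locus, counting both chromosome strands of every diploid individual."""
    n_loci = len(population[0][0])
    return [sum(ind[s][i] for ind in population for s in (0, 1))
            / (2.0 * len(population)) for i in range(n_loci)]

def express(individual, freqs):
    """Genotype-to-phenotype mapping under a frequency-based domination
    strategy: at heterozygous loci, the allele that was rarer in the
    previous generation dominates (an illustrative assumption)."""
    a, b = individual
    return [a[i] if a[i] == b[i] else (1 if freqs[i] < 0.5 else 0)
            for i in range(len(a))]
```

Note that only the phenotype produced by `express` is evaluated for fitness; the hidden strand rides along in the genotype, which is how recessive traits survive generations.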
Session 6
Adaptive resource allocation in telecommunications
Timothy X. Brown, Hui Tong
This paper looks at the general problem of resource allocation in telecommunication networks. It gives an overview of the problem and argues for adaptive methods in the complex telecommunication environment. In particular it discusses a general methodology known as reinforcement learning. The paper presents two examples--admission control in packet data networks, and battery management for mobile communication.
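The reinforcement learning methodology for admission control can be illustrated with a tabular Q-learning sketch. Everything here is an invented toy, not the paper's model: a link with capacity 4, reward 1 per admitted call, and a fixed call-departure probability.

```python
import random

CAPACITY = 4  # toy link: at most 4 simultaneous calls (illustrative value)

def step(state, action, rng):
    """One environment step. action 1 = admit the arriving call, 0 = reject.
    Admitting earns reward 1 unless the link is full; calls depart randomly."""
    reward = 0.0
    if action == 1 and state < CAPACITY:
        state, reward = state + 1, 1.0
    if state > 0 and rng.random() < 0.3:   # a call finishes
        state -= 1
    return reward, state

def q_learning(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over (calls in progress, admit/reject) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in (0, 1)}
    state = 0
    for _ in range(steps):
        if rng.random() < eps:                          # explore
            action = rng.choice((0, 1))
        else:                                           # exploit current estimate
            action = max((0, 1), key=lambda a: q[(state, a)])
        reward, nxt = step(state, action, rng)
        target = reward + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt
    return q
```

The learned table should value admitting above rejecting whenever capacity remains, which is the adaptive behaviour the abstract advocates over fixed allocation rules.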
Computational Intelligence in Communications
Computational intelligence in management of ATM networks: a survey of the current state of research
Y. Ahmet Sekercioglu, Andreas Pitsillides, Athanasios V. Vasilakos
Designing effective control strategies for Asynchronous Transfer Mode (ATM) networks is known to be difficult because of the complexity of the structure of networks, the nature of the services supported, and the variety of dynamic parameters involved. Additionally, the uncertainties involved in identifying the network parameters make analytical modeling of ATM networks almost impossible, which renders the application of classical control system design methods (which rely on the availability of such models) even harder. Consequently, a number of researchers are looking at alternative non-analytical control system design and modeling techniques that have the ability to cope with these difficulties in order to devise effective, robust ATM network management schemes. These schemes employ artificial neural networks, fuzzy systems, and design methods based on evolutionary computation. This survey summarizes the current state of ATM network management research employing these techniques as reported in the technical literature, and reviews the salient features of the methods employed.
Soft computing and soft communications for synchronized data
In this paper a new algorithmic and hardware approach to real-time processing, computing, compression, and transmission of multi-media (video, imagery, audio, sensor, telemetry, computer data) information, in the form of synchronized data, is proposed. The proposed approach, called Soft Computing and Soft Communication, leads to multi-media throughput minimization and data homogenization.
Training perceptrons for document search over the World Wide Web
Zhixiang Chen, Xiannong Meng, Richard K. Fox, et al.
In this paper we study the problem of searching for documents over the world wide web by training perceptrons. We consider web documents to be represented by vectors of n boolean attributes, so that a search process can be viewed as a way of classifying documents over the web according to the user's requirements. We design a perceptron training algorithm for the search engine, and give a bound on the number of trials needed to search for any collection of documents represented by a disjunction of the relevant boolean attributes.
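The classic mistake-driven perceptron rule on boolean attribute vectors can be sketched as follows; the target concept used in the usage note (a disjunction of two of four attributes) is an invented example, not the paper's algorithm or bound.

```python
def train_perceptron(examples, n, max_epochs=50):
    """Mistake-driven perceptron training on boolean attribute vectors.
    examples: list of (vector, label) pairs, label 1 = relevant document."""
    w = [0.0] * n       # one weight per boolean attribute
    theta = 0.0         # decision threshold
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
            if pred != y:
                mistakes += 1
                for i in range(n):          # additive update toward the label
                    w[i] += (y - pred) * x[i]
                theta -= (y - pred)
        if mistakes == 0:                   # consistent with all examples
            break
    return w, theta
```

Since a disjunction of boolean attributes is linearly separable, the perceptron convergence theorem bounds the number of mistakes, which is the kind of trial bound the abstract refers to.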
Session 3
Generalized clustering using optimization from a statistical mechanics approach
Sujit Joshi, Sunanda Mitra
This work evolves from the concept of deterministic annealing (DA) as a useful tool to solve non-convex optimization problems. DA is used in order to avoid local minima of the given application specific cost function in which traditional techniques get trapped. It is derived within a probabilistic framework from basic information theoretic principles. The application specific cost is minimized subject to a level of randomness (Shannon entropy), which is gradually lowered. A hard (non random) solution emerges at the limit of low temperature after the system goes through an annealing process. This paper deals with the important and useful application of DA to vector quantization of images. An extension of the basic algorithm by incorporating a structural constraint of mass or density is used to allow optimization of vector quantizers. The constrained algorithm is modified to work for a set of systems to generate a more generalized codebook. Experimental results show considerable performance gains over conventional methods.
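The basic DA iteration can be sketched for scalar data: codevectors are re-estimated from Gibbs association probabilities while the temperature is lowered, so hard assignments emerge only in the low-temperature limit. The data, schedule, and two-codevector setup are illustrative assumptions; the paper's constrained, mass-based extension is not modeled here.

```python
import math

def da_vq(data, k=2, t0=10.0, t_min=0.01, cooling=0.8):
    """Deterministic-annealing vector quantisation for scalar data (toy).
    Codevectors are updated from Gibbs probabilities p(j|x) ~ exp(-(x-c_j)^2/T)
    while T is lowered; at high T the codevectors coincide at the data mean
    and split at phase transitions as T drops."""
    mean = sum(data) / len(data)
    centers = [mean + 1e-3 * j for j in range(k)]  # symmetric start, tiny perturbation
    t = t0
    while t > t_min:
        for _ in range(20):                        # fixed-point iterations at this T
            sums = [0.0] * k
            mass = [0.0] * k
            for x in data:
                d2 = [(x - c) ** 2 for c in centers]
                m = min(d2)                        # subtract min: numerical stability
                e = [math.exp(-(d - m) / t) for d in d2]
                z = sum(e)
                for j in range(k):
                    p = e[j] / z                   # soft association probability
                    sums[j] += p * x
                    mass[j] += p
            centers = [sums[j] / mass[j] for j in range(k)]
        t *= cooling                               # anneal: lower the temperature
    return sorted(centers)
```

Because every codevector receives some probability mass at high temperature, the search is insensitive to initialization, which is how DA avoids the local minima the abstract mentions.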
Time series prediction by estimating Markov probabilities through topology preserving maps
Gerhard Dangelmayr, Sabino Gadaleta, Douglas Hundley, et al.
Topology preserving maps derived from neural network learning algorithms are well suited to approximate probability distributions from data sets. We use such algorithms to generate maps which allow the prediction of future events from a sample time series. Our approach relies on computing transition probabilities, modeling the time series as a Markov process. Thus the technique can be applied both to stochastic and to deterministic chaotic data, and it also permits the computation of `error bars' for estimating the quality of predictions. We apply the method to the prediction of measured chaotic and noisy time series.
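The transition-probability idea can be sketched in miniature. A uniform quantisation into intervals stands in for the topology-preserving map (an assumption for brevity; the paper uses learned maps), and the predictor returns the conditional expectation over observed successor states.

```python
def markov_predictor(series, n_states=4):
    """Quantise the series into n_states intervals (a crude stand-in for a
    topology-preserving map), count state-to-state transitions, and predict
    the next value as the conditional expectation over observed successors."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_states or 1.0       # guard against a constant series
    def state(x):
        return max(0, min(int((x - lo) / width), n_states - 1))
    counts = [[0] * n_states for _ in range(n_states)]
    sums = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(series, series[1:]):
        counts[state(a)][state(b)] += 1       # transition frequency
        sums[state(a)][state(b)] += b         # successor values, for the mean
    def predict(x):
        s = state(x)
        total = sum(counts[s])
        if total == 0:
            return x                          # unseen state: no information
        return sum(sums[s]) / total
    return predict
```

The same transition counts also give the spread of successors per state, which is what makes the `error bars' mentioned in the abstract available at no extra cost.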
Project DANA: multiagent simulation and fuzzy rules for international crisis detection--can we forestall wars?
Roger F. Cozien, Andre Colautti
Assessing the conflict potential of an international situation is very important in the exercise of Defence duties, and mastering a formal method allowing the detection of risky situations is a necessity. Our aim was to develop a highly operational method twinned with a computer simulation tool which can explore a huge number of potential war zones and test many hypotheses with high accuracy within a reasonable time. We use a multi-agent system to describe an international situation. The agent coding allows us to give computational existence to very abstract concepts such as a government, the economy, the armed forces, or foreign policy. We give these agents fuzzy rules of behavior; those rules represent human expertise. In order to benchmark our model we used the Falklands war for our first simulations. The main distortion between the historical reality and our simulations comes from our fuzzy controller, which causes a great loss of information; we are going to change it to a more efficient one in order to fit the historical reality. Agent coding with fuzzy rules allows human experts to stay close to their statements and expertise, and they can handle this kind of tool quite easily.
How is luminance information passed into the cortex: emergent multifunctional behavior of a simple cell model
In modeling brightness perception, one problem of high biological relevance is how luminance information is transmitted into the primary visual cortex. This is especially interesting in the light of recent neurophysiological studies, which suggest that simple cells respond, if only weakly, to homogeneously illuminated surfaces. This indicates that simple cells possess far more functional complexity than the widespread notion of mere line and edge detectors suggests. Here we present new neural circuits for modeling even and odd simple cells, capable of transmitting brightness information without using an extra `luminance channel'. Although these circuits taken by themselves cannot yet be regarded as a full brightness model, they may provide some insight into why the visual system uses certain processing strategies. These include, e.g., the segregation into ON and OFF channels and the mutual inhibition of simple cell pairs which are in an anti-phase relation. These simple cell circuits turn out to be robust against noise, and thus might find application in a border detection scheme, besides being a building block for a more sophisticated brightness model.
Neural RNA as a principal dynamic information carrier in a neuron
A quantum mechanical approach has been used to develop a model of neural ribonucleic acid molecule dynamics. Macro and micro Fermi-Pasta-Ulam recurrence is considered as the principal information carrier in a neuron.
Session 4
Hybrid neural networks and their application to particle accelerator control
Emile Fiesler, Shannon R. Campbell
We have tested several predictive algorithms to determine their ability to learn from and find relationships between large numbers of variables. The purpose of this test is to produce control algorithms for sophisticated devices like particle accelerators. In particular, we use COMFORT, a particle accelerator simulator, to generate large amounts of data. We then compare results among several fundamentally different types of algorithms, including least squares and hybrid neural networks. Our data indicate which algorithms perform best in terms of performance and training times.
Granular computing for system modeling
Witold Pedrycz
The study is concerned with the fundamentals of granular computing and its use in system modeling and system simulation. In contrast to numerically driven identification techniques, granular modeling concentrates on building meaningful information granules in the space of experimental data and forming the ensuing model as a web of associations between such constructs. As such, models are designed at the level of information granules and generate results in the same granular, rather than purely numeric, format. First, we elaborate on the role of information granules viewed as basic building modules exploited in model development. Second, we show how information granules are constructed. It is shown how to express relationships (links) between information granules; here two measures of linkage are discussed, namely a relevance index and a notion of fuzzy correlation. Granular computing involves a number of layers whose existence is implied by different levels of information granularity. We show how to move between these layers by using transformations that encode and decode information granules. Subsequently, some generic architectures of granular modeling are discussed.
Analysis of large-scale digital optical neural networks by Feynman diagrams
A new method to study large-scale neural networks is presented in this paper. Its basis is the use of Feynman-like diagrams. These diagrams allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the many-body problem. The proposed method is applied to a very simple structure composed of a string of neurons with interactions among them. It is shown that a new behavior appears at the end of the row, different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the proper frequencies of the individual neurons. An application to an optical neural network is reported.
Session 5
Application of fuzzy logic in intelligent software agents for IP selection
Jian Liu, Eugene B. Shragowitz
IPs (Intellectual Properties) are becoming increasingly essential in today's electronic system design. One of the important issues in design reuse is IP selection, i.e., finding an existing solution that best matches the user's expectations. This paper describes an Internet-based intelligent software system (software agent) that helps the user pick out the optimal designs among those marketed by IP vendors. The Software Agent for IP Selection (SAFIPS) conducts dialogues with both the IP users and the IP vendors, narrowing the choices by evaluating general characteristics first, followed by matching at the behavioral, RTL, logic, and physical levels. The SAFIPS system conducts reasoning based on fuzzy logic rules derived in the process of the software agent's dialogues with the IP users and vendors. In addition to the dialogue system and fuzzy logic inference system, SAFIPS includes an HDL simulator and fuzzy logic evaluator that are used to measure how well the user's behavioral model matches the IP vendor's model.
Neural nets for modeling, optimization, and control in semiconductor manufacturing
Roop L. Mahajan
This paper provides an overview of our recent work in the development of neural network models for optimization and control of electronic manufacturing processes. The concept of physical-neural network models and model transfer are described and demonstrated to be effective in building accurate neural network models economically. Process diagnostic techniques using multiple neural networks are reviewed and shown to be accurate for fault diagnosis. Finally, recent strategies in integration of statistical and neural network tools for process control are discussed. Several examples from electronics manufacturing such as chemical vapor deposition and fine pitch stencil printing are described to illustrate application of the basic concepts discussed.
Solving nonlinear engineering problems with the aid of neural networks
Andrew H. Sung, Hujun J. Li, Shih-Hsien Chang, et al.
In this paper, a technique is presented for using neural networks as an aid for solving nonlinear engineering problems, which are encountered in optimization, simulations and modeling, or complex engineering calculations. Iterative algorithms are often used to find the solutions of such problems. For many large-scale engineering problems, finding good starting points for the iterative algorithms is the key to good performance. We describe using neural networks to select starting points for the iterative algorithms for nonlinear systems. Since input/output training data are often easily obtained from the problem description or from the system equations, a neural network can be trained to serve as a rough model of the underlying problem. After the neural network is trained, it is used to select starting points for the iterative algorithms. We illustrate the method with four small nonlinear equation groups; two real applications in petroleum engineering are also given to demonstrate the method's potential application in engineering.
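The train-a-rough-model-then-iterate idea can be sketched on a toy problem family. Everything here is invented for illustration, not the paper's examples: the problem is x^2 - a = 0 for a parameter a, the "rough model" is a tiny one-hidden-layer tanh network trained by SGD, and Newton's method is the iterative algorithm.

```python
import math, random

def train_net(samples, hidden=8, epochs=3000, lr=0.05, seed=0):
    """Tiny one-hidden-layer tanh network trained by SGD; it learns a rough
    map from the problem parameter a to an approximate root of x^2 - a = 0.
    Inputs are scaled by 1/10 to keep the tanh units out of saturation."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        a, y = samples[rng.randrange(len(samples))]
        s = a / 10.0
        h = [math.tanh(w1[j] * s + b1[j]) for j in range(hidden)]
        out = sum(w2[j] * h[j] for j in range(hidden)) + b2
        err = out - y                          # squared-error gradient (up to 2x)
        b2 -= lr * err
        for j in range(hidden):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * s
            b1[j] -= lr * grad_h
    def predict(a):
        s = a / 10.0
        return sum(w2[j] * math.tanh(w1[j] * s + b1[j]) for j in range(hidden)) + b2
    return predict

def newton(a, x0, tol=1e-10, max_iter=100):
    """Newton's method for f(x) = x^2 - a; returns (root, iterations used)."""
    x = x0
    for i in range(1, max_iter + 1):
        x -= (x * x - a) / (2 * x)
        if abs(x * x - a) < tol:
            return x, i
    return x, max_iter
```

Starting Newton from the network's prediction rather than from an arbitrary point typically saves iterations, which is the performance gain the abstract describes.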
Session 6
Applications of soft computing in petroleum engineering
Andrew H. Sung
This paper describes several applications of neural networks and fuzzy logic in petroleum engineering that have been, or are being, developed recently at New Mexico Tech. These real-world applications include a fuzzy controller for drilling operation; a neural network model to predict the cement bonding quality in oil well completion; using neural networks and fuzzy logic to rank the importance of input parameters; and using fuzzy reasoning to interpret log curves. We also briefly describe two ongoing, large-scale projects on the development of a fuzzy expert system for prospect risk assessment in oil exploration; and on combining neural networks and fuzzy logic to tackle the large-scale simulation problem of history matching, a long- standing difficult problem in reservoir modeling.
Bayesian networks for satellite payload testing
Krzysztof Wojtek Przytula, Frank Hagen, Kar Yung
Satellite payloads are fast increasing in complexity, resulting in commensurate growth in the cost of manufacturing and operation. A need exists for a software tool which would assist engineers in the production and operation of satellite systems. We have designed and implemented a software tool which performs part of this task. The tool aids a test engineer in debugging satellite payloads during system testing. At this stage of satellite integration and testing, both the tested payload and the testing equipment represent complicated systems consisting of a very large number of components and devices. When an error is detected during execution of a test procedure, the tool presents to the engineer a ranked list of potential sources of the error and a list of recommended further tests. The engineer then decides on this basis whether to perform some of the recommended additional tests or to replace the suspect component. The tool has been installed in a payload testing facility. The tool is based on Bayesian networks, a graphical method of representing uncertainty in terms of probabilistic influences. The Bayesian network was configured using detailed flow diagrams of testing procedures and block diagrams of the payload and testing hardware. The conditional and prior probability values were initially obtained from experts and refined in later stages of design. The Bayesian network provided a very informative model of the payload and testing equipment and inspired many new ideas regarding future test procedures and testing equipment configurations. The tool is the first step in developing a family of tools for various phases of satellite integration and operation.
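The ranked list of potential error sources follows from Bayes' rule. A drastically simplified sketch: a single observed test outcome, a single-fault assumption (mutually exclusive fault priors summing to 1), and made-up component names and probabilities; a real Bayesian network propagates evidence through many such nodes.

```python
def fault_posteriors(priors, fail_prob, test_failed=True):
    """Rank candidate fault sources by posterior probability after one test,
    via Bayes' rule. priors[f] = P(fault f), assumed mutually exclusive;
    fail_prob[f] = P(observed test fails | fault f). Names are hypothetical."""
    joint = {f: priors[f] * (fail_prob[f] if test_failed else 1.0 - fail_prob[f])
             for f in priors}
    z = sum(joint.values())                      # normalising constant
    return sorted(((f, p / z) for f, p in joint.items()),
                  key=lambda fp: -fp[1])         # most probable fault first
```

A component with a modest prior can still top the ranking if it explains the failed test much better than the alternatives, which is exactly the behaviour a test engineer wants from the tool.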
Adaptive image segmentation neural network: application to Landsat images
Jose L. Alba Castro, Susana M. Rey, Laura Docio
In this paper we introduce an adaptive image segmentation neural network based on a Gaussian mixture classifier that is able to accommodate unlabeled data in the training process to improve generalization when labeled data are insufficient. The classifier is trained by maximizing the joint likelihood of features and labels over the whole data set (labeled and unlabeled). The classifier builds grey-level images with estimates of the class posteriors (as many images as classes) that feed the segmentation algorithm. The paper focuses on the adaptive classification part of the algorithm. The classification tests are performed on Landsat TM mini-scenes. We assess the efficiency of the adaptive classifier depending on the model complexity and the proportion of labeled to unlabeled data.
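The joint-likelihood training over labeled and unlabeled data can be sketched in a minimal 1-D, two-class analogue: labels fix the responsibilities of labeled points, while unlabeled points receive soft responsibilities from the current model in each E-step. The data layout and the assumption of one Gaussian per class are illustrative simplifications.

```python
import math

def norm_pdf(x, mu, var):
    return math.exp(-((x - mu) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def semi_em(labeled, unlabeled, iters=50):
    """Two-class 1-D Gaussian classifier fit by EM on the joint likelihood of
    labeled and unlabeled points. Assumes at least one labeled point per class."""
    mu = [sum(x for x, y in labeled if y == k) /
          max(1, sum(1 for _, y in labeled if y == k)) for k in (0, 1)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = [(x, float(y)) for x, y in labeled]   # labels fix responsibilities
        for x in unlabeled:                          # E-step on unlabeled data
            p0 = pi[0] * norm_pdf(x, mu[0], var[0])
            p1 = pi[1] * norm_pdf(x, mu[1], var[1])
            resp.append((x, p1 / (p0 + p1)))
        for k in (0, 1):                             # M-step: weighted moments
            w = [r if k == 1 else 1.0 - r for _, r in resp]
            total = sum(w)
            mu[k] = sum(wi * x for wi, (x, _) in zip(w, resp)) / total
            var[k] = max(sum(wi * (x - mu[k]) ** 2
                             for wi, (x, _) in zip(w, resp)) / total, 1e-6)
            pi[k] = total / len(resp)
    def classify(x):
        s0 = pi[0] * norm_pdf(x, mu[0], var[0])
        s1 = pi[1] * norm_pdf(x, mu[1], var[1])
        return 0 if s0 >= s1 else 1
    return classify
```

The unlabeled points sharpen the class means and variances beyond what the few labeled points alone would allow, which is the generalization benefit the abstract claims.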
Application of fuzzy logic to feature extraction from images of agricultural material
Imaging technology has extended itself from performing gauging on machined parts, to verifying labeling on consumer products, to quality inspection of a variety of man-made and natural materials. Much of this has been made possible by faster computers and algorithms used to extract useful information from the image. In the application to agricultural material, specifically tobacco leaves, the tremendous amount of natural variability in color and texture creates new challenges for image feature extraction. As with many imaging applications, the problem can be expressed as `I see it in the image, how can I get the computer to recognize it?' In this application, the goal is to measure the amount of thick stem pieces in an image of tobacco leaves. By backlighting the leaf, the stems appear dark on a lighter background. The difference in lightness of leaf versus darkness of stem depends on the orientation of the leaf and the amount of folding. Because of this, any image thresholding approach must be adaptive. Another factor that allows us to distinguish the stem from the leaf is shape: the stem is long and narrow, while dark folded leaf is larger and more oblate. These criteria, together with the limitations of image collection, make this a good application for fuzzy logic. Several generalized classification algorithms, such as fuzzy c-means and fuzzy learning vector quantization, are evaluated and compared. In addition, fuzzy thresholding based on image shape and compactness is applied to this application.
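One of the algorithms the abstract evaluates, fuzzy c-means, can be sketched with the standard update equations on scalar data (e.g. pixel intensities); the initialization at the data extremes and the 1-D data are simplifying assumptions for illustration.

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=100):
    """Standard fuzzy c-means on scalar data: membership u[i][j] of point i
    in cluster j lies in [0, 1] and sums to 1 over j; centers are
    membership-weighted means. Centers start at the data extremes."""
    centers = [min(data), max(data)] if c == 2 else data[:c]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid division by zero
            # u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(c)) for j in range(c)])
        centers = [sum((u[i][j] ** m) * data[i] for i in range(len(data)))
                   / sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers, u
```

Unlike a hard threshold, the memberships grade how stem-like or leaf-like an ambiguous pixel is, which suits the adaptive thresholding requirement described above.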
Poster Session
Neural networks vs. nonparametric neighbor-based classifiers for semisupervised classification of Landsat imagery
Perry J. Hardin
Semisupervised classification is one approach to converting multiband optical and infrared imagery into landcover maps. First, a sample of image pixels is extracted and clustered into several classes. The analyst next combines the clusters by hand to create a smaller set of groups that correspond to a useful landcover classification. The remaining image pixels are then assigned to one of the aggregated cluster groups by use of a per-pixel classifier. Since the cluster aggregation process frequently creates groups with multivariate shapes ill-suited for parametric classifiers, there has been renewed interest in nonparametric methods for the task. This research reports the results of an experiment conducted on six Landsat TM images to compare the accuracy of pixel assignment performed by four nearest neighbor classifiers and two neural network paradigms in a semisupervised context. In all the experiments, both the neighbor-based classifiers and neural networks assigned pixels with higher accuracy than the maximum likelihood approach. There was little substantive difference in accuracy among the neighborhood-based classifiers, but the feed-forward network was significantly superior to the probabilistic neural network. The feed-forward network classifier generally produced the highest accuracy on all six of the images, but it was not significantly better than the accuracy produced by the best neighbor-based classifier.
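The per-pixel assignment step of the semisupervised workflow can be sketched as a nearest-neighbour rule; this is a generic 1-NN assignment with invented group names and exemplar vectors, not any of the specific classifiers compared in the paper.

```python
def assign_pixels(pixels, group_exemplars):
    """Label each remaining pixel with the aggregated cluster group that
    contains its nearest exemplar (1-nearest-neighbour, squared Euclidean).
    group_exemplars maps a group name to a list of exemplar feature vectors."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    labels = []
    for p in pixels:
        best = min(group_exemplars,
                   key=lambda g: min(sqdist(p, e) for e in group_exemplars[g]))
        labels.append(best)
    return labels
```

Because the rule depends only on distances to exemplars, it imposes no parametric shape on a group, which is why such neighbour-based methods handle the irregular multivariate shapes produced by hand-aggregated clusters.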