Proceedings Volume 5200

Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation VI

Bruno Bosacchi, David B. Fogel, James C. Bezdek
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 30 December 2003
Contents: 9 Sessions, 23 Papers, 0 Presentations
Conference: Optical Science and Technology, SPIE's 48th Annual Meeting 2003
Volume Number: 5200

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Keynote Paper
  • Session 1
  • Session 2
  • Session 3
  • Session 4
  • Session 5
  • Session 6
  • Session 7
  • Session 8
Keynote Paper
Routine human-competitive machine intelligence by means of genetic programming
John R. Koza, Matthew J. Streeter, Martin Keane
Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.
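The generate-evaluate-vary loop that genetic programming automates can be illustrated with a deliberately tiny symbolic-regression sketch. This is a generic, minimal example and not the authors' system (which evolves circuit topologies and controllers); the target function, operator set, and selection scheme are all illustrative choices.

    # Minimal tree-based genetic programming sketch (illustrative only):
    # evolve an arithmetic expression that approximates f(x) = x*x + x.
    import random
    import operator

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    TERMINALS = ['x', 1.0]

    def random_tree(depth=3):
        """Grow a random expression tree: nested (op, left, right) tuples."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        """Sum of squared errors against the target on a few sample points."""
        xs = [i / 10.0 for i in range(-10, 11)]
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

    def mutate(tree):
        """Replace a random subtree with a freshly grown one."""
        if not isinstance(tree, tuple) or random.random() < 0.2:
            return random_tree(2)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    population = [random_tree() for _ in range(200)]
    for generation in range(30):
        population.sort(key=fitness)
        survivors = population[:50]                      # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(150)]   # offspring by mutation

    best = min(population, key=fitness)
    print("best fitness:", fitness(best))
    print("best tree:", best)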
Session 1
Achieving laser control of quantum phenomena: balancing computational and experimental capabilities
An increasing number of experiments have demonstrated that shaped laser pulses can successfully manipulate a broad variety of quantum phenomena. These achievements are drawing on a balance of computational design and high duty cycle closed loop learning control experiments. This paper will consider the special issues involved in best utilizing these capabilities to meet the control objectives. In addition, several topics in the analysis of controlled quantum phenomena will also be considered that draw on these same capabilities.
Session 2
Computational intelligence in bacterial spore detection and identification
Bruno Bosacchi, Manjusha Mehendale, Warren S. Warren, et al.
Optical techniques are very promising for detecting and identifying bacterial spores. They are potentially superior to the existing “wet chemistry” approaches regarding several important features of an effective alarm system, such as speed, in-field use, continuous monitoring, and reliability. In this paper we discuss the role that computational intelligence (CI) can play in the control and optimization of optical experiments, and in the analysis and interpretation of the large amount of data they provide. After a brief discussion of the use of CI in the classification of optical spectra, we introduce the recently proposed FAST CARS (Femtosecond Adaptive Spectroscopic Techniques for Coherent Anti-Stokes Raman Scattering) technique. Here the role of CI is essential: using an adaptive feedback approach based on genetic algorithms, the hardware system evolves and organizes itself to optimize the intensity of the CARS signal.
Evolutionary pulse shaping in CARS signal enhancement
Manjusha Mehendale, Bruno Bosacchi, Warren S. Warren, et al.
We discuss the role of evolutionary adaptive algorithms in shaping femtosecond pulses with an eye toward their use in the quantum control of optical properties. In particular, we report preliminary results from an ongoing attempt to implement the recently proposed FAST CARS technique for the detection and identification of bacterial spores. In the initial phase of this project, we are studying the CARS signal from a deuterated water (D2O) solution of dipicolinic acid (DPA), an important constituent of the spores, and we observe an enhancement of the CARS intensity associated with the DPA vibrational resonance at ~3000 cm-1. This effect is weak but significant. It is premature to ascribe it to any particular mechanism, but its detection encourages its optimization by searching the space of all possible pulse shapes via an evolutionary feedback algorithm.
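The closed-loop optimization described above can be sketched schematically as follows. The encoding of a pulse as a vector of spectral phases, the population size, and the crossover/mutation settings are illustrative assumptions, and measure_cars_intensity is a placeholder for the real feedback path (pulse shaper, laser, CARS detector).

    # Hedged sketch of closed-loop evolutionary pulse shaping: a genetic
    # algorithm proposes spectral phase masks, the "experiment" scores them.
    import numpy as np

    N_PIXELS = 128          # assumed number of pulse-shaper pixels
    POP, GENS = 30, 50
    rng = np.random.default_rng(0)

    def measure_cars_intensity(phase_mask):
        """Placeholder for the real feedback signal: in the experiment this
        would program the shaper, fire the laser, and read the CARS detector.
        Here we simply reward smooth masks so the loop runs end to end."""
        return -np.sum(np.diff(phase_mask) ** 2)

    population = [rng.uniform(-np.pi, np.pi, N_PIXELS) for _ in range(POP)]
    for gen in range(GENS):
        scores = [measure_cars_intensity(p) for p in population]
        order = np.argsort(scores)[::-1]                 # best first
        parents = [population[i] for i in order[:POP // 2]]
        children = []
        for _ in range(POP - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, N_PIXELS)              # one-point crossover
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            child += rng.normal(0, 0.05, N_PIXELS)       # small mutation
            children.append(child)
        population = parents + children

    best = max(population, key=measure_cars_intensity)
    print("best measured intensity:", measure_cars_intensity(best))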
Session 3
New results on evolving strategies in chess
David B. Fogel, Tim Hays
Evolutionary algorithms have been used for learning strategies in diverse games, including Othello, backgammon, checkers, and chess. The paper provides a brief background on efforts in evolutionary learning in chess, and presents recent results on using coevolution to learn strategies by improving existing nominal strategies. Over 10 independent trials, each executed for 50 generations, a simple evolutionary algorithm was able to improve a nominal strategy that was based on material value and positional value adjustments associated with individual pieces. The improvement was estimated at over 284 rating points, taking a Class A player and evolving it into an expert.
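The coevolutionary idea, perturbing the nominal piece values and keeping variants that score well in play, can be sketched as follows. play_game is a placeholder for a real engine-versus-engine match, and the baseline values, population size, and match lengths are illustrative, not those used in the paper.

    # Sketch of coevolving chess material values around a nominal strategy.
    # play_game() is a stand-in for a real engine-vs-engine match.
    import random

    NOMINAL = {'P': 1.0, 'N': 3.0, 'B': 3.0, 'R': 5.0, 'Q': 9.0}

    def mutate(values, sigma=0.1):
        """Gaussian perturbation of each piece value."""
        return {p: max(0.1, v + random.gauss(0, sigma)) for p, v in values.items()}

    def play_game(white_values, black_values):
        """Placeholder: should return +1 if white wins, -1 if black wins, 0 draw.
        A real implementation would plug the value tables into an alpha-beta
        searcher; here we return a random result so the loop is runnable."""
        return random.choice([1, 0, -1])

    population = [mutate(NOMINAL) for _ in range(20)]
    for generation in range(50):
        scored = []
        for candidate in population:
            # each candidate plays a short match against the nominal strategy
            score = sum(play_game(candidate, NOMINAL) for _ in range(6))
            scored.append((score, candidate))
        scored.sort(key=lambda sc: sc[0], reverse=True)
        parents = [c for _, c in scored[:10]]
        population = parents + [mutate(random.choice(parents)) for _ in range(10)]

    print("example evolved values:", population[0])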
Realistic avatar eye and head animation using a neurobiological model of visual attention
Laurent Itti, Nitin Dhavale, Frederic Pighin
We describe a neurobiological model of visual attention and eye/head movements in primates, and its application to the automatic animation of a realistic virtual human head watching an unconstrained variety of visual inputs. The bottom-up (image-based) attention model is based on the known neurophysiology of visual processing along the occipito-parietal pathway of the primate brain, while the eye/head movement model is derived from recordings in freely behaving rhesus monkeys. The system is successful at autonomously saccading towards and tracking salient targets in a variety of video clips, including synthetic stimuli, real outdoor scenes, and gaming console outputs. The resulting virtual human eye/head animation yields realistic rendering of the simulation results, both suggesting applicability of this approach to avatar animation and reinforcing the plausibility of the neural model.
Session 4
Multimodal approach to feature extraction for image and signal learning problems
Damian R. Eads, Steven J. Williams, James Theiler, et al.
We present ZEUS, an algorithm for extracting features from images and time series signals. ZEUS is designed to solve a variety of machine learning problems, including time series forecasting, signal classification, and image and pixel classification of multispectral and panchromatic imagery. An evolutionary approach is used to extract features from a near-infinite space of possible combinations of nonlinear operators. Each problem type (i.e. signal or image, regression or classification, multiclass or binary) has its own set of primitive operators. We employ fairly generic operators, but note that the choice of which operators to use provides an opportunity to consult with a domain expert. Each feature is produced from a composition of some subset of these primitive operators. The fitness for an evolved set of features is given by the performance of a back-end classifier (or regressor) on training data. We demonstrate our multimodal approach to feature extraction on a variety of problems in remote sensing. The performance of this algorithm will be compared to standard approaches, and the relative benefit of various aspects of the algorithm will be investigated.
Biologically motivated analog-to-digital conversion
Eugene K. Ressler, Barry L. Shoop, Brian C. Watson, et al.
Biologically-motivated analog-to-digital (A/D) conversion considers the charge-fire cycles of neurons in biological systems as binary oversampled A/D conversion processes. Feedback mechanisms have been hypothesized that coordinate charge-fire cycles in a manner that suppresses noise in the signal baseband of the power spectrum of output spikes, also a central goal of A/D converter design. Biological systems succeed admirably despite the slow and imprecise characteristics of individual neurons. In A/D converters of very high speed and precision, where electronic/photonic devices also appear slow and imprecise, neural architectures offer a path for advancing the performance frontier. In this work, we provide a new analysis framework and simulation results directed toward that goal.
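The charge-fire analogy maps naturally onto oversampled sigma-delta modulation: the modulator integrates its input, "fires" a one-bit output when the accumulator crosses a threshold, and subtracts the decision via feedback, which is what shapes quantization noise out of the signal baseband. The sketch below is a generic first-order modulator for illustration, not the neural architecture analyzed in the paper.

    # First-order sigma-delta modulator: a generic illustration of binary
    # oversampled A/D conversion with noise shaping (not the neural design
    # described in the paper).
    import numpy as np

    fs, f_signal, n = 64_000, 100.0, 64_000     # oversampled rate, tone, samples
    t = np.arange(n) / fs
    x = 0.5 * np.sin(2 * np.pi * f_signal * t)  # analog input in [-1, 1]

    integrator, prev_out, bits = 0.0, 0.0, np.empty(n)
    for i, sample in enumerate(x):
        integrator += sample - prev_out          # accumulate input minus feedback
        out = 1.0 if integrator >= 0 else -1.0   # one-bit quantizer ("fire")
        bits[i] = out
        prev_out = out

    # Decimate: a simple moving-average filter recovers the baseband signal.
    window = 64
    reconstructed = np.convolve(bits, np.ones(window) / window, mode='same')
    print("baseband reconstruction error (rms):",
          np.sqrt(np.mean((reconstructed - x) ** 2)))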
Implications of the Turing machine model of computation for processor and programming language design
A computational process is classified according to the theoretical model that is capable of executing it: computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming.

Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commences, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline (last allocated = first deallocated), an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc.) was computable at execution time. A more general requirement of a Turing machine process is for code generation at run time; this mandates access to the source language processor (compiler/interpreter) during execution of the process.

This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating system boot-time) memory allocation, and were thus limited to executing FA processes.
The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial and error executions, modifying the program’s compile-time constants (array dimensions) to iterate towards the values required at run-time by the data being processed. This era of trial and error computing still persists; it pervades the culture of current (2003) computing practice.
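The data-invariance argument can be made concrete with a small sketch: each subprogram below sizes its local workspace from the data it actually receives, at the moment it is entered, and that workspace disappears when the call returns, which is exactly the stack discipline described above. Python allocates dynamically by default, so this illustrates the principle rather than ALGOL-60 semantics; the function names are invented for the example.

    # Data-invariant programming sketch: local storage sized at call time
    # from the data, allocated on entry and released on return (stack order).
    def smooth(signal, width):
        """Subprogram whose intermediate storage depends on its arguments."""
        workspace = [0.0] * len(signal)        # allocated when the call begins
        for i in range(len(signal)):
            lo, hi = max(0, i - width), min(len(signal), i + width + 1)
            workspace[i] = sum(signal[lo:hi]) / (hi - lo)
        return workspace                       # local storage released on return

    def analyse(dataset):
        """Caller: the same program text handles any input size, with no
        recompilation against different compile-time array bounds."""
        return [smooth(series, width=2) for series in dataset]

    print(analyse([[1.0, 2.0, 3.0], [4.0] * 10]))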
Data modeling of network dynamics
Holger M. Jaenisch, James W. Handley, Jeffery P. Faucheux, et al.
This paper highlights Data Modeling theory and its use for text data mining as a graphical network search engine. Data Modeling is then used to create a real-time filter capable of monitoring network traffic down to the port level for unusual dynamics and changes in business as usual. This is accomplished in an unsupervised fashion without a priori knowledge of abnormal characteristics. Two novel methods for converting streaming binary data into a form amenable to graphics based search and change detection are introduced. These techniques are then successfully applied to 1999 KDD Cup network attack data log-on sessions to demonstrate that Data Modeling can detect attacks without prior training on any form of attack behavior. Finally, two new methods for data encryption using these ideas are proposed.
Using local response analysis to reduce the computational cost of image registration with the hybrid genetic algorithm
The hybrid genetic algorithm (HGA) is used to solve the image registration problem, formulated as the optimization problem of finding the components of a parameter vector that minimize the least-squared difference between images. Analysis of the image's local response helps reduce the computational cost of the local search and of the genetic operations of selection and recombination. Unit variations of the components of the parameter vector are applied to the images subject to registration. The corresponding variations of the objective function in small localities form an image response matrix. The reproduction phase of the algorithm includes a two-phase operation of local search and correction performed on the set of the best chromosomes in the reproduction pool. The step size of the local search is modified according to the values of the response matrix in the localities where the search is performed, which reduces the computational cost of the correction averaged over all iterations. The crossover and mutation phases of the HGA are based on the comparison of the response matrices of the images. The operation of correlation is applied to the response matrices of the reference and the registered images. The result serves as a probability matrix reducing the entire search space to subspaces that most likely contain the optimal solution to the problem. The operations of selection and recombination are performed only on those subspaces. Computational experiments with 2D grayscale images show that in some cases the proposed approach can significantly reduce the computational cost of image registration with the hybrid genetic algorithm.
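A stripped-down version of the underlying optimization, without the response-matrix machinery the paper adds, might look like the following: a genetic algorithm searches translation parameters that minimize the squared difference between images, and each elite chromosome is refined by a unit-step local search. The synthetic test image and all parameter settings are illustrative.

    # Hedged sketch: genetic algorithm with a local-search ("hybrid") step for
    # translational image registration by least-squared difference. The
    # response-matrix acceleration described in the abstract is omitted.
    import numpy as np

    rng = np.random.default_rng(1)
    yy, xx = np.mgrid[0:64, 0:64]
    reference = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 100.0)  # smooth test image
    true_shift = (5, -3)
    moving = np.roll(reference, true_shift, axis=(0, 1))            # shifted copy

    def cost(params):
        dy, dx = int(round(params[0])), int(round(params[1]))
        shifted = np.roll(moving, (-dy, -dx), axis=(0, 1))
        return float(np.sum((shifted - reference) ** 2))

    def local_search(params, step=1):
        """Greedy unit-step refinement of a chromosome (the 'hybrid' part)."""
        best = params
        for d in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            trial = (best[0] + d[0], best[1] + d[1])
            if cost(trial) < cost(best):
                best = trial
        return best

    population = [tuple(rng.integers(-10, 11, 2)) for _ in range(30)]
    for gen in range(40):
        population.sort(key=cost)
        elite = [local_search(p) for p in population[:10]]
        offspring = []
        for _ in range(20):
            a, b = rng.choice(10, 2, replace=False)
            child = (elite[a][0], elite[b][1])              # recombination
            child = (child[0] + int(rng.integers(-1, 2)),   # small mutation
                     child[1] + int(rng.integers(-1, 2)))
            offspring.append(child)
        population = elite + offspring

    best = min(population, key=cost)
    print("estimated shift:", tuple(int(v) for v in best), "true shift:", true_shift)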
Session 5
Soft computing and metaheuristics: using knowledge and reasoning to control search and vice-versa
Meta-heuristics are heuristic procedures used to tune, control, guide, allocate computational resources to, or reason about object-level problem solvers in order to improve their quality, performance, or efficiency. Offline meta-heuristics define the best structural and/or parametric configurations for the object-level model, while online meta-heuristics generate run-time corrections for the behavior of the same object-level solvers. Soft Computing is a framework in which we encode domain knowledge to develop such meta-heuristics. We explore the use of meta-heuristics in three application areas: a) control; b) optimization; and c) classification. In the context of control problems, we describe the use of evolutionary algorithms to perform offline parametric tuning of fuzzy controllers, and the use of fuzzy supervisory controllers to perform online mode selection and output interpolation. In the area of optimization, we illustrate the application of fuzzy controllers to manage the transition from exploration to exploitation in evolutionary algorithms that solve the optimization problem. In the context of discrete classification problems, we have leveraged evolutionary algorithms to tune knowledge-based classifiers and maximize their coverage and accuracy.
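One concrete instance of the optimization use case, a fuzzy supervisor steering an evolutionary algorithm between exploration and exploitation, can be sketched as follows. The membership functions and the two-rule base are invented for illustration and are not the controllers described in the paper.

    # Sketch: a two-rule fuzzy controller adapts the mutation rate of an EA
    # from population diversity (illustrative memberships and rules).
    import statistics

    def membership_low(diversity, lo=0.0, hi=1.0):
        """Degree to which diversity is 'low' (linear ramp, assumed shape)."""
        return max(0.0, min(1.0, (hi - diversity) / (hi - lo)))

    def fuzzy_mutation_rate(diversity):
        low = membership_low(diversity)
        high = 1.0 - low
        # Rule 1: IF diversity is LOW  THEN mutation rate is HIGH (keep exploring)
        # Rule 2: IF diversity is HIGH THEN mutation rate is LOW  (exploit)
        return (low * 0.30 + high * 0.02) / (low + high)   # weighted defuzzification

    def diversity(population):
        return statistics.pstdev(population)

    # toy usage: diversity shrinks as a (fake) population converges
    for pop in ([0.1, 0.9, 0.5, 0.3], [0.45, 0.55, 0.5, 0.48], [0.5, 0.5, 0.5, 0.5]):
        d = diversity(pop)
        print(f"diversity={d:.3f} -> mutation rate={fuzzy_mutation_rate(d):.3f}")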
Avoiding the accuracy-simplicity trade-off in pattern recognition
Statistical pattern recognition begins with a training set of what we hope are fair samples from multiple sets and seeks to devise a rule whereby new samples (not in the training set) are likely to be classified accurately. In so doing it seeks simple classifiers not likely to be attending either to noise or the extraneous in the training set examples, but it also seeks accuracy in classifying members of the training set. It is provable that the optimum lies in a compromise between accuracy and simplicity. I show here a way to achieve both good things at once and hence free pattern recognition of this crippling central tradeoff.
Session 6
Computational and experimental studies of spin chaos in magnetic resonance
Two prevalent interactions in high-field solution magnetic resonance, radiation damping and the dipolar field, are shown to generate instability in spin systems with bulk magnetization. The instability is studied numerically by computing the largest Lyapunov exponent associated with the long-time spin dynamics, which is shown to be positive. An algorithm for investigating dynamical fluctuations in a spatiotemporally chaotic spin system is presented, and the finite-time largest Lyapunov exponents are calculated to gain insight into the growth rates of the system as a function of time. Numerical simulations and experimental results are compared to account for the appearance of experimental anomalies observed in multiple spin echo and pulsed gradient spin echo experiments.
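The largest-Lyapunov-exponent estimate mentioned above follows a standard recipe: propagate a reference and a slightly perturbed trajectory, renormalize their separation at every step, and average the logarithm of the stretching. The sketch below applies that recipe to the logistic map, which stands in for the spin dynamics; a positive result signals chaos.

    # Generic largest-Lyapunov-exponent estimate by the two-trajectory /
    # renormalization method; the logistic map stands in for the spin system.
    import math

    def step(x, r=3.9):
        return r * x * (1.0 - x)          # stand-in dynamics (logistic map)

    def largest_lyapunov(x0=0.4, d0=1e-9, n_steps=20000, transient=1000):
        x, y = x0, x0 + d0
        log_sum = 0.0
        for i in range(n_steps):
            x, y = step(x), step(y)
            d = abs(y - x)
            if d == 0.0:                   # trajectories collapsed numerically
                y = x + d0
                continue
            if i >= transient:
                log_sum += math.log(d / d0)
            y = x + d0 * (y - x) / d       # renormalize separation to d0
        return log_sum / (n_steps - transient)

    print("largest Lyapunov exponent ~", largest_lyapunov())  # > 0 signals chaos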
Implementation of linguistic models by holographic technique
In this paper we consider a linguistic model as an algebraic model and restrict our consideration to semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe to a machine a way of solving a problem based on the human's knowledge and experience. Imprecise words such as "big", "very big", "not very big", etc. can be used to represent human knowledge. Technically, the problem is to match the metric scale used by the technical device with the linguistic scale intuitively formed by the person. We develop an algebraic description of a 4-f Fourier-holography setup using a triangular-norms-based approach. In the model we use the Fourier duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider a General Modus Ponens rule implementation to define the semantic operators that are adequate to the setup. We consider scales formed in both the +1 and -1 orders of diffraction. We use the representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use a Lorentz function to approximate linguistic labels. We use an example of medical diagnostics for experimental illustration of reasoning on the linguistic scale.
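The algebra referred to above (t-norms, their dual t-conorms, and De Morgan's law under the standard negation n(a) = 1 - a) is easy to state in code. The sketch below uses the common min/max and product/probabilistic-sum pairs as examples; these are generic fuzzy-logic operators, not the holographically implemented ones of the paper.

    # T-norms, dual t-conorms and De Morgan duality with the standard
    # negation n(a) = 1 - a (generic fuzzy-logic operators, illustrative).
    def t_norm_min(a, b):        return min(a, b)
    def t_conorm_max(a, b):      return max(a, b)
    def t_norm_product(a, b):    return a * b
    def t_conorm_probsum(a, b):  return a + b - a * b

    def negate(a):               return 1.0 - a

    # De Morgan duality: S(a, b) = n(T(n(a), n(b)))
    for a in (0.0, 0.3, 0.7, 1.0):
        for b in (0.0, 0.5, 1.0):
            assert abs(t_conorm_max(a, b) -
                       negate(t_norm_min(negate(a), negate(b)))) < 1e-12
            assert abs(t_conorm_probsum(a, b) -
                       negate(t_norm_product(negate(a), negate(b)))) < 1e-12
    print("De Morgan duality holds for both pairs on the sampled grid")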
Session 7
Soft computing techniques in network packet video
A new approach to low-bandwidth network packet video quality maximization is proposed, based on software agents and a global optimization algorithm that takes into account environmental factors (noise, multi-path fading), compression ratio, bit-error correction, maximum available bandwidth, video format, and encryption. This is important for 2G wireless RF cellular GSM visual communication, for other low-bandwidth homeland security visual applications, and for civilian RF WLANs.
Soft computing and minimization/optimization of video/imagery redundancy
Tomasz P. Jannson, Andrew A. Kostrzewski, Wenjian Wang, et al.
This paper investigates the application of soft computing techniques to minimize tactical video/imagery redundancy and to enable video and high-resolution still imagery transmission through low-bandwidth tactical radio channels in the Future Combat System, including spatial, temporal event, and shape extraction for object-oriented processing.
Fuzzy system solution for digital image watermarking
David J. Coumou, Athimoottil Mathew
The proposed spatial-domain watermarking system is based on fuzzy logic and was designed with the intent of embedding watermark features so that they are undetectable to the human visual system. The objective is to manipulate the host image to a maximum that remains below the threshold of detection, using as a basis the texture and contrast surrounding the insertion subimage. To achieve this objective, the design of the watermarking scheme targets three of the five perceptual holes of the human visual system. The resulting watermarking scheme was evaluated using image processing techniques typical of an accidental attack process, and the embedded watermark was also assessed by a limited sample of human observers. The watermarking scheme demonstrated excellent resilience to image compression and the ability to be implemented in images over a varied range of spectral characteristics.
Session 8
Automatic screening and multifocus fusion methods for diatom identification
Manuel Forero, Filip Sroubek, Jan Flusser, et al.
The first part of this paper presents a new method for the classification and screening of diatoms in images taken from water samples. The technique can be split into three main stages: segmentation, object feature extraction, and classification. The segmentation part consists of two modified thresholding and contour tracing techniques designed to detect the majority of objects present in the sample. From the segmented objects, several features were extracted and analyzed. For the classification, a diatom training set was considered and the centroids, means, and variances of four different classes were found. For the identification process, diatoms were classified according to their Mahalanobis distance. The results show the method's ability to select at least 80% of usable diatoms from images contaminated with debris. Secondly, full automation of the diatom classification is achieved when multi-focal microscopy is utilized for water sample acquisition. In this case, a necessary preprocessing step is image fusion. A novel wavelet-based fusion method proposed here returns a sharp image that can be used directly for segmentation. For a better understanding of the diatom shape, a 2.5D reconstruction is given.
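The classification step described above, assigning each segmented object to the class whose centroid is nearest in Mahalanobis distance, can be sketched generically as follows; the two-dimensional toy features stand in for the actual diatom descriptors.

    # Generic nearest-centroid classification by Mahalanobis distance
    # (toy 2-D features stand in for the extracted diatom descriptors).
    import numpy as np

    rng = np.random.default_rng(2)
    # toy training set: two classes with different means and covariances
    classes = {
        "class_A": rng.normal([0.0, 0.0], [1.0, 0.3], size=(100, 2)),
        "class_B": rng.normal([3.0, 1.0], [0.4, 1.2], size=(100, 2)),
    }
    models = {name: (feats.mean(axis=0), np.linalg.inv(np.cov(feats.T)))
              for name, feats in classes.items()}

    def mahalanobis(x, mean, inv_cov):
        d = x - mean
        return float(np.sqrt(d @ inv_cov @ d))

    def classify(x):
        return min(models, key=lambda name: mahalanobis(x, *models[name]))

    print(classify(np.array([0.2, -0.1])))   # expected: class_A
    print(classify(np.array([2.8, 1.4])))    # expected: class_B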
On the 1/f model for cloud generation
It is generally accepted that cloud-like images have a 1/f^γ power spectrum. We investigate whether other spectra also produce cloud-like images, and we show by numerical simulation that this hypothesis is true. We also show how systems defined by fractional differential equations can generate such spectra.
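The standard way to synthesize such images is to assign a power-law amplitude 1/f^γ and random phases in the Fourier domain and then invert the transform; a sketch is given below, with γ and the image size chosen arbitrarily for illustration.

    # Synthesize a cloud-like image with a 1/f^gamma power spectrum:
    # power-law amplitudes, random phases, inverse FFT (illustrative parameters).
    import numpy as np

    def cloud(size=256, gamma=1.8, seed=0):
        rng = np.random.default_rng(seed)
        fy = np.fft.fftfreq(size)[:, None]
        fx = np.fft.fftfreq(size)[None, :]
        f = np.sqrt(fx ** 2 + fy ** 2)
        f[0, 0] = 1.0                                   # avoid division by zero at DC
        amplitude = 1.0 / f ** gamma
        phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
        spectrum = amplitude * np.exp(1j * phase)
        image = np.real(np.fft.ifft2(spectrum))
        image -= image.min()                            # normalize to [0, 1]
        return image / image.max()

    img = cloud()
    print(img.shape, img.min(), img.max())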
Expert classifiers and the ordered veracity-experience response (OVER) curve
The training of good generalizations must mitigate both memorization and arrogance. Memorization is characterized as being too timid in associating new observations with previous experience. Contrarily, arrogance is being too bold. In classification problems, memorization is traditionally assessed via error matrices and iterative error-based techniques such as cross validation. These techniques, however, do nothing to assess arrogance in classification. To identify arrogant classifications, we propose a confusion-based figure of merit which we shall call the ordered veracity-experience response curve, or OVER curve. To produce the OVER curve, one must employ expert classifiers. An expert is a special classifier - a relational computation with not only a mechanism for decision making but also a quantifiable skill level. In this paper, we define the elements of both the expert classifier and OVER curve and, then, demonstrate their utility using the multilayer perceptron.
Fuzzy image segmentation for lung nodule detection
Yue Shen, Ravi T. Sankar, Wei Qian, et al.
This paper evaluates three fuzzy image segmentation algorithms in a lung nodule detection scenario: a fuzzy entropy-based method, the multivariate fuzzy C-means method (MFCM), and the adaptive fuzzy C-means method (AFCM), comparing them with the iterative threshold selection method. The experimental results show that all three methods outperform the iterative threshold selection method. The two fuzzy C-means clustering based algorithms achieve better segmentation performance without losing true positives. However, fuzzy entropy-based image segmentation removes false positives at the cost of losing some true positives, which is a risky approach and hence is not recommended for lung nodule detection. Moreover, although AFCM significantly outperforms MFCM in true positive detection, in terms of TPR/FP, MFCM is comparable to AFCM at the 0.95 significance level, since AFCM introduces more false positives than MFCM.
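For reference, the plain fuzzy C-means alternation (membership update, then weighted centroid update) that the MFCM and AFCM variants build on is sketched below on one-dimensional intensity data; the multivariate and adaptive extensions evaluated in the paper are not reproduced here.

    # Plain fuzzy C-means on 1-D intensities (the baseline that the MFCM and
    # AFCM variants extend); illustrative data and parameters.
    import numpy as np

    def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        u = rng.random((c, x.size))
        u /= u.sum(axis=0)                               # memberships sum to 1
        for _ in range(n_iter):
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)          # weighted centroids
            dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
            u = 1.0 / (dist ** (2.0 / (m - 1.0)))
            u /= u.sum(axis=0)                           # normalized memberships
        return centers, u

    # toy bimodal intensities (background-like vs. nodule-like values)
    x = np.concatenate([np.random.default_rng(1).normal(0.2, 0.05, 300),
                        np.random.default_rng(2).normal(0.8, 0.05, 50)])
    centers, u = fuzzy_c_means(x)
    print("cluster centers:", np.sort(centers))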
Density-based unsupervised classification for spherical objects
Human interpreters are very sensitive to spatial information in supervised classification. The well-known ISODATA algorithm for unsupervised classification requires many parameters to be set by a human. Other unsupervised algorithms focus on spectral information, but spatial information is lost in the process. Biased sampling is one good approach to obtaining information about the global structure. For local structures, many techniques have been used; for example, similarity and local density are discussed in many papers. In biased sampling, images are divided into many l x l patches and a sample pixel is selected from each patch. Similarity at a point p, denoted by sim(p), measures the change of gray level between point p and its neighborhood N(p). In this article we introduce a method that uses biased sampling to combine spectral and spatial information. We use similarity and local popularity in selecting sample points to get better results. To use similarity (sim(p) ≤ δ), one must determine δ. One way is to make it adaptive so that a sample point can be selected from each patch. Here, after normalization, we choose a sample point with a minimum value of [equation] for some positive numbers α and β. No precondition on δ is needed, and the selected pixel is a better representative, especially near the border of an object. Kernel estimators are employed to obtain a smooth density approximation before final classification. Experiments have been conducted using the proposed methods and the results are satisfactory.