Proceedings Volume 8059

Evolutionary and Bio-Inspired Computation: Theory and Applications V

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 11 May 2011
Contents: 7 Sessions, 19 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2011
Volume Number: 8059

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8059
  • Keynote Session I
  • Layered-Sensing Intelligence
  • Knowledge Extraction
  • Medical Imaging
  • Image Intelligence
  • Computer/Network Security
Front Matter: Volume 8059
Front Matter: Volume 8059
This PDF file contains the front matter associated with SPIE Proceedings Volume 8059, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Keynote Session I
Using concepts from biology to improve problem-solving methods
Erik D. Goodman, Edward J. Rothwell, Ronald C. Averill
Observing nature has been a cornerstone of engineering design. Today, engineers look not only at finished products, but imitate the evolutionary process by which highly optimized artifacts have appeared in nature. Evolutionary computation began by capturing only the simplest ideas of evolution, but today, researchers study natural evolution and incorporate an increasing number of its concepts in order to evolve solutions to complex engineering problems. At the new BEACON Center for the Study of Evolution in Action, studies in the lab, in the field, and in silico are laying the groundwork for new tools for evolutionary engineering design. This paper, which accompanies a keynote address, describes various steps in the development and application of evolutionary computation, particularly as regards sensor design, and sets the stage for future advances.
Layered-Sensing Intelligence
PADF RF localization experiments with multi-agent caged-MAV platforms
Christopher Barber, Miguel Gates, Rastko Selmic, et al.
This paper summarizes preliminary RF direction-finding results generated within an AFOSR-funded testbed facility recently developed at Louisiana Tech University. This facility, the Louisiana Tech University Micro-Aerial Vehicle/Wireless Sensor Network (MAVSeN) Laboratory, has recently acquired a number of state-of-the-art MAV platforms that enable us to analyze, design, and test some of our recent results in the area of multi-platform position-adaptive direction finding (PADF) [1][2] for localization of RF emitters in challenging embedded multipath environments. The paper describes the MAVSeN Laboratory and presents preliminary results from implementing the PADF algorithm on mobile platforms. This novel approach to multi-platform RF direction finding investigates iterative path-loss-based metrics (i.e., path-loss-exponent estimates) measured across multiple platforms in order to develop a control law that robotically/intelligently adapts (i.e., self-adjusts) the position of each distributed, cooperative platform. The body of the paper summarizes our recent PADF results and discusses state-of-the-art sensor-mote technologies as applied to the development of sensor-integrated caged-MAV platforms for PADF applications. We also discuss recent experimental results that incorporate sample approaches to real-time single-platform data pruning, as part of a discussion of potential approaches to refining the basic PADF technique so that it integrates and performs distributed self-sensitivity and self-consistency analysis with distributed robotic/intelligent features. These techniques are extracted in analytical form from a parallel study denoted as "PADF RF Localization Criteria for Multi-Model Scattering Environments." The focus here is on developing and reporting specific approaches to self-sensitivity and self-consistency within this experimental PADF framework via the exploitation of specific single-agent caged-MAV trajectories that are unique to this experiment set.
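The path-loss-exponent metric at the heart of PADF can be illustrated with a short sketch. This is not the paper's implementation; it simply fits the standard log-distance model RSSI(d) = P0 - 10*n*log10(d/d0) to received-signal measurements by least squares, assuming the reference power P0 and reference distance d0 are known.

```python
import numpy as np

def estimate_path_loss_exponent(d, rssi, d0=1.0, p0=-40.0):
    """Least-squares estimate of the path-loss exponent n from the
    log-distance model RSSI(d) = p0 - 10*n*log10(d/d0)."""
    x = -10.0 * np.log10(np.asarray(d, dtype=float) / d0)
    y = np.asarray(rssi, dtype=float) - p0
    # regression through the origin: n = (x . y) / (x . x)
    return float(x @ y / (x @ x))

# synthetic check: recover n = 2.7 from noisy RSSI samples
rng = np.random.default_rng(0)
d = np.linspace(2.0, 50.0, 40)
rssi = -40.0 - 10.0 * 2.7 * np.log10(d) + rng.normal(0.0, 1.0, d.size)
n_hat = estimate_path_loss_exponent(d, rssi)
```

In a PADF setting, each platform would compute such an estimate from its own measurements and the control law would compare estimates across platforms.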
Boresight calibration of the aerial multi-head camera system
Young-Jin Lee, Alper Yilmaz
This paper introduces a novel geometric constraint for boresight calibration of aerial multi-head camera systems. Using the precise exterior orientation parameters (EOPs) estimated for each physical camera and the surface information of the area of interest, a multi-head camera system provides a synthetic image at each time epoch. The camera EOPs can be computed directly from the navigation solution provided by an onboard GPS/INS system together with the camera-platform geometric calibration parameters, which represent the geometric relationship between the camera heads. For direct acquisition of EOPs from the navigation system, the camera frame and the INS frame must be precisely aligned. Boresight can be defined as the mounting angles between the INS frame and the camera frame. Small but unknown misalignment angles can cause large errors on the ground, so they must be precisely estimated. In this paper, the unknown boresight angles are estimated using the camera-platform geometric calibration parameters as constraints. Since each physical camera of the multi-head camera system is rigidly affixed to the platform, the geometry between the camera frames remains constant. Simulation results show that the constrained method provides better estimation, in terms of both accuracy and precision, than the traditional approach, which does not use the constraint.
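The core of boresight estimation is recovering a fixed rotation between two sensor frames. As a hypothetical illustration (not the paper's constrained estimator), the rotation between matched direction vectors expressed in the INS frame and in the camera frame can be recovered with the Kabsch algorithm:

```python
import numpy as np

def kabsch(a, b):
    """Rotation R (3x3) that best maps the rows of a onto the rows of b
    in the least-squares sense (Kabsch/SVD method)."""
    H = a.T @ b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# synthetic check: recover a small, known yaw-only boresight rotation
rng = np.random.default_rng(1)
R_true = rot_z(np.deg2rad(3.0))
a = rng.normal(size=(100, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)  # unit direction vectors
b = a @ R_true.T                               # same directions, camera frame
R_est = kabsch(a, b)
```

The paper's contribution is the additional constraint tying the heads of the multi-camera platform together; the sketch above shows only the unconstrained building block.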
Initial data sampling in design optimization
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
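A random Latin hypercube of the kind described above can be generated in a few lines. This is a generic sketch of the standard construction, not the specific space-filling variant the paper adopts:

```python
import numpy as np

def latin_hypercube(n, dims, rng=None):
    """One random Latin hypercube sample: n points in [0, 1)^dims with
    exactly one point per axis-aligned stratum in every dimension."""
    rng = np.random.default_rng(rng)
    # jitter a point inside each of the n equal strata, then shuffle
    # the stratum order independently per dimension
    u = (np.arange(n)[:, None] + rng.random((n, dims))) / n
    for j in range(dims):
        rng.shuffle(u[:, j])
    return u

pts = latin_hypercube(8, 2, rng=0)
# stratum index of each sample; every column is a permutation of 0..7
strata = np.floor(pts * 8).astype(int)
```

Because the per-dimension shuffles are independent, two such designs can differ greatly in correlation and space-filling quality, which is exactly the issue the paper's chosen technique addresses.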
A robust regularization algorithm for polynomial networks for machine learning
Holger M. Jaenisch, James W. Handley
We present an improvement to the fundamental Group Method of Data Handling (GMDH) data modeling algorithm that overcomes the parameter sensitivity to novel cases presented to derived networks. We achieve this result by regularizing the output and using a genetic weighting that selects intermediate models that do not exhibit divergence. The result is the derivation of multi-nested polynomial networks following the Kolmogorov-Gabor polynomial that are robust to mean estimators as well as novel exemplars for input. The full details of the algorithm are presented. We also introduce a new method for approximating GMDH in a single regression model using F, H, and G terms that automatically exports the answers as ordinary differential equations. The MathCAD 15 source code for all algorithms and results is provided.
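The building block of a GMDH network is the two-input Kolmogorov-Gabor quadratic, fit layer by layer. A minimal sketch of fitting one such partial descriptor by ordinary least squares (the paper's regularization and genetic weighting are not reproduced here):

```python
import numpy as np

def fit_partial_descriptor(x1, x2, y):
    """Least-squares fit of the two-input Kolmogorov-Gabor quadratic
    y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2,
    the elementary node of a GMDH polynomial network."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# synthetic check: recover known coefficients exactly (noise-free data)
rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=(2, 200))
y = 1.0 + 0.5 * x1 - 2.0 * x2 + 0.25 * x1 * x2
coef = fit_partial_descriptor(x1, x2, y)
```

A full GMDH run would fit such descriptors for every input pair, keep the best-scoring ones on held-out data, and feed their outputs to the next layer.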
A scaled, performance driven evaluation of the layered-sensing framework utilizing polarimetric infrared imagery
The layered sensing framework, in application, provides a useful but complex integration of information sources, e.g. multiple sensing modalities and operating conditions. It is the implied trade-off between sensor fidelity and system complexity that we address here. Abstractly, each sensor/source of information in a layered sensing application can be viewed as a node in the network of constituent sensors. Regardless of the sensing modality, location, scope, etc., each sensor collects information locally to be utilized by the system as a whole for further exploitation. Consequently, the information may be distributed throughout the network and not necessarily coalesced in a central node/location. We present, initially, an analysis of polarimetric infrared data, with two novel features, as one of the input modalities to such a system. We then proceed with statistical and geometric analyses of an example network, thus quantifying the advantages and drawbacks of a specific application of the layered sensing framework.
Knowledge Extraction
Categorification of the layered sensing construct
Kirk Sturtz, Jared Culbertson, Mark E. Oxley, et al.
We propose a mathematical formulation for a layered sensing architecture based on the theory of categories that will allow us to abstractly define agents and their interactions in such a way that we can treat human and machine (or systems of these) agents homogeneously. One particular advantage is that this general formulation will allow the development of multi-resolution analyses of a given situation that is independent of the particular models used to represent a given agent or system of agents. In this paper, we define the model and prove basic facts that will be fundamental in future work. Central to our approach is the integration of uncertainty into our model. Such a framework is necessitated by our desire to define (among other things) measures of alignment and efficacy for systems of heterogeneous agents operating in a diverse and complex environment.
Modeling decision uncertainties in total situation awareness using cloud computation theory
Saleh Zein-Sabatto, Abdulqadir Khoshnaw, Sachin Shetty, et al.
Uncertainty plays a decisive role in the confidence of decisions made about events. In situation awareness, for example, decision-making faces two types of uncertainty: information uncertainty and data uncertainty. Data uncertainty exists due to noise in sensor measurements and is classified as randomness. Information uncertainty is due to the ambiguity of the words used to describe events; this uncertainty is known as fuzziness. Typically, these two types of uncertainty are handled separately using two different theories: randomness is modeled by probability theory, while fuzzy logic is used to address fuzziness. In this paper we used cloud computation theory to treat data randomness and information fuzziness in one single model. First, we described cloud theory and used it to generate one- and two-dimensional cloud models. Second, we used the cloud models to capture and process data randomness and information fuzziness relevant to decision-making in situation awareness. Finally, we applied the models to generate security decisions for the monitoring of a sensitive area. Testing results are reported at the end of the paper.
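A one-dimensional normal cloud model of the standard form can be sketched as follows. It is parameterized by expectation Ex, entropy En, and hyper-entropy He; randomness enters through the Gaussian drop positions, fuzziness through each drop's membership degree. This is a generic illustration of cloud theory, not the paper's specific decision model:

```python
import numpy as np

def normal_cloud_drops(Ex, En, He, n, rng=None):
    """Generate n cloud drops (x, mu) from a 1-D normal cloud model.
    Each drop samples a per-drop entropy En' ~ N(En, He^2), a position
    x ~ N(Ex, En'^2), and a fuzzy membership mu = exp(-(x-Ex)^2/(2 En'^2))."""
    rng = np.random.default_rng(rng)
    En_prime = rng.normal(En, He, n)          # randomness of the entropy itself
    x = rng.normal(Ex, np.abs(En_prime))      # drop positions (randomness)
    mu = np.exp(-((x - Ex) ** 2) / (2.0 * En_prime ** 2))  # membership (fuzziness)
    return x, mu

x, mu = normal_cloud_drops(Ex=0.0, En=1.0, He=0.1, n=5000, rng=3)
```

A two-dimensional model follows by sampling each coordinate the same way; a decision rule can then threshold the joint membership of an observed event.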
Wide-threat detection: recognition of adversarial missions and activity patterns in Empire Challenge 2009
Georgiy Levchuk, Charlotte Shabarekh, Caitlin Furjanic
In this paper, we present results of adversarial activity recognition using data collected in the Empire Challenge 2009 (EC09) exercise. The EC09 experiment provided an opportunity to evaluate our probabilistic spatiotemporal mission-recognition algorithms using data from live airborne and ground sensors. Using ambiguous and noisy data about the locations of entities and motion events on the ground, the algorithms inferred the types and locations of OPFOR activities, including reconnaissance, cache runs, IED emplacements, logistics, and planning meetings. We present a detailed summary of the validation study and recognition-accuracy results. Our algorithms were able to detect the locations and types of over 75% of hostile activities in EC09 while producing 25% false alarms.
Medical Imaging
Graph clustering techniques applied to the glycomic response in glioblastoma cells to treatments with STAT3 phosphorylation inhibition and fetal bovine serum
Robert Görke, Anke Meyer-Bäse, Claudia Plant, et al.
Cancer stem cells (CSCs) represent a very small percentage of the total tumor population; however, they pose a big challenge in treating cancer. Glycans play a key role in cancer therapeutics, since their overexpression can, depending on the glycan type, lead either to cell death or to more invasive metastasis. Two major components, fetal bovine serum (FBS) and STAT3, are known to up- or down-regulate certain glycolipid or phospholipid compositions found in glioblastoma CSCs. Analyzing and understanding the global interactional behavior of lipidomic networks remains a challenging task and cannot be accomplished solely by intuitive reasoning. The present contribution applies graph clustering networks to analyze the functional aspects of certain activators or inhibitors at the molecular level in glioblastoma stem cells (GSCs). This research enhances our understanding of the differences in phenotype changes and of the responses of glycans to certain treatments for the aggressive GSCs, and represents, together with a quantitative phosphoproteomic study [1], the most detailed systems-biology study of GSC differentiation known so far. These new paradigms thus provide unique understanding of the mechanisms involved in GSC maintenance and tumorigenicity, opening a new window onto biomedical frontiers.
Improved computer-aided diagnosis for breast lesions detection in DCE-MRI based on image registration and integration of morphologic and dynamic characteristics
Felix Retter, Claudia Plant, Bernhard Burgeth, et al.
Motion-based artifacts in breast MRI lead to diagnostic misinterpretation; motion correction therefore represents an important prerequisite to automatic lesion detection and diagnosis. In the present paper, we evaluate the performance of a computer-aided diagnosis (CAD) system consisting of motion correction, lesion segmentation, feature extraction, and classification. Several novel feature extraction techniques are proposed and tested in conjunction with motion correction and classification. Our simulation results show that motion compensation combined with Minkowski functionals and a Bayesian classifier can improve lesion detection and classification.
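The Minkowski functionals used as morphologic features can be computed directly for a binary (segmented) 2-D image: area, perimeter, and Euler number. The sketch below is a generic implementation (Euler number via Gray's 2x2 quad-pattern counts, 4-connectivity), not the paper's feature pipeline:

```python
import numpy as np

def minkowski_2d(img):
    """2-D Minkowski functionals of a binary image: area, 4-connected
    perimeter, and Euler number via Gray's 2x2 quad-pattern counts."""
    b = np.pad(np.asarray(img, dtype=bool), 1)
    bi = b.astype(int)
    area = int(b.sum())
    # perimeter: foreground/background transitions along rows and columns
    perim = int(np.abs(np.diff(bi, axis=0)).sum()
                + np.abs(np.diff(bi, axis=1)).sum())
    # Euler number (4-connectivity): (Q1 - Q3 + 2*Qd) / 4 over 2x2 windows
    q = bi[:-1, :-1] + bi[:-1, 1:] + bi[1:, :-1] + bi[1:, 1:]
    diag = ((b[:-1, :-1] & b[1:, 1:] & ~b[:-1, 1:] & ~b[1:, :-1])
            | (b[:-1, 1:] & b[1:, :-1] & ~b[:-1, :-1] & ~b[1:, 1:]))
    euler = (int((q == 1).sum()) - int((q == 3).sum()) + 2 * int(diag.sum())) / 4
    return area, perim, euler

# a filled square (Euler 1) and the same square with a hole (Euler 0)
square = np.zeros((6, 6), dtype=int)
square[1:5, 1:5] = 1
ring = square.copy()
ring[2:4, 2:4] = 0
a1, p1, e1 = minkowski_2d(square)
a2, p2, e2 = minkowski_2d(ring)
```

Applied to a segmented lesion mask, these three numbers summarize size, boundary complexity, and topology, which is what makes them useful morphologic descriptors.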
Image Intelligence
Evolving wavelet and scaling numbers for optimized image compression: forward, inverse, or both? A comparative study
Brendan Babb, Frank Moore, Shawn Aldridge, et al.
The 9/7 wavelet is used for a wide variety of image compression tasks. Recent research, however, has established a methodology for using evolutionary computation to evolve wavelet and scaling numbers describing transforms that outperform the 9/7 under lossy conditions, such as those brought about by quantization or thresholding. This paper describes an investigation into which of three possible approaches to transform evolution produces the most effective transforms. The first approach uses an evolved forward transform for compression, but performs reconstruction using the 9/7 inverse transform; the second uses the 9/7 forward transform for compression, but performs reconstruction using an evolved inverse transform; the third uses simultaneously evolved forward and inverse transforms for compression and reconstruction. Three image sets are independently used for training: digital photographs, fingerprints, and satellite images. Results strongly suggest that it is impossible for evolved transforms to substantially improve upon the performance of the 9/7 without evolving the inverse transform.
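The search over real-valued wavelet and scaling numbers described above can be sketched with a minimal (1+lambda) evolution strategy. This is a generic illustration, not the paper's algorithm: the fitness below is a stand-in (squared distance to the first five CDF 9/7 analysis taps), whereas the real objective would be reconstruction error after quantization or thresholding.

```python
import numpy as np

def evolve_coefficients(fitness, x0, sigma=0.1, lam=8, gens=300, rng=None):
    """(1 + lambda) evolution strategy over a real coefficient vector:
    keep the parent unless a Gaussian-mutated offspring scores better."""
    rng = np.random.default_rng(rng)
    best = np.asarray(x0, dtype=float)
    best_f = fitness(best)
    for _ in range(gens):
        kids = best + rng.normal(0.0, sigma, size=(lam, best.size))
        scores = [fitness(k) for k in kids]
        i = int(np.argmin(scores))
        if scores[i] < best_f:
            best, best_f = kids[i], scores[i]
        sigma *= 0.99  # simple geometric step-size decay
    return best, best_f

# stand-in fitness: distance to the first five 9/7 analysis taps
target = np.array([0.026749, -0.016864, -0.078223, 0.266864, 0.602949])
fit = lambda c: float(np.sum((c - target) ** 2))
best, best_f = evolve_coefficients(fit, np.zeros(5), rng=4)
```

The three configurations the paper compares differ only in which coefficient sets (forward, inverse, or both) are exposed to this kind of search.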
Evolving matched filter transform pairs for satellite image processing
Wavelets provide an attractive method for efficient image compression. For transmission across noisy or bandwidth-limited channels, a signal may be subjected to quantization, in which the signal is transcribed onto a reduced alphabet in order to save bandwidth. Unfortunately, the performance of the discrete wavelet transform (DWT) degrades at increasing levels of quantization. In recent years, evolutionary algorithms (EAs) have been employed to optimize wavelet-inspired transform filters to improve compression performance in the presence of quantization. Wavelet filters consist of a pair of real-valued coefficient sets; one set represents the compression filter while the other set defines the image reconstruction filter. The reconstruction filter is defined as the biorthogonal inverse of the compression filter. Previous research focused upon two approaches to filter optimization. In one approach, the original wavelet filter is used for image compression while the reconstruction filter is evolved by an EA. In the second approach, both the compression and reconstruction filters are evolved. In both cases, the filters are not biorthogonally related to one another. We propose a novel approach to filter evolution. The EA optimizes a compression filter. Rather than using a wavelet filter or evolving a second filter for reconstruction, the reconstruction filter is computed as the biorthogonal inverse of the evolved compression filter. The resulting filter pair retains some of the mathematical properties of wavelets. This paper compares this new approach to existing filter optimization approaches to determine its suitability for the optimization of image filters appropriate for defense applications of image processing.
Image sets for satellite image processing systems
The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize discrete wavelet transform (DWT)-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.
Evolving point-cloud features for gender classification
Brittany Keen, Aaron Fouts, Mateen Rizki, et al.
In this paper we explore the use of histogram features extracted from 3D point clouds of human subjects for gender classification. Experiments are conducted using point clouds drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high-resolution LIDAR whole-body scans of carefully posed human subjects. Features are extracted from each point cloud by embedding the cloud in a series of cylindrical shapes and computing a point count for each cylinder that characterizes a region of the subject. These measurements define rotationally invariant histogram features that are processed by a classifier to label the gender of each subject. Preliminary results using cylinder sizes defined by human experts demonstrate that gender can be predicted with 98% accuracy for the type of high-density point cloud found in the CAESAR database. When point cloud densities are reduced to levels that might be obtained using stand-off sensors, gender classification accuracy degrades. We introduce an evolutionary algorithm to optimize the number and size of the cylinders used to define histogram features. The objective of this optimization process is to identify a set of cylindrical features that reduces the error rate when predicting gender from low-density point clouds. A wrapper approach is used to interleave feature selection with classifier evaluation to train the evolutionary algorithm. Results of classification accuracy achieved using the evolved features are compared to the baseline feature set defined by human experts.
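Cylinder-based histogram features of the kind described above can be sketched as follows. This is a simplified stand-in, not the paper's expert-defined cylinder set: it stacks equal-height cylinders along the vertical axis through the centroid and counts the points each one captures; because the counts depend only on height and radial distance from the axis, they are invariant to rotation about that axis.

```python
import numpy as np

def cylinder_features(points, n_cyl=8, radius=None):
    """Rotation-invariant features for an (N, 3) point cloud: stack n_cyl
    cylinders along the vertical (z) axis through the centroid and return
    the normalized point count captured by each cylinder."""
    p = points - points.mean(axis=0)         # center on the centroid
    r = np.hypot(p[:, 0], p[:, 1])           # radial distance from the axis
    if radius is None:
        radius = np.percentile(r, 95)        # default: enclose most points
    z = p[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_cyl + 1)
    band = np.digitize(z, edges) - 1         # which cylinder each point hits
    counts = np.bincount(band[r <= radius], minlength=n_cyl)[:n_cyl]
    return counts / max(counts.sum(), 1)     # normalize away density changes

# synthetic elongated "body" cloud; rotating it about z leaves features unchanged
rng = np.random.default_rng(5)
cloud = rng.normal(scale=[0.2, 0.2, 1.0], size=(4000, 3))
feat = cylinder_features(cloud)
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
feat_rot = cylinder_features(cloud @ Rz.T)
```

The paper's evolutionary algorithm would then tune the number and sizes of such cylinders so that the resulting histogram remains discriminative at low point densities.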
Computer/Network Security
Behavioral analysis of malicious code through network traffic and system call monitoring
André R. A. Grégio, Dario S. Fernandes Filho, Vitor M. Afonso, et al.
Malicious code (malware) that spreads through the Internet, such as viruses, worms, and trojans, is a major threat to information security nowadays and a profitable business for criminals. There are several approaches to analyzing malware by monitoring its actions while it runs in a controlled environment, which helps to identify malicious behaviors. In this article we propose a tool that analyzes malware behavior in a non-intrusive and effective way, extending the analysis possibilities to cover malware samples that bypass current approaches and fixing some issues with those approaches.
An adaptive neural swarm approach for intrusion defense in ad hoc networks
Wireless sensor networks (WSNs) and mobile ad hoc networks (MANETs) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments, they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, make it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributed manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity, both locally and with neighboring nodes, the analysis of events within the network can be conducted collectively in real time. The approach has been shown to be extremely effective in identifying distributed network attacks.
Combined bio-inspired/evolutionary computational methods in cross-layer protocol optimization for wireless ad hoc sensor networks
William S. Hortos
Published studies have focused on the application of one bio-inspired or evolutionary computational method to the functions of a single protocol layer in a wireless ad hoc sensor network (WSN). For example, swarm intelligence in the form of ant colony optimization (ACO) has been repeatedly considered for the routing of data/information among nodes, a network-layer function, while genetic algorithms (GAs) have been used to select transmission frequencies and power levels, physical-layer functions. Similarly, artificial immune systems (AISs), as well as trust models of quantized data reputation, have been invoked for detection of network intrusions that cause anomalies in data and information; these act on the application and presentation layers. Most recently, a self-organizing scheduling scheme inspired by frog-calling behavior, termed anti-phase synchronization, has been applied to realize collision-free transmissions between neighboring nodes for reliable data transmission in wireless sensor networks, a function of the MAC layer. In a novel departure from previous work, the cross-layer approach to WSN protocol design suggests applying more than one evolutionary computational method to the functions of the appropriate layers to improve the QoS performance of the cross-layer design beyond that of one method applied to a single layer's functions. A baseline WSN protocol design is constructed, embedding GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively. Simulation results demonstrate that the synergies among the bio-inspired/evolutionary methods of the proposed baseline design improve the overall QoS performance of networks beyond that of a single computational method.