Proceedings Volume 5421

Intelligent Computing: Theory and Applications II


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 April 2004
Contents: 7 Sessions, 19 Papers, 0 Presentations
Conference: Defense and Security 2004
Volume Number: 5421

Table of Contents

  • Applications I
  • Applications II
  • Smart Sensors I
  • Smart Sensors II
  • Theory and Methods I
  • Theory and Methods II
  • Poster Session
Applications I
Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents
Walker H. Land Jr., Michael Lewis, Omowunmi Sadik, et al.
This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.
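The abstract does not give the details of the EP algorithm; as a rough illustration only, the following is a minimal evolutionary programming loop of the kind that could evolve an SVM hyperparameter. The fitness function here is a hypothetical stand-in (a real run would train and validate an SVM at each evaluation), and all names and values are assumptions, not the authors' implementation.

```python
import random

def evolutionary_programming(fitness, init, mutate, pop_size=20, generations=50, seed=0):
    """Minimal (mu + mu) evolutionary programming loop: every parent
    produces one mutated offspring, and the best pop_size individuals
    survive to the next generation (elitist selection)."""
    rng = random.Random(seed)
    population = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(ind, rng) for ind in population]
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]

# Hypothetical stand-in fitness: classification accuracy as a function of
# a single SVM regularization parameter, peaking at C = 10. A real run
# would train and cross-validate an SVM here instead.
fitness = lambda c: -abs(1.0 - c / 10.0)
best = evolutionary_programming(
    fitness,
    init=lambda rng: rng.uniform(0.1, 100.0),
    mutate=lambda c, rng: max(0.1, c + rng.gauss(0.0, 1.0)),
)
```

The elitist selection guarantees the best-so-far hyperparameter is never lost, which is why the loop converges toward the fitness peak.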
Direction-of-arrival interferometer array design using genetic algorithm with fuzzy logic
Design of interferometer arrays for radio frequency direction of arrival estimation involves optimizing conflicting requirements. For example, high resolution conflicts with low cost. Lower-level requirements also invoke lower-level design issues such as ambiguity in the direction of arrival angle. A more efficient array design process is described here, which uses a genetic algorithm with a growing genome and fuzzy logic scoring. Extensive simulation software is also needed. Simulation starts with randomized small array configurations. These are then evaluated against the fitness functions, with results scored using fuzzy logic. The best-fit members of the population are combined to produce the next generation. A mutation function introduces slight randomness in some genomes. Finally, if the overall population scores well, the size of the genome is increased until the final genome size is consistent with the desired array resolution requirement. The genetic algorithm design process described here produced a number of array designs. The results indicate discrete stages or steps in the optimization and an interesting trade-off of lower resolution for greater accuracy.
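The fuzzy scoring of conflicting requirements described above can be sketched as follows. This is a hypothetical illustration: the membership functions, targets, and the fuzzy AND (min) aggregation are assumptions, not the paper's actual fitness function.

```python
def fuzzy_score(resolution_deg, cost, res_target=1.0, cost_budget=100.0):
    """Score an array design against two conflicting requirements with
    fuzzy membership functions, combined with the min (fuzzy AND)
    operator, so a design must satisfy both reasonably well."""
    # Membership in "high resolution": 1 at or below the target
    # beamwidth, falling linearly to 0 at twice the target.
    mu_res = max(0.0, min(1.0, (2 * res_target - resolution_deg) / res_target))
    # Membership in "low cost": 1 at or below half the budget,
    # falling linearly to 0 at the full budget.
    mu_cost = max(0.0, min(1.0, (cost_budget - cost) / (cost_budget / 2)))
    return min(mu_res, mu_cost)
```

With this aggregation, a design that is cheap but ambiguous (or sharp but unaffordable) scores near zero, which is the behavior a GA needs to navigate the resolution/cost trade-off.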
Model-free functional MRI analysis using improved fuzzy cluster analysis techniques
Conventional model-based or statistical analysis methods for functional MRI (fMRI) are easy to implement and are effective in analyzing data with simple paradigms. However, they are not applicable in situations in which the patterns of neural response are complicated and the fMRI response is unknown. In this paper the Gath-Geva algorithm is adapted and rigorously studied for analyzing fMRI data. The algorithm supports spatial connectivity, aiding in the identification of activation sites in functional brain imaging. A comparison of this new method with the fuzzy n-means algorithm, Kohonen's self-organizing map, the fuzzy n-means algorithm with unsupervised initialization, the minimal free energy vector quantizer, and the "neural gas" network is done in a systematic fMRI study with comparative quantitative evaluations. The most important findings in the paper are: (1) for a large number of codebook vectors, the Gath-Geva algorithm outperforms all other clustering methods in terms of detecting small activation areas, and (2) for a smaller number of codebook vectors, the fuzzy n-means algorithm with unsupervised initialization outperforms all other techniques. The applicability of the new algorithm is demonstrated on experimental data.
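For orientation, here is a minimal sketch of the fuzzy n-means (fuzzy c-means) baseline that the paper compares against. The Gath-Geva algorithm additionally fits per-cluster covariances and priors, which this sketch omits; the toy data and parameter choices are illustrative assumptions.

```python
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: alternate between membership-weighted
    center updates and membership updates from inverse relative
    distances, with fuzzifier m controlling membership softness."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # Random fuzzy memberships, normalized so each point's row sums to 1.
    U = [[rng.random() for _ in range(c)] for _ in range(n)]
    U = [[u / sum(row) for u in row] for row in U]
    for _ in range(iters):
        centers = []
        for k in range(c):
            w = [U[i][k] ** m for i in range(n)]
            centers.append([sum(w[i] * points[i][d] for i in range(n)) / sum(w)
                            for d in range(dim)])
        for i in range(n):
            d = [max(1e-12, sum((points[i][t] - centers[k][t]) ** 2
                                for t in range(dim)) ** 0.5) for k in range(c)]
            U[i] = [1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(c))
                    for k in range(c)]
    return centers, U

# Toy example: two well-separated clusters of 2D points.
points = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
          [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]]
centers, U = fuzzy_c_means(points)
labels = [max(range(2), key=lambda k: row[k]) for row in U]
```

In fMRI analysis the "points" would be voxel time series rather than 2D coordinates, and the codebook vectors discussed in the abstract correspond to the cluster centers.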
Applications II
Computer-aided diagnosis in breast MRI based on unsupervised clustering techniques
Exploratory data analysis techniques are applied to the segmentation of lesions in MRI mammography as a first step of a computer-aided diagnosis system. Three new unsupervised clustering techniques are tested on biomedical time-series representing breast MRI scans: fuzzy clustering based on deterministic annealing, "neural gas" network, and topographic independent component analysis. While the first two methods enable a correct segmentation of the lesion, the latter, although incorporating a topographic mapping, fails to detect and subclassify lesions.
Intelligent control system using Dempster-Shafer theory of evidence
This paper presents an intelligent control system for robust and adaptive control of non-linear, time-variant systems operating under uncertain conditions. The approach is based on the Dempster-Shafer theory of evidence and Dempster’s rule for combining beliefs. The approach can handle systems with multiple sensors and multiple sources of disturbance. Sensors with different and nonlinear characteristics can also be integrated in this approach.
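Dempster's rule of combination, on which the approach rests, can be sketched directly. This is a generic textbook implementation, not the paper's controller; the mass functions in the example are hypothetical.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (mass functions over
    frozenset focal elements) with Dempster's rule: intersect focal
    elements, accumulate mass products, and renormalize by 1 - K,
    where K is the total mass on conflicting (empty) intersections."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical example: one sensor mostly supports fault "a" but is
# partly undecided; a second sensor is split between "a" and "b".
combined = dempster_combine(
    {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4},
    {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5},
)
```

The renormalization by 1 - K is what lets the rule absorb partial disagreement between sensors, which is the property the paper exploits for multi-sensor integration.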
Invariant extreme physical information and fuzzy clustering
Ravi C. Venkatesan
A principled formulation for knowledge acquisition from discrete data, based on a continuum-free, invariance-preserving extension of the Extreme Physical Information (EPI) theory of Frieden, is presented. A systematic invariance-preserving methodology to formulate and minimize lattice EPI fuzzy clustering objective functions and to determine the concomitant constraints is suggested. Equivalence between invariant EPI (IEPI) fuzzy clustering, described within a discrete time-independent Schrödinger-like framework, and robust Possibilistic c-Means (PcM) clustering is exemplified. The constraints are shown to be consistent with Heisenberg's uncertainty principle. Numerical examples for exemplary cases are provided for multiple potential wells, without a priori knowledge of the number of clusters.
Smart Sensors I
Diffusion-based path planning in mobile actuator-sensor networks (MAS-net): some preliminary results
In this paper we present preliminary results related to path-planning problems when it is known that the quantities of interest in the system are generated via a diffusion process. The use of mobile sensor-actuator networks (MAS-Net) is proposed for such problems. A discussion of such networks is given, followed by a description of the general framework of the problem. Our strategy assumes that a network of mobile sensors can be commanded to collect samples of the distribution of interest. These samples are then used as constraints for a predictive model of the process. The predicted distribution from the model is then used to determine new sampling locations. A 2-D testbed for studying these ideas is described. The testbed includes a network of ten robots operating as a network using Intel Motes. We also present simulation results from our initial partial differential equation model of the diffusion process in the testbed.
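The predictive model of the diffusion process mentioned above can be illustrated with an explicit finite-difference step of the 2D diffusion equation. This is a generic sketch of such a model, with assumed grid and coefficient values; the paper's actual PDE model and boundary conditions may differ.

```python
def diffuse_step(u, D=0.1, dt=0.1, dx=1.0):
    """One explicit finite-difference step of the 2D diffusion equation
    du/dt = D * (d2u/dx2 + d2u/dy2) on a grid with fixed boundary
    values. Stable when D * dt / dx**2 <= 0.25."""
    rows, cols = len(u), len(u[0])
    new = [row[:] for row in u]
    r = D * dt / dx ** 2
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Discrete Laplacian: sum of the four neighbors minus 4x center.
            lap = u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1] - 4 * u[i][j]
            new[i][j] = u[i][j] + r * lap
    return new

# A unit point source at the center of a 5x5 grid spreads to its neighbors.
u = [[0.0] * 5 for _ in range(5)]
u[2][2] = 1.0
u_next = diffuse_step(u)
```

In the MAS-net setting, the sensor samples would be imposed as constraints on a model like this, and the predicted field would then drive the choice of new sampling locations.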
Practical discovery and lookup of intelligent distributed services
Many advanced multi-agent cooperation approaches have been proposed over the years, but few have met the test of widespread adoption and cost effectiveness. Because multi-agent cooperation implies the ability of multiple distributed services to discover and look up one another, the discovery and lookup services must be based on industry standards for them to be widely adoptable and low in cost to implement. This paper describes the design of a practical and low-cost infrastructure for enabling the dynamic discovery and lookup of intelligent distributed services in a federated heterogeneous network using standard cost-effective Java- and XML-based technologies. The requirements for such dynamic discovery and lookup protocols will first be described. An overview of the core candidate solution technologies will then be presented. A dynamic discovery and lookup design based on standard Java and XML technologies will then be described in terms of both the discovery and lookup service infrastructure and the client interfaces to that infrastructure.
Designing teams of unattended ground sensors using genetic algorithms
Ayse Selen Yilmaz, Brian N. McQuay, Annie S. Wu, et al.
Improvements in sensor capabilities have driven the need for automated sensor allocation and management systems. Such systems provide a penalty-free test environment and valuable input to human operators by offering candidate solutions. These abilities lead, in turn, to savings in manpower and time. Determining an optimal team of cooperating sensors for military operations is a challenging task. There is a tradeoff between the desire to decrease the cost and the need to increase the sensing capabilities of a sensor suite. This work focuses on unattended ground sensor networks consisting of teams of small, inexpensive sensors. Given a possible configuration of enemy radar, our goal is to generate sensor suites that monitor as many enemy radar as possible while minimizing cost. In previous work, we have shown that genetic algorithms (GAs) can be used to evolve successful teams of sensors for this problem. This work extends our previous work in two ways: we use an improved simulator containing a more accurate model of radar and sensor capabilities for our fitness evaluations, and we introduce two new genetic operators, insertion and deletion, that are expected to improve the GA's fine-tuning abilities. Empirical results show that our GA approach produces near-optimal results under a variety of enemy radar configurations using sensors with varying capabilities. Detection percentage remains stable regardless of changes in the enemy radar placements.
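The insertion and deletion operators named above act on variable-length genomes (sensor teams). A minimal sketch of what such operators might look like follows; the sensor names and operator details are illustrative assumptions, not the authors' exact operators.

```python
import random

def insertion(genome, sensor_pool, rng):
    """Insertion operator: add one randomly chosen sensor from the pool
    at a random position, letting the GA grow a team when coverage is
    lacking. Returns a new genome; the parent is left unchanged."""
    g = genome[:]
    g.insert(rng.randrange(len(g) + 1), rng.choice(sensor_pool))
    return g

def deletion(genome, rng):
    """Deletion operator: drop one randomly chosen sensor, letting the
    GA shrink a team to cut cost (no-op on a single-sensor team)."""
    if len(genome) <= 1:
        return genome[:]
    g = genome[:]
    del g[rng.randrange(len(g))]
    return g

# Hypothetical sensor team and pool of available sensor types.
rng = random.Random(0)
team = ["acoustic", "seismic", "magnetic"]
bigger = insertion(team, ["imaging", "radar_warning"], rng)
smaller = deletion(team, rng)
```

Because both operators change genome length by exactly one, they give the GA a fine-grained way to trade team cost against detection coverage, which is the tuning behavior the abstract describes.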
Foundations for learning and adaptation in a multi-degree-of-freedom unmanned ground vehicle
Michael R. Blackburn, Richard Bailey
The real-time coordination and control of an unmanned ground vehicle with many motion degrees of freedom (dof), under dynamic conditions in a complex environment, is nearly impossible for a human operator to accomplish. Needed are adaptive on-board mechanisms that quickly complete sensor-effector loops to maintain balance and leverage. This paper contains a description of our approach to the control problem for a small unmanned ground vehicle with six dof in the three spatial dimensions. Vehicle control is based upon seven fixed action patterns that exercise all of the motion dof of which the vehicle is capable, and five basic reactive behaviors that protect the vehicle during operation. The reactive behaviors demonstrate short-term adaptations. The learning processes for long-term adaptations of the vehicle control functions that we are implementing are composed of classical and operant conditioning of novel responses to information available from distance sensors (vision and audition), built upon the pre-defined fixed action patterns. The fixed action patterns are in turn modulated by the pre-defined low-level reactive behaviors that, as unconditioned responses, continuously serve to maintain the viability of the robot during the activation of the fixed action patterns and of the higher-order (conditioned) behaviors. The sensors of the internal environment that govern the low-level reactive behaviors also serve as the criteria for operant conditioning, and satisfy the requirement for basic behavioral motivation.
Diffusion boundary determination and zone control via mobile actuator-sensor networks (MAS-net): challenges and opportunities
This paper presents challenges and opportunities related to the problem of diffusion boundary determination and zone control via mobile actuator-sensor networks (MAS-net). This research theme is motivated by three example application scenarios: 1) safe ground boundary determination of the radiation field from multiple radiation sources; 2) nontoxic reservoir water surface boundary determination and zone control due to a toxic diffusion source; 3) safe nontoxic 3D boundary determination and zone control of biological or chemical contamination in the air. We focus on the case of a 2D diffusion process and on using a team of ground mobile robots to track the diffusion boundary. Moreover, we assume that a number of robots can carry and move networked actuators to release a neutralizing chemical agent so that the shape of the polluted zone can be actively controlled. These two MAS-net applications, i.e., diffusion boundary determination and zone control, are formulated as model-based distributed control tasks. On the technological side, we focus on the node specialization and power supply problems. On the theoretical side, some recently developed concepts are introduced, such as regional/zone observability, regional/zone controllability, and the regional/zone Luenberger observer. We speculate on possible further developments in the theoretical research by noting the combination of diffusion-based path planning and regional analysis of the overall MAS-net distributed control system.
Smart Sensors II
Applying a service-based architecture to autonomous distributed sensor networks
David M. Patrone, Dennis S. Patrone, Doug S. Wenstrand, et al.
Traditional distributed architectures are not sufficient when developing an autonomous, distributed sensor network. In order to be truly autonomous, a distributed sensor network must be able to survive and reconfigure in-the-field without manual intervention. A limitation of traditional distributed architectures, such as client/server or peer-to-peer, within an autonomous network is that the distributed devices and applications are tightly coupled by their communication protocols prior to implementation and deployment. The introduction of new devices and applications in the field is difficult due to this coupling. Also, autonomous reconfiguration of the devices on the network due to faults or addition of new devices is extremely difficult unless the devices are homogeneous. A service-based architecture is proposed as an alternative architecture for creating autonomous, distributed sensor networks. The service-based approach provides the ability to create a scalable, self-configuring, and self-healing network for building and maintaining large, emerging and ad-hoc virtual networks of devices and applications. New devices can be automatically discovered by current devices on the network and automatically integrated into the system without manual intervention. This paper will explain the benefits and limitations of applying a service-based architecture to autonomous, distributed sensor networks and compare this approach with traditional architectures such as client/server and peer-to-peer. A description will be given of a prototype system developed using service-enabled seismic, acoustic, and visual sensors.
Intelligent communication systems sensor scheduling based on network information
Erik P. Blasch, Andrew Hoff
Increased reliance on data communications has driven the need for a higher-capacity solution to transmit data. Many decision-making problems involve various types of data (e.g. video, text), which require an intelligent way to transfer the data across a communication system. Data fusion can be performed at the transmitting end to reduce the dimensionality of the data transferred; however, if there are errors, the receiving end would have no way to search through the pedigree of information to correct the problem. Thus, the desired capability is to transfer all the data types while achieving the “Quality of Service” metrics of throughput, delay, delay variation, probability of error, and cost. One way to solve this problem is Asynchronous Transfer Mode (ATM) network data scheduling. An ATM network allows multiple types of data to be sent over the same system with dynamic bandwidth allocation. The following paper provides a description of an intelligent scheduling model to enhance the capability to transmit data for fusion analysis.
Theory and Methods I
Techniques for evaluating classifiers in application
In gauging the generalization capability of a classifier, a good evaluation technique should adhere to certain principles. For instance, the technique should evaluate a selected classifier, not simply an architecture. Secondly, a solution should be assessable at the classifier’s design and, further, throughout its application. Additionally, the technique should be insensitive to data presentation and cover a significant portion of the classifier’s domain. Such principles call for methods beyond supervised learning and statistical training techniques such as cross validation. In this paper, we discuss the evaluation of generalization in application. For illustration, we present a method for the multilayer perceptron (MLP) that draws on the unlabeled data collected during the operational use of a given classifier. These conclusions support self-supervised learning and computational methods that isolate unstable, nonrepresentational regions in the classifier.
A comparative analysis of machine classifiers
An analysis of training techniques for a machine classifier is presented using three methods of training the weights of the classifier. The decision regions for a four class problem are presented to illustrate the differences made by each of the training methods.
Polymodal information processing via temporal cortex Area 37 modeling
A model of biological information processing is presented that consists of auditory and visual subsystems linked to temporal cortex and limbic processing. A biologically based algorithm is presented for the fusing of information sources of fundamentally different modalities. Proof of this concept is outlined by a system which combines auditory input (musical sequences) and visual input (illustrations such as paintings) via a model of cortex processing for Area 37 of the temporal cortex. The training data can be used to construct a connectionist model whose biological relevance is suspect yet is still useful, and a biologically based model which achieves the same input-to-output map through biologically relevant means. The constructed models are able to create from a set of auditory and visual clues a combined musical/illustration output which shares many of the properties of the original training data. These algorithms are not dependent on these particular auditory/visual modalities and hence are of general use in the intelligent computation of outputs that require sensor fusion.
Theory and Methods II
An architecture for distributed real-time large-scale information processing for intelligence analysis
Given a massive and dynamic space of information (nuggets) and a query to be answered, how can the correct (answer) nuggets be retrieved in an effective and efficient manner? We present a large-scale distributed real-time architecture based on anytime intelligent foraging, gathering, and matching (I-FGM) on massive and dynamic information spaces. Simply put, we envision that when given a search query, large numbers of computational processes are alerted or activated in parallel to begin identifying and retrieving the appropriate information nuggets. In particular, our approach aims to provide an anytime capability which functions as follows: given finite computational resources, I-FGM will proceed to explore the information space and, over time, continuously identify and update promising candidate nuggets; thus, good candidates will be available at any time on request. Given the computational cost of evaluating the relevance of a candidate nugget, the anytime nature of I-FGM provides increasing confidence in nugget selections over time through admissible partial evaluations. When a new promising candidate is identified, the current set of selected nuggets is re-evaluated and updated appropriately. Essentially, I-FGM will guide its finite computational resources in locating the target information nuggets quickly and iteratively over time. In addition, the goal of I-FGM is to naturally handle new nuggets as they appear. A central element of our framework is to provide a formal computational model of this massive data-intensive problem.
Experimental analysis of methods for imputation of missing values in databases
Alireza Farhangfar, Lukasz A. Kurgan, Witold Pedrycz
A very important issue faced by researchers and practitioners who use industrial and research databases is incompleteness of data, usually in terms of missing or erroneous values. While some data analysis algorithms can work with incomplete data, a large portion of them require complete data. Therefore, different strategies, such as deletion of incomplete examples and imputation (filling) of missing values through a variety of statistical and machine learning (ML) procedures, have been developed to preprocess the incomplete data. This study concentrates on performing experimental analysis of several algorithms for imputation of missing values, which range from simple statistical algorithms like mean and hot deck imputation to imputation algorithms based on the application of inductive ML algorithms. Three major families of ML algorithms, namely probabilistic algorithms (e.g. Naive Bayes), decision tree algorithms (e.g. C4.5), and decision rule algorithms (e.g. CLIP4), are used to implement the ML-based imputation algorithms. The analysis is carried out using a comprehensive range of databases, for which missing values were introduced randomly. The goal of this paper is to provide general guidelines on the selection of suitable data imputation algorithms based on characteristics of the data. The guidelines are developed by performing a comprehensive experimental comparison of the performance of different data imputation algorithms.
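The two simple statistical baselines the study mentions, mean and hot deck imputation, can be sketched in a few lines. This is a generic illustration with `None` marking missing values; it is not the study's code, and the column values are hypothetical.

```python
import random

def mean_impute(column):
    """Mean imputation: replace each missing value (None) with the
    mean of the observed values in the column."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def hot_deck_impute(column, rng):
    """Hot deck imputation: replace each missing value with a randomly
    drawn observed value from the same column, preserving the
    empirical distribution of the observed data."""
    observed = [v for v in column if v is not None]
    return [rng.choice(observed) if v is None else v for v in column]

# Hypothetical numeric column with two missing entries.
col = [1.0, None, 3.0, None, 5.0]
filled_mean = mean_impute(col)
filled_hot = hot_deck_impute(col, random.Random(0))
```

The ML-based imputers the study evaluates go further by predicting each missing value from the other attributes of the same record, rather than from the column alone.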
Poster Session
The concept of biologically motivated time-pulse information processing for design and construction of multifunctional devices of neural logic
On the basis of an analysis of advanced approaches and optoelectronic systems for realizing various logics (two-valued, multi-valued, neural, continuous, and others), a biologically motivated time-pulse concept for building multifunctional reconfigurable universal elements with programmable tuning is grounded. The concept consists in the preliminary conversion of multi-level or continuous optical 2D signals into durations of time intervals (conversion to the temporal domain) and the further use of time-pulse two-level digital signals, which allows fast tuning to a required function of two-valued, multi-valued, and other logics. It is shown that optoelectronic pulse-phase and pulse-width modulators (PPM and PWM) are the base elements for this. Time-pulse-coding universal elements for matrix two-valued and multi-valued logics, and the structural-functional design of universal time-pulse-coding elements for neural (continuous) logic, are considered in the article. PPMs realized with 1.5 μm CMOS technology are considered. The PPMs have the following parameters: input photocurrent range 10 nA to 10 μA; conversion period 10 μs to 1 ms; conversion relative error 0.1% to 1%; ramp conversion law; supply voltage 3 V; power consumption 83 μW. The small power consumption of such PPMs enables their integration into 2D arrays of 128x128 elements and more, with a productivity of 1 to 10 giga continuous-logic operations per second.