Proceedings Volume 6039

Complex Systems


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 18 January 2006
Contents: 8 Sessions, 25 Papers, 0 Presentations
Conference: Microelectronics, MEMS, and Nanotechnology 2005
Volume Number: 6039

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Keynote Session
  • Complex Systems in Defence
  • Multi-Agent Systems
  • Mathematical Modelling of Complex Systems
  • Complex Real Systems
  • Networks I
  • Networks II
  • Biological and Bio-Inspired Complex Systems
Keynote Session
Statistical mechanics of dense granular media
M. Nicodemi, A. Coniglio, A. de Candia, et al.
We discuss some recent results on the statistical mechanics approach to dense granular media. In particular, by analytical mean field investigation we derive the phase diagram of monodisperse and bidisperse granular assemblies. We show that "jamming" corresponds to a phase transition from a "fluid" to a "glassy" phase, observed when crystallization is avoided. The nature of such a "glassy" phase turns out to be the same as that found in mean field models for glass formers. This gives quantitative support to the idea of a unified description of the "jamming" transition in granular media and thermal systems, such as glasses. We also discuss mixing/segregation transitions in binary mixtures and their connections to phase separation and "geometric" effects.
Complex systems engineering from nano- to macro-scale
Geoff James, Jiaming Li, Ying Guo, et al.
The design of complex systems to achieve desired outcomes - complex systems engineering - is achieved in numerous natural systems and in some systems of human construction. This paper concerns multi-agent complex systems that comprise a large number of autonomous, interacting elements. Emergence presents a rich variety of behaviours for the designer to use; however, the unpredictability of emergence is a barrier to conventional engineering methodology. By probing examples of engineered systems and looking for common features, we seek a design methodology.
Complex Systems in Defence
Co-Adaptation
Adaptivity is a property that arises naturally in the growth of complex systems, and leads to such desirable features as resilience to shocks and damage, the ability to discover and exploit advantages and resources in the environment, to recognise and avoid dangers, and to produce innovative and effective behaviours in the face of unpredictable challenges. Not surprisingly, as the world becomes more complex and unpredictable, there is a growing requirement for adaptivity in our own systems and processes, and interest in how to design for it, or rather, foster its emergence. The feasibility of engendering more adaptivity in our systems and processes rests on advances in detailed understanding of how adaptivity works in the natural world, and how that might be extended, and on what is being made possible by the continuing explosion in information and information technology. We recognise the potential for multiple nested levels of adaptivity: from adaptive action-in-the-world, to applying double-loop learning to the adaptive action capability elements themselves, through applying the power of adaptivity to the difficult problem of articulating sufficiently precise measures of success suitable to drive the adaptive action and the double-loop learning levels, and ultimately to the level we might call co-adaptation. The term co-adaptation acknowledges that in practice we are never looking at an isolated system adapting to a relatively static environment, but rather at a number of systems interacting with each other, and thereby creating a constantly changing context for each other to adapt to. This invites a higher level view and further options for targeted interventions in aspects of the roles, boundaries and relationships of the interacting systems (or that subset of them over which we may have some influence) in order to more effectively shape the outcomes.
Complexity associated with the optimisation of capability options in military operations
A. Pincombe, A. Bender, G. Allen
In the context of a military operation, even if the intended actions, the geographic location, and the capabilities of the opposition are known, there are still some critical uncertainties that could have a major impact on the effectiveness of a given set of capabilities. These uncertainties include unpredictable events and the response alternatives that are available to the command and control elements of the capability set. They greatly complicate any a priori mathematical description. In a forecasting approach, the most likely future might be chosen and a solution sought that is optimal for that case. With scenario analysis, futures are proposed on the basis of critical uncertainties and the option that is most robust is chosen. We use scenario analysis but our approach is different in that we focus on the complexity and use the coupling between scenarios and options to create information on ideal options. The approach makes use of both soft and hard operations research methods, with subject matter expertise being used to define plausible responses to scenarios. In each scenario, uncertainty affects only a subset of the system-inherent variables and the variables that describe system-environment interactions. It is this scenario-specific reduction of variables that makes the problem mathematically tractable. The process we define is significantly different to existing scenario analysis processes, so we have named it adversarial scenario analysis. It can be used in conjunction with other methods, including recent improvements to the scenario analysis process. To illustrate the approach, we undertake a tactical-level scenario analysis for a logistics problem that is defined by a network, expected throughputs to end users, the transport capacity available, the infrastructure at the nodes, and the capacities of roads, stocks, etc. The throughput capacity, i.e. the effectiveness, of the system relies on all of these variables and on the couplings between them. The system is initially in equilibrium for a given level of demand. However, different, and simpler, solutions emerge as the balance of couplings and the importance of variables change. The scenarios describe such changes in conditions. For each scenario it was possible to define measures that describe the differences between options. As with agent-based distillations, the solution is essentially qualitative and exploratory, bringing awareness of possible future difficulties and of the capabilities that are necessary if we are to deal successfully with those difficulties.
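To make the throughput idea concrete, the sketch below computes the throughput capacity of a small supply network as a maximum flow, then re-solves after a scenario degrades one coupling. All node names and capacities are invented for illustration; the paper's scenario couplings and soft-OR elements are not modelled.

```python
# Toy illustration of the logistics throughput idea: maximum flow
# through a small supply network. All node names and capacities are
# hypothetical; the paper's scenario couplings are not modelled.
import networkx as nx

G = nx.DiGraph()
# (from, to, capacity) -- capacities stand in for road/transport limits
edges = [
    ("depot", "hub_a", 40), ("depot", "hub_b", 30),
    ("hub_a", "fwd_1", 25), ("hub_a", "fwd_2", 20),
    ("hub_b", "fwd_2", 15), ("hub_b", "fwd_3", 25),
    ("fwd_1", "end_users", 20), ("fwd_2", "end_users", 30),
    ("fwd_3", "end_users", 20),
]
G.add_weighted_edges_from(edges, weight="capacity")

# Throughput capacity of the system under the baseline scenario:
flow_value, flow_dict = nx.maximum_flow(G, "depot", "end_users",
                                        capacity="capacity")
print("baseline throughput:", flow_value)

# A scenario can be expressed as a change to the couplings, e.g. a
# degraded road halves one capacity; re-solving shows the impact.
G["hub_a"]["fwd_1"]["capacity"] = 12
print("degraded throughput:",
      nx.maximum_flow_value(G, "depot", "end_users", capacity="capacity"))
```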
Adaptive battle agents: complex adaptive combat models
Kim L. Lim, Isaac Mann, Ricardo Santos, et al.
This work explores emergent behaviour in a complex adaptive system, specifically an agent-based battlefield simulation model. We explore the changes in agent attribute sets through the use of genetic algorithms over a series of battles, with performance measured by a number of different statistics including the number of casualties, the number of enemy agents killed, and the success rate at "capturing the flag". The agents' capabilities include (but are not limited to) maneuvering upon the battlefield; formulating, sending, receiving, and acting upon messages; and attacking enemy agents.
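A minimal sketch of the evolutionary loop described above, assuming an invented five-gene attribute vector and a stand-in fitness function; in the paper, fitness comes from running the battlefield simulation and scoring casualties, kills and flag captures.

```python
# Minimal sketch of evolving agent attribute sets with a genetic
# algorithm. The attribute vector and fitness are invented stand-ins;
# the paper evaluates fitness inside a battlefield simulation.
import random

ATTRS = 5          # e.g. speed, sensor range, aggression, comms, accuracy
POP, GENS = 30, 50

def fitness(genome):
    # Stand-in for "run a battle and score casualties/kills/flags".
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, ATTRS)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(ATTRS)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 3]                      # truncation selection
    children = [mutate(crossover(random.choice(elite),
                                 random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

print("best attribute set:", [round(g, 2) for g in pop[0]])
```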
Multi-Agent Systems
Effects of decision-making on the transport costs across complex networks
We analyse the effects of agents' decisions on the creation of, and reaction to, congestion on a centralised network with a ring-and-hub topology. We take a fixed network model and numerically determine the global transport costs across the network as a function of capacity. These results show that as the capacity of the hub is reduced, the system dynamics are driven by an interplay between stable states and critical points. The stable states are studied in detail, allowing us to derive an analytic expression for the probability of crowding within the central hub. The analytic solution is in excellent agreement with the numerical results.
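For intuition about the crowding probability, here is a worked illustration under a rough assumption of our own: if each of m agents independently routes through the hub with probability p, the hub load is binomial and crowding is a tail probability. This is not the paper's derivation.

```python
# Illustrative crowding probability for a central hub: if each of m
# agents independently routes via the hub with probability p, the hub
# load is Binomial(m, p) and "crowding" means load exceeds capacity C.
# This binomial assumption is ours, not the paper's analytic result.
from math import comb

def p_crowding(m, p, C):
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(C + 1, m + 1))

for C in (10, 15, 20):
    print(f"capacity {C}: P(crowding) = {p_crowding(50, 0.3, C):.4f}")
```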
Deducing the multi-trader population driving a financial market
Nachi Gupta, Raphael Hauser, Neil Johnson
We have previously laid out a basic framework for predicting financial movements and pockets of predictability by tracking the distribution of a multi-trader population playing on an artificial financial market model. This work explores extensions to this basic framework. We allow for more intelligent agents with a richer strategy set, and we no longer constrain the distribution over these agents to a probability space. We then introduce a fusion scheme which accounts for multiple runs of randomly chosen sets of possible agent types. We also discuss a mechanism for bias removal on the estimates.
Risk assessment of capability requirements using WISDOM-II
Ang Yang, Hussein A. Abbass, Ruhul Sarker, et al.
The analysis of capability requirements is very important for military operational decisions. It assists defence analysts in making decisions at the strategic, operational and tactical levels. However, it tends to be extremely expensive and time-consuming because of the complexity of the military command, control and communication environment. Information technologies, such as red teaming, complex adaptive systems and agent-based systems, can facilitate such analysis in a well-structured and systematic way through computer simulations. Based on these technologies, a promising agent-based combat simulation system, WISDOM-II, has been built. In this paper, we conduct a series of analyses to evaluate the effect of different capability configurations on the performance of different force compositions.
Bubbles in a minority game setting with real financial data
It is a well observed fact that markets follow both positive and negative trends, with crashes and bubble effects. In general a strong positive trend is followed by a crash: famous examples of these effects were seen in the crash on the NASDAQ (April 2000) and, before that, in the crash in the Hong Kong market associated with the Asian crisis in early 1994. In this paper we use real market data coupled into a minority game with different payoff functions to study the dynamics and the location of financial bubbles.
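As background, the sketch below implements the basic minority game mechanism (agents with memory m and s strategies scoring virtual payoffs); the paper's variants couple in real market data and alternative payoff functions, which are not reproduced here.

```python
# Minimal standard minority game: N agents, memory M, S strategies each.
# History here is endogenous and the payoff is the basic minority rule,
# purely as an illustration of the mechanism the paper builds on.
import random

N, M, S, STEPS = 101, 3, 2, 500
HIST = 2 ** M

def new_strategy():
    # A strategy maps each of the 2^M histories to an action in {-1,+1}.
    return [random.choice((-1, 1)) for _ in range(HIST)]

agents = [{"strats": [new_strategy() for _ in range(S)],
           "scores": [0] * S} for _ in range(N)]
history = random.randrange(HIST)
attendance = []

for _ in range(STEPS):
    actions = []
    for ag in agents:
        best = max(range(S), key=lambda i: ag["scores"][i])
        actions.append(ag["strats"][best][history])
    A = sum(actions)                    # aggregate action
    attendance.append(A)
    minority = -1 if A > 0 else 1       # the minority side wins
    for ag in agents:
        for i in range(S):              # virtual payoff per strategy
            if ag["strats"][i][history] == minority:
                ag["scores"][i] += 1
    winning_bit = 1 if minority == 1 else 0
    history = ((history << 1) | winning_bit) % HIST

mean = sum(attendance) / STEPS
var = sum((a - mean) ** 2 for a in attendance) / STEPS
print(f"volatility sigma^2/N = {var / N:.3f}")
```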
Mathematical Modelling of Complex Systems
Coupling-induced oscillations in overdamped bistable dynamic systems
A. R. Bulsara, V. In, A. Kho, et al.
Recently, we have shown the emergence of oscillations in overdamped, undriven nonlinear dynamic systems subject to carefully crafted coupling schemes and operating conditions. Here, we summarize these results for a system of N = 3 coupled ferromagnetic cores, the underpinning of a "coupled-core fluxgate magnetometer" (CCFM); the oscillatory behaviour is triggered when the coupling constant exceeds a threshold value (bifurcation point), and the oscillation frequency exhibits a characteristic scaling behaviour with the "separation" of the coupling constant from its threshold value, as well as with an external "target" dc magnetic flux signal. We also present the first (numerical) results on the effects of a (Gaussian, exponentially correlated) noise floor on the spectral properties of the system response, and extend our investigations to the large-N case, wherein the noise is seen to mediate interesting spatio-temporal cooperative behaviour.
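A schematic Euler integration of one commonly quoted form of the coupled-core dynamics, dx_i/dt = -x_i + tanh(c(x_i + λx_{i+1} + ε)); the parameter values are illustrative only and not taken from the paper.

```python
# Schematic integration of N=3 unidirectionally coupled overdamped
# cores, using one commonly quoted form of the coupled-core dynamics:
#   dx_i/dt = -x_i + tanh(c * (x_i + lam * x_{i+1} + eps))
# Parameter values are illustrative; past the coupling threshold the
# ring oscillates even though each element alone is overdamped.
import math

N, c, lam, eps = 3, 3.0, -0.60, 0.05   # lam chosen past its threshold
dt, steps = 0.01, 20000
x = [0.1, 0.0, -0.1]

crossings, prev = 0, x[0]
for n in range(steps):
    # synchronous update: neighbours read from the previous state
    x = [xi + dt * (-xi + math.tanh(c * (xi + lam * x[(i + 1) % N] + eps)))
         for i, xi in enumerate(x)]
    if prev < 0.0 <= x[0]:
        crossings += 1                 # upward zero crossings of x_1
    prev = x[0]

freq = crossings / (steps * dt)        # rough oscillation frequency
print(f"approx oscillation frequency: {freq:.3f} (arb. units)")
```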
How to use noise to reduce complexity in quantization
Mark D. McDonnell, Nigel G. Stocks, Charles E.M. Pearce, et al.
Consider a quantization scheme which has the aim of quantizing a signal into N+1 discrete output states. The specification of such a scheme has two parts. Firstly, in the encoding stage, the specification of N unique threshold values is required. Secondly, the decoding stage requires specification of N+1 unique reproduction values. Thus, in general, 2N+1 unique values are required for a complete specification. We show in this paper how noise can be used to reduce the number of unique values required in the encoding stage. This is achieved by allowing the noise to effectively make all thresholds independent random variables, the end result being a stochastic quantization. This idea originates from a form of stochastic resonance known as suprathreshold stochastic resonance. Stochastic resonance occurs when noise in a system is essential for that system to provide its optimal output; it can occur only in nonlinear systems, one prime example being neurons. The use of noise requires a tradeoff in performance; however, we show that even very low signal-to-noise ratios can provide a reasonable average performance for a substantial reduction in complexity, and that high signal-to-noise ratios can also provide a reduction in complexity for only a negligible degradation in performance.
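The encoding stage can be sketched directly: N thresholds all set to the same value, each perturbed by independent noise, with the output being the count of thresholds exceeded. The signal and noise distributions below are illustrative choices, not the paper's.

```python
# Sketch of a stochastic quantizer in the spirit of suprathreshold
# stochastic resonance: N identical thresholds (all zero, so only one
# encoding value is specified), each perturbed by independent noise;
# the output is the count of thresholds the noisy signal exceeds.
import numpy as np

rng = np.random.default_rng(1)
N, samples = 31, 20000
x = rng.normal(0.0, 1.0, samples)              # input signal samples

def quantize(x, noise_sigma):
    noise = rng.normal(0.0, noise_sigma, (samples, N))
    return (x[:, None] + noise > 0.0).sum(axis=1)   # output in 0..N

for sigma in (0.01, 0.3, 1.0, 3.0):
    y = quantize(x, sigma)
    rho = np.corrcoef(x, y)[0, 1]
    print(f"noise sigma {sigma:4.2f}: corr(input, output) = {rho:.3f}")
```

With almost no noise the thresholds act in unison (a one-bit quantizer); moderate noise spreads them apart and the input-output correlation rises, while very large noise degrades it again.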
Estimation for time-changed self-similar stochastic processes
W. Arroum, O.D. Jones
We consider processes of the form X(t) = X̃(θ(t)), where X̃ is a self-similar process with stationary increments and θ is a deterministic subordinator with a periodic activity function a = θ′ > 0. Such processes have been proposed as models for high-frequency financial data, such as currency exchange rates, where there are known to be daily and weekly periodic fluctuations in the volatility, captured here by the periodic activity function. We review an existing estimator for the activity function, then propose three new methods for estimating it and present some experimental studies of their performance. We finish with an application to some foreign exchange and FTSE100 futures data.
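For intuition, the sketch below simulates the simplest instance, X(t) = B(θ(t)) with B Brownian motion (self-similar with H = 1/2), and recovers the activity function with a naive realized-variance estimator; the activity function used is an invented example, not one of the paper's three estimators.

```python
# Simulate the simplest time-changed self-similar process:
# X(t) = B(theta(t)) with B Brownian motion (H = 1/2) and
# theta'(t) = a(t) a periodic activity function, so the increment of X
# over [t, t+dt] has variance a(t)*dt. The chosen a(t) is invented.
import numpy as np

rng = np.random.default_rng(0)
dt, days = 1 / 288, 50                       # 5-minute steps, 50 "days"
t = np.arange(0, days, dt)
a = 1.0 + 0.8 * np.cos(2 * np.pi * t)        # daily periodic activity

increments = rng.normal(0.0, np.sqrt(a * dt))
X = np.cumsum(increments)

# Naive estimator of the activity function: average squared increments
# in each time-of-day bin (realized-variance style).
bins = int(round(1 / dt))
a_hat = (increments ** 2).reshape(days, bins).mean(axis=0) / dt
print("activity at day start, true vs estimated:",
      a[0], round(a_hat[0], 3))
```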
Complex Real Systems
Materials and complexity: emergence of structural complexity in sphere packings
The contemporary science of materials and condensed-matter physics is changing in response to a new awareness of the relevance of concepts associated with complexity. Scientists who design and study new materials are confronted by an ever-increasing degree of complexity, both in the materials themselves and in their synthesis. Typically, modern advanced materials are partially non-crystalline, often multicomponent, and form out of equilibrium. Further, they have functional and structural properties that are active over several length-scales. This emerging structural and functional complexity is intrinsic and necessary to many aspects of modern materials; such features are common to several other complex systems as well. In this paper we briefly review the emerging structural complexity in a special model system: sphere packings.
Hysteresis and drift in a carbon-polymer composite strain sensor
Rowan F. Cumming, Matthew Solomon, Jason P. Hayes, et al.
A conductive polymer strain gauge was screen printed to produce an active area of 3 mm × 4 mm. The graphite- and titanium-dioxide-loaded thermoplastic device was found to have a resistance of 4.3 kΩ and a gauge factor of up to 20. The higher resistivity and gauge factor result in lower power consumption and higher sensitivity when directly compared to metal foil strain gauges. However, a substantial hysteresis of approximately 80 με was identified in a complete strain cycle from 0 με to 730 με. The source of this hysteresis was considered to be the thermoplastic matrix. Subsequently, the viscoelastic nature of the polymer matrix was analysed using the gauge's resistive signal as it changed under applied strains, and this output was then compared to the standard linear solid (or Zener) model from linear viscoelastic theory. This model was applied to the data and, with some limitations, was found to yield an improvement in the reported hysteresis.
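For reference, the standard linear solid (Zener) model mentioned above can be written, in the common parameterization of a spring E1 in parallel with a spring E2 in series with a dashpot of viscosity η, as:

```latex
% Standard linear solid (Zener) model, spring-dashpot parameterization:
% spring E_1 in parallel with (spring E_2 in series with dashpot \eta).
\sigma + \frac{\eta}{E_2}\,\dot{\sigma}
  = E_1\,\varepsilon
  + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon}
```

Creep and relaxation under this model are exponential, with relaxation time η/E2 at constant strain and retardation time η(E1+E2)/(E1E2) at constant stress; this kind of viscoelastic lag can masquerade as hysteresis in a slow strain cycle.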
Advanced text authorship detection methods and their application to biblical texts
Tālis Putniņš, Domenic J. Signoriello, Samant Jain, et al.
Authorship attribution has a range of applications in a growing number of fields such as forensic evidence, plagiarism detection, email filtering, and web information management. In this study, three attribution techniques are extended, tested on a corpus of English texts, and applied to a book in the New Testament of disputed authorship. The word recurrence interval based method compares standard deviations of the number of words between successive occurrences of a keyword, both graphically and with chi-squared tests. The trigram Markov method compares the probabilities of the occurrence of words conditional on the preceding two words to determine the similarity between texts. The third method extracts stylometric measures such as the frequency of occurrence of function words, and from these constructs text classification models using multiple discriminant analysis. The effectiveness of these techniques is compared. The accuracy of the results obtained by some of these extended methods is higher than that of many current state-of-the-art approaches. Statistical evidence is presented about the authorship of the selected book from the New Testament.
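A minimal sketch of the trigram Markov idea: estimate, per author, the probability of each word conditional on the preceding two words, and score the disputed text by its average log-likelihood under each author model. The corpora below are tiny placeholders, and the add-one smoothing is our own illustrative choice.

```python
# Sketch of trigram Markov attribution: per-author models of
# P(word | previous two words), scored by average log-likelihood.
# Corpora are tiny placeholders; real use needs large samples and
# careful smoothing (Laplace add-one is used purely for illustration).
import math
from collections import Counter

def train(tokens):
    tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
    bi = Counter(zip(tokens, tokens[1:]))
    return tri, bi, len(set(tokens))

def log_likelihood(tokens, model):
    tri, bi, V = model
    score = 0.0
    for w1, w2, w3 in zip(tokens, tokens[1:], tokens[2:]):
        # Laplace-smoothed P(w3 | w1, w2)
        p = (tri[(w1, w2, w3)] + 1) / (bi[(w1, w2)] + V)
        score += math.log(p)
    return score / max(1, len(tokens) - 2)     # per-trigram average

author_a = "the lord is my shepherd i shall not want".split() * 20
author_b = "grace and peace to you from god our father".split() * 20
disputed = "grace to you and peace from god".split()

for name, corpus in (("A", author_a), ("B", author_b)):
    print(name, round(log_likelihood(disputed, train(corpus)), 3))
```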
Modeling of plasma oscillations and terahertz photomixing in HEMT-like heterostructure with lateral Schottky junction
We study theoretically a heterostructure device with a structure akin to a high-electron-mobility transistor which can be used to generate electromagnetic radiation in the terahertz range of frequencies. The gated electron channel is supplied with a lateral Schottky contact serving as the source. The operation of the device is associated with photomixing of optical signals in the high-electric-field depletion region of the Schottky junction. The electrons and holes photogenerated in the Schottky junction depletion region and propagating across it induce an ac current in the quasi-neutral electron channel which, in turn, excites plasma oscillations in this channel. Fast electron transport in the Schottky junction depletion region and the resonant properties of the electron channel provide an enhanced response of the photomixer to optical signals at the plasma frequencies.
Numerical simulation of self-organized nano-islands in plasma-based assembly of quantum dot arrays
I. Levchenko, K. Ostrikov
This work presents the details of the numerical model used in the simulation of self-organization of nano-islands on solid surfaces in plasma-assisted assembly of quantum dot structures. The model includes the near-substrate non-neutral layer (plasma sheath) and a nanostructured solid deposition surface, and accounts for the incoming flux and energy of ions from the plasma, surface-temperature-controlled adatom migration over the surface, adatom collisions with other adatoms and nano-islands, adatom inflow to the growing nano-islands from the plasma and from the two-dimensional vapour on the surface, and particle evaporation to the ambient space and two-dimensional vapour. The differences in surface concentrations of adatoms in different areas within the quantum dot pattern significantly affect the self-organization of the nano-islands. The model allows one to formulate the conditions under which certain islands grow while others shrink or even dissolve, and to relate these conditions to the process control parameters. Surface coverage by self-organized quantum dots obtained from the numerical simulation appears to be in reasonable agreement with the available experimental results.
Networks I
Connecting the dots to disconnect them: a study into network evolution and dynamics for analyzing terrorist networks
Hussein A. Abbass, Michael Barlow, Daryl Essam, et al.
For a long time, a lack of sufficient data has been an obstacle to the intelligence community. This study provides an integrated approach which combines network theory and data mining to analyze 1440 instances of terrorism that occurred up to 2002. The study reveals interesting patterns on the evolution of these terrorist organizations over two decades.
Mapping lessons from ants to free flight: an ant-based weather avoidance algorithm in free flight airspace
Sameer Alam, Hussein A. Abbass, Michael Barlow, et al.
The continuing growth of air traffic worldwide motivates the need for new approaches to air traffic management that are more flexible both in terms of traffic volume and weather. Free Flight is one such approach seriously considered by the aviation community. However, the benefits of Free Flight are severely curtailed in the convective weather season when weather is highly active, forcing aircraft to deviate from their optimal trajectories. This paper investigates the use of ant colony optimization in generating optimal weather avoidance trajectories in Free Flight airspace. The problem is motivated by the need to take full advantage of the airspace capacity in a Free Flight environment, while maintaining safe separation between aircraft and hazardous weather. The experiments described herein were run on a high-fidelity Free Flight air traffic simulation system which allows for a variety of constraints on the computed routes and accurate measurement of environment dynamics. This permits us to estimate the desired behaviour of an aircraft, including avoidance of changing hazardous weather patterns, turn and curvature constraints, the horizontal separation standard, and required time of arrival at a predetermined point, and to analyze the performance of our algorithm in various weather scenarios. The proposed ant colony optimization based weather avoidance algorithm was able to find optimal weather-free routes whenever they existed. In highly complex scenarios, the algorithm produced the route that required the aircraft to fly through the weather cells with the least disturbance. All the solutions generated were within flight parameters and, upon integration with the flight management system of the aircraft in a Free Flight air traffic simulator, successfully negotiated the bad weather.
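The sketch below shows the ant colony optimization mechanism on a toy grid with invented weather cells: ants build monotone paths biased by pheromone and a cost heuristic, and pheromone is reinforced on low-cost paths. The paper's turn/curvature constraints, separation standards and high-fidelity simulator are not modelled.

```python
# Compact ant colony optimization sketch for weather avoidance on a
# grid: ants move from (0,0) to (N-1,N-1) using right/up moves only,
# weather cells carry a high traversal cost, and pheromone is
# reinforced along low-cost paths. Grid, weather layout and parameters
# are all invented for illustration.
import random

N = 10
weather = {(3, y) for y in range(1, 9)} | {(7, y) for y in range(2, 10)}
cost = {(x, y): 50.0 if (x, y) in weather else 1.0
        for x in range(N) for y in range(N)}
tau = {}                                   # pheromone per (cell, move)
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 20, 60

def moves(x, y):
    out = []
    if x < N - 1: out.append((1, 0))
    if y < N - 1: out.append((0, 1))
    return out

def walk():
    x, y, path, c = 0, 0, [(0, 0)], cost[(0, 0)]
    while (x, y) != (N - 1, N - 1):
        opts = moves(x, y)
        weights = [tau.get(((x, y), m), 1.0) ** ALPHA *
                   (1.0 / cost[(x + m[0], y + m[1])]) ** BETA
                   for m in opts]
        m = random.choices(opts, weights)[0]
        x, y = x + m[0], y + m[1]
        path.append((x, y)); c += cost[(x, y)]
    return path, c

best = None
for _ in range(ITERS):
    results = [walk() for _ in range(ANTS)]
    for key in tau:
        tau[key] *= (1.0 - RHO)            # pheromone evaporation
    for path, c in results:
        for cell, nxt in zip(path, path[1:]):
            m = (nxt[0] - cell[0], nxt[1] - cell[1])
            tau[(cell, m)] = tau.get((cell, m), 1.0) + 10.0 / c
        if best is None or c < best[1]:
            best = (path, c)

print("best path cost:", best[1])
print("weather cells crossed:", sum(p in weather for p in best[0]))
```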
Contagions across networks: colds and markets
We explore a variety of network models describing transmission across a network. In particular we focus on transmission across composite networks, or "networks of networks", in which a finite number of networked objects are themselves connected together into a network. In a disease context, we introduce two interrelated viruses to hosts on a network, to model the infection of hosts in a classroom situation, with high rates of infection within a classroom and lower rates of infection between classrooms. The hosts can be susceptible to, infected with, or recovering from each virus. During the infection stage and recovery stage there is some level of cross-immunity to related viruses. We explore the effects of immunizing sections of the community on transmission through social networks. In a stock market context, we introduce memes, or virus-like ideas, into a virtual agent-based model of a stock exchange. By varying the parameters of the individual traders and the way in which they are connected, we are able to show emergent behaviour, including boom and bust cycles.
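A minimal sketch of the classroom structure with a single virus and a discrete-time SIR process (the paper's two interrelated viruses and cross-immunity are omitted); all rates and sizes are invented.

```python
# Sketch of contagion across a "network of networks": dense contacts
# within each classroom, sparse contacts between classrooms, and a
# discrete-time SIR process. Single virus, invented parameters; the
# paper also treats two interrelated viruses with cross-immunity.
import random
import networkx as nx

random.seed(3)
CLASSES, SIZE = 5, 20
G = nx.Graph()
for c in range(CLASSES):                      # dense within-class links
    members = [c * SIZE + i for i in range(SIZE)]
    for i, u in enumerate(members):
        for v in members[i + 1:]:
            if random.random() < 0.3:
                G.add_edge(u, v)
for _ in range(15):                           # sparse between-class links
    a, b = random.sample(range(CLASSES), 2)
    G.add_edge(a * SIZE + random.randrange(SIZE),
               b * SIZE + random.randrange(SIZE))

state = {n: "S" for n in G}                   # S/I/R per host
state[0] = "I"
P_INFECT, P_RECOVER = 0.08, 0.1

for step in range(100):
    newly = [nb for n, s in state.items() if s == "I"
             for nb in G[n]
             if state[nb] == "S" and random.random() < P_INFECT]
    for n in newly:
        state[n] = "I"
    for n, s in list(state.items()):
        if s == "I" and random.random() < P_RECOVER:
            state[n] = "R"

print("final recovered fraction:",
      sum(s == "R" for s in state.values()) / len(state))
```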
Networks II
Extracting the correlation structure by means of planar embedding
The hierarchical structure of correlation matrices in complex systems is studied by extracting a significant subset of correlations, resulting in a planar graph. Such a graph has been generated by a method introduced in Aste et al. [1]; it has the same hierarchical structure as the Minimum Spanning Tree but contains a larger number of links, loops and cliques. In Tumminello et al. [2], we have shown that this method, applied to a financial portfolio of 100 stocks in the USA equity markets, is quite efficient in filtering relevant information about the system's clustering, revealing the hierarchical organization of the whole system and within each cluster. Here we discuss this correlation filtering procedure and its application to different financial data sets.
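The planar filtering step can be sketched as follows: rank correlations in descending order and insert each edge only if the running graph stays planar, stopping at the planar bound of 3(n-2) edges. The input correlations below come from random data purely for illustration.

```python
# Sketch of the planar filtering step: rank correlations in descending
# order and add each edge only if the running graph stays planar,
# stopping at the planar-graph bound of 3(n - 2) edges. Correlations
# here come from random data purely for illustration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n = 20
returns = rng.normal(size=(250, n))           # stand-in "stock returns"
corr = np.corrcoef(returns, rowvar=False)

edges = sorted(((corr[i, j], i, j)
                for i in range(n) for j in range(i + 1, n)),
               reverse=True)

pmfg = nx.Graph()
pmfg.add_nodes_from(range(n))
for rho, i, j in edges:
    pmfg.add_edge(i, j, weight=rho)
    is_planar, _ = nx.check_planarity(pmfg)
    if not is_planar:
        pmfg.remove_edge(i, j)                # would break planarity
    if pmfg.number_of_edges() == 3 * (n - 2):
        break

print("filtered graph edges:", pmfg.number_of_edges(),
      "of planar bound", 3 * (n - 2))
```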
Multi-objective evolutionary algorithm for investigating the trade-off between pleiotropy and redundancy
Zhiyang Ong, Hao-Wei Lo, Matthew Berryman, et al.
The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper. The networks are optimized to reduce installation costs and propagation delays. Pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is described as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, and mutualism and commensalism) are discussed, and a future research agenda is outlined. Experimental results indicate that, as expected, the pleiotropy of servers in the optimum network does improve with evolving networks, whilst the redundancy of clients does not vary significantly. This is due to the controlled evolution of networks modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.
Scale-free networks in complex systems
M. Bartolozzi, D. B. Leinweber, T. Surungan, et al.
In the past few years, several studies have explored the topology of interactions in different complex systems. Areas of investigation span from biology to engineering, physics and the social sciences. Although having different microscopic dynamics, the results demonstrate that most systems under consideration tend to self-organize into structures that share common features. In particular, the networks of interaction are characterized by a power law distribution, P(k) ~ k^-α, in the number of connections per node, k, over several orders of magnitude. Networks that fulfill this property of scale-invariance are referred to as "scale-free". In the present work we explore the implications of scale-free topologies in the antiferromagnetic (AF) Ising model and in a stochastic model of opinion formation. In the first case we show that the implicit disorder and frustration lead to a spin-glass phase transition not observed for the AF Ising model on standard lattices. We further illustrate that the opinion formation model produces a coherent, turbulent-like dynamics for a certain range of parameters. The influence of random or targeted exclusion of nodes is also studied.
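As an illustration of the scale-free property, the sketch below generates a preferential-attachment network and inspects P(k) at doubling values of k; the paper's Ising and opinion dynamics on such networks are not shown.

```python
# Sketch: generate a scale-free network by preferential attachment and
# inspect its degree distribution, which follows P(k) ~ k^-alpha over
# a range of k (alpha ~ 3 for the Barabasi-Albert model).
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=10000, m=3, seed=7)
counts = collections.Counter(dict(G.degree()).values())
total = G.number_of_nodes()

for k in (3, 6, 12, 24, 48):
    pk = counts.get(k, 0) / total
    print(f"k={k:3d}  P(k)={pk:.4f}")
# P(k) dropping by roughly 2^3 per doubling of k indicates alpha ~ 3.
```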
Biological and Bio-Inspired Complex Systems
Multilevel modeling for inference of genetic regulatory networks
Shu-Kay Ng, Kui Wang, Geoffrey J. McLachlan
Time-course experiments with microarrays are often used to study dynamic biological systems and genetic regulatory networks (GRNs) that model how genes influence each other in cell-level development of organisms. The inference for GRNs provides important insights into fundamental biological processes such as growth, and is useful in disease diagnosis and genomic drug design. Due to the experimental design, multilevel data hierarchies are often present in time-course gene expression data. Most existing methods, however, ignore the dependency of the expression measurements over time and the correlation among gene expression profiles. Such independence assumptions are at odds with the nature of regulatory interactions and can result in overlooking certain important subject effects, leading to spurious inference about regulatory networks or mechanisms. In this paper, a multilevel mixed-effects model is adopted to incorporate data hierarchies in the analysis of time-course data, where temporal and subject effects are both assumed to be random. The method starts with the clustering of genes by fitting the mixture model within the multilevel random-effects model framework using the expectation-maximization (EM) algorithm. The network of regulatory interactions is then determined by searching for regulatory control elements (activators and inhibitors) shared by the clusters of co-expressed genes, based on a time-lagged correlation coefficient measure. The method is applied to two real time-course datasets from the budding yeast (Saccharomyces cerevisiae) genome. It is shown that the proposed method provides clusters of cell-cycle-regulated genes that are supported by existing gene function annotations, and hence enables inference on regulatory interactions for the genetic network.
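The time-lagged correlation measure can be sketched directly: correlate a candidate regulator's profile with a cluster-mean profile shifted by a lag, and look for a peak at a nonzero lag. The profiles below are synthetic stand-ins for the yeast time-course data.

```python
# Sketch of the time-lagged correlation measure used to link candidate
# regulators to clusters of co-expressed genes: correlate a regulator
# profile with a cluster-mean profile shifted by a lag. Profiles here
# are synthetic; in the paper they come from yeast time-course data.
import numpy as np

rng = np.random.default_rng(5)
T = 30
regulator = np.sin(np.linspace(0, 4 * np.pi, T)) + 0.2 * rng.normal(size=T)
# Target cluster responds two time points later, plus noise:
cluster_mean = np.roll(regulator, 2) + 0.2 * rng.normal(size=T)

def lagged_corr(x, y, lag):
    # Correlation of x(t) with y(t + lag), dropping edge points.
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(4):
    print(f"lag {lag}: r = {lagged_corr(regulator, cluster_mean, lag):.3f}")
# A peak at lag 2 flags the regulator as a (possibly activating) input.
```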
Automated sleep scoring and sleep apnea detection in children
David P. Baraglia, Matthew J. Berryman, Scott W. Coussens, et al.
This paper investigates the automated detection of a patient's breathing rate and heart rate from their skin conductivity, as well as sleep stage scoring and breathing event detection from their EEG. The software developed for these tasks is tested on data sets obtained from the sleep disorders unit at the Adelaide Women's and Children's Hospital. The sleep scoring and breathing event detection tasks used neural networks to achieve signal classification. The Fourier transform and the Higuchi fractal dimension were used to extract features for input to the neural network. The filtered skin conductivity appeared visually to bear a similarity to the breathing and heart rate signals, but a more detailed evaluation showed the relation was not consistent. Sleep stage classification was achieved with an accuracy of around 65%, with some stages being accurately scored and others poorly scored. The two breathing events, hypopnea and apnea, were scored with varying degrees of accuracy, the highest scores being around 75% and 30% respectively.
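As an example of the feature extraction, here is a sketch of the Higuchi fractal dimension: for each scale k the normalized curve length L(k) is averaged over offsets, and the slope of log L(k) against log(1/k) estimates the dimension. The test signals are synthetic, not EEG.

```python
# Sketch of the Higuchi fractal dimension, one of the features fed to
# the neural network. For each scale k, the normalized curve length
# L(k) is averaged over starting offsets; the slope of log L(k) versus
# log(1/k) estimates the fractal dimension.
import numpy as np

def higuchi_fd(x, kmax=10):
    N = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)           # subsampled series
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lengths.append(dist * norm / k)
        logs.append((np.log(1.0 / k), np.log(np.mean(lengths))))
    xs, ys = zip(*logs)
    slope, _ = np.polyfit(xs, ys, 1)           # FD is the slope
    return slope

rng = np.random.default_rng(0)
print("white noise, FD ~ 2:", round(higuchi_fd(rng.normal(size=2000)), 2))
print("smooth sine, FD ~ 1:",
      round(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))), 2))
```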