Proceedings Volume 7705

Modeling and Simulation for Defense Systems and Applications V

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 25 April 2010
Contents: 10 Sessions, 22 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2010
Volume Number: 7705

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7705
  • Physics-Based Simulations
  • Sensors and Sensor-Based Systems I
  • Sensors and Sensor-Based Systems II
  • Optical Components and Systems
  • Tools, Techniques, and VV&A
  • Defect Simulation and Detection
  • Distributed Systems
  • Warfighter and Operations
  • Poster Session
Front Matter: Volume 7705
This PDF file contains the front matter associated with SPIE Proceedings Volume 7705, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Physics-Based Simulations
CULA: hybrid GPU accelerated linear algebra routines
John R. Humphrey, Daniel K. Price, Kyle E. Spagnoli, et al.
The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of nearly 1 TFLOPS peak throughput at a cost similar to a high-end CPU, with an excellent FLOPS/watt ratio. High-level linear algebra operations are computationally intense, often requiring O(N^3) operations, and would seem a natural fit for the processing power of the GPU. Our work is on CULA, a GPU-accelerated implementation of linear algebra routines. We present results from factorizations such as LU decomposition, singular value decomposition, and QR decomposition, along with applications like system solution and least squares. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally.
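To make the hybrid idea concrete, the sketch below routes a dense solve to the CPU or to a GPU backend depending on problem size. It is a minimal illustration of the dispatch concept under assumed constants, not CULA's actual interface; the threshold value and the gpu_solve callable are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical crossover size; below it, GPU launch and transfer overhead
# tends to outweigh the arithmetic advantage. CULA's real heuristics differ.
GPU_THRESHOLD = 1024

def solve_hybrid(A, b, gpu_solve=None):
    """Solve Ax = b, keeping small systems on the CPU and offloading large
    ones to a GPU backend when one is supplied. `gpu_solve` is a placeholder
    callable, not CULA's actual interface."""
    n = A.shape[0]
    if gpu_solve is None or n < GPU_THRESHOLD:
        lu, piv = lu_factor(A)          # O(n^3) LU factorization on the CPU
        return lu_solve((lu, piv), b)
    return gpu_solve(A, b)              # strong parallelism pays off here

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))
b = rng.standard_normal(512)
x = solve_hybrid(A, b)
print(np.allclose(A @ x, b))            # True
```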
Novel high-fidelity realistic explosion damage simulation for urban environments
Xiaoqing Liu, Jacob Yadegar, Youding Zhu, et al.
Realistic building damage simulation plays a significant role in modern modeling and simulation systems, especially across the panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems achieves the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models accurately and realistically simulates explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic, scalable, multi-component object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler for clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system can realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects, including buildings, constructions, vehicles, and pedestrians, in clusters of sequential and parallel damage events.
Sensors and Sensor-Based Systems I
Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera
Qiang Zhang, Mark Mirotznik, Santiago Saldana, et al.
With a multi-lenslet camera, we can capture multiple low-resolution (LR) images of the same scene and use them to reconstruct a high-resolution (HR) image. For this purpose, two major computational problems must be solved: image registration and super-resolution (SR) reconstruction. For the first, one major hurdle is estimating spatially variant shifts: objects in a scene are often at different depths, and due to parallax, shifts between imaged objects often vary on a pixel basis. This poses a great computational challenge, as the problem is NP-complete. The multi-lenslet camera with a single focal plane provides a unique opportunity to take advantage of the parallax phenomenon and directly relate object depths to their shifts; we thus essentially reduce the parameter space from a two-dimensional (x, y) shift space to a one-dimensional depth space, greatly reducing the computational cost. As a result, not only are the LR images registered, but the estimated depth map can also be valuable for some applications. After registration, the LR images along with the estimated shifts can be used to reconstruct an HR image. A previously developed algorithm is employed to efficiently compute a large HR image of size 1024x1024.
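The sketch below illustrates the core idea of searching a one-dimensional depth space instead of a two-dimensional shift space: for each patch, each candidate depth implies a disparity, and the depth whose implied shift best matches the second sub-image wins. The pinhole-style disparity model and all constants are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def depth_map_from_lenslets(ref, other, baseline_px, depths, patch=8):
    """Register two lenslet sub-images by searching over candidate depths
    rather than 2D shifts: parallax makes the disparity of a patch a
    function of its depth (here disparity ~ baseline_px / depth, a
    pinhole-style approximation with illustrative constants)."""
    h, w = ref.shape
    dmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = ref[i:i + patch, j:j + patch]
            best_z, best_err = depths[0], np.inf
            for z in depths:
                s = int(round(baseline_px / z))   # shift implied by depth z
                if j + s + patch > w:
                    continue
                cand = other[i:i + patch, j + s:j + s + patch]
                err = np.sum((block - cand) ** 2)  # SSD matching cost
                if err < best_err:
                    best_z, best_err = z, err
            dmap[i // patch, j // patch] = best_z
    return dmap
```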
Sensors and Sensor-Based Systems II
Multispectral tactical integrated scene generation capability using satellite imagery
A multi-spectral tactical integrated scene generation capability using satellite terrain imagery is currently available using a synthetic predictive simulation code developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/RWGGS). This capability produces multi-spectral integrated scene imagery from the perspective of a sensor/seeker for an air-to-ground scenario using geo-referenced U.S. Geological Survey (USGS) Digital Terrain Elevation Data (DTED) and satellite terrain imagery. The produced imagery is spatially, spectrally, and temporally accurate. Using the surveillance flight path and viewing angle, this capability has been interfaced with Microsoft Virtual Earth to extract terrain data of interest at the needed background resolution.
PerSEval phase I: development of a 3D urban terrain model for evaluation of persistent surveillance sensors and video-based tracking algorithms
Dawne M. Deaver, Robin Kang, Vinh Tran, et al.
PerSEval is a modeling and simulation tool being developed for end-to-end evaluation of airborne persistent surveillance imaging sensor systems. This class of sensor systems is characterized by having a wide coverage area over an extended period of time and operating in either visible or thermal infrared wavebands. Current operational systems are heavily used by image analysts for tracking vehicles or dismounted personnel, with an emphasis in urban areas of interest. Future persistent surveillance systems will include automated ground target tracking algorithms to alleviate analyst workload. As a system evaluation tool, PerSEval will include dependencies on the scenario, platform, sensor, processing, and tracking algorithm. This paper describes the overall PerSEval architecture as well as the first phase of development which focuses on the creation of a three-dimensional urban terrain simulation appropriate for the evaluation of automated tracking algorithms.
Optical Components and Systems
Modeling multi-channel optical links using OptiSPICE for WDM systems
Pavan Gunupudi, Tom Smy, Jackson Klein, et al.
Building on a previously presented framework for a single-engine simulator (OptiSPICE), this paper presents models and techniques for modelling devices used in local area networks utilizing wavelength division multiplexing, single-mode fiber, and integrated electronics. The paper details time-domain models of the various elements that form optical links in such a system. Detailed models based on physical rate equations are presented for laser sources and electro-optic modulators. A single-mode fiber model based on the nonlinear Schrödinger equation, which includes multiple-channel effects, is presented. Finally, a model of an avalanche photo-diode using an electrical diode and a photo-current proportional to the optical intensity at the input is described. The final section of the paper presents results from a multi-channel optical link. The initial part of each channel comprises a laser source and driver, an optical gain/attenuation element, and an electro-optic modulator driven by a bit-stream generator. An optical multiplexing element then merges the optical channels and is connected to a single-mode fiber. At the end of the fiber, an optical splitter is used with optical filters to de-multiplex the optical signal, and finally an avalanche photo-diode and amplifier terminate each channel. These results demonstrate the successful simulation of multi-channel optical links using the presented optoelectronic simulation framework and models.
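The standard numerical treatment of the nonlinear Schrödinger equation in fiber modelling is the split-step Fourier method; the sketch below shows it in normalized units. It is a generic NLSE solver, not OptiSPICE's internal discretization, and the soliton check at the end uses assumed parameters.

```python
import numpy as np

def ssfm(A0, length, nsteps, beta2, gamma, dt):
    """Symmetric split-step Fourier solution of the scalar NLSE
        dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
    in normalized units: a half dispersion step in the frequency domain,
    a full nonlinear step in time, then another half dispersion step."""
    dz = length / nsteps
    w = 2 * np.pi * np.fft.fftfreq(A0.size, d=dt)      # angular frequency grid
    half_disp = np.exp(1j * (beta2 / 2) * w ** 2 * dz / 2)
    A = A0.astype(complex)
    for _ in range(nsteps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))     # dispersion, dz/2
        A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)  # Kerr nonlinearity, dz
        A = np.fft.ifft(half_disp * np.fft.fft(A))     # dispersion, dz/2
    return A

# Fundamental soliton check: with beta2 = -1, gamma = 1, a sech pulse
# should propagate with a (nearly) unchanged envelope.
t = np.linspace(-20, 20, 1024)
A_in = 1 / np.cosh(t)
A_out = ssfm(A_in, length=5.0, nsteps=2000, beta2=-1.0, gamma=1.0, dt=t[1] - t[0])
print(np.max(np.abs(np.abs(A_out) - np.abs(A_in))) < 0.05)
```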
Multi-transceiver simulation modules for free-space optical mobile ad hoc networks
This paper presents realistic simulation modules to assess characteristics of multi-transceiver free-space-optical (FSO) mobile ad-hoc networks. We start with a physical propagation model for FSO communications in the context of mobile ad-hoc networks (MANETs). We specifically focus on the drop in power of the light beam and the probability of error in the decoded signal due to a number of parameters (such as the separation between transmitter and receiver and the visibility of the propagation medium), comparing our results with well-known theoretical models. Then, we provide details on simulating multi-transceiver mobile wireless nodes in Network Simulator 2 (NS-2), realistic obstacles in the medium, and communication between directional optical transceivers. We introduce new structures at the lower layers of the networking protocol stack to deliver such functionality. Finally, we present findings from detailed modeling and simulation of FSO-MANETs regarding the effects of such directionality on higher layers of the networking stack.
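As a flavor of the kind of propagation model involved, the sketch below combines geometric beam spread, visibility-driven Beer-Lambert attenuation (via the empirical Kruse model), and a Q-factor bit-error-rate formula for on-off keying. All constants are illustrative assumptions, not the paper's calibrated values.

```python
from math import erfc, exp, sqrt

def kruse_attenuation(vis_km, wavelength_nm=1550.0, q=1.3):
    """Atmospheric attenuation coefficient (1/km) from visibility using the
    empirical Kruse model; the exponent q actually varies with visibility."""
    return (3.91 / vis_km) * (wavelength_nm / 550.0) ** (-q)

def received_power_w(pt_w, d_km, vis_km, divergence_mrad=2.0, rx_diam_m=0.08):
    """Transmit power reduced by geometric beam spread (linear divergence)
    and Beer-Lambert atmospheric loss. All constants are illustrative."""
    beam_diam_m = rx_diam_m + divergence_mrad * 1e-3 * d_km * 1e3
    geometric = min(1.0, (rx_diam_m / beam_diam_m) ** 2)
    return pt_w * geometric * exp(-kruse_attenuation(vis_km) * d_km)

def ook_ber(q_factor):
    """Bit error rate of on-off keying detection, BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * erfc(q_factor / sqrt(2))

# Power at 1 km in clear air (10 km visibility) vs. light fog (1 km visibility)
print(received_power_w(0.1, 1.0, 10.0), received_power_w(0.1, 1.0, 1.0))
```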
The effects of electron temperature in terahertz quantum cascade laser predictions
Philip Slingerland, Christopher Baird, Bryan Crompton, et al.
Quantum cascade lasers (QCLs) employ the mid- and far-infrared intersubband radiative transitions available in semiconducting heterostructures. Through the precise design and construction of these heterostructures, the laser characteristics and output frequencies can be controlled. When fabricated, QCLs offer a lightweight and portable alternative to traditional laser systems emitting in this frequency range. The successful operation of these devices depends strongly on the effects of electron transport. Studies have been conducted on the mechanisms involved in electron transport, and a prediction code for QCL simulation and design has been completed. The implemented approach utilizes a three-period simulation of the laser active region. All of the wavefunctions within the simulation are included in a self-consistent rate-equation model that employs all relevant types of scattering mechanisms within the three periods. Additionally, an energy balance equation is studied to determine the temperature of the electron distributions separately from the lattice temperature; this equation includes the influence of both electron-LO-phonon and electron-electron scattering. The effect of different modelling parameters on QCL electron temperature predictions is presented along with a description of the complete QCL prediction code.
Tools, Techniques, and VV&A
Analyzing the impact of data movement on GPU computations
Daniel K. Price, John R. Humphrey, Kyle E. Spagnoli, et al.
Recently, GPU computing has taken the scientific computing landscape by storm, fueled by the attractive nature of the massively parallel arithmetic hardware. When porting their code, researchers rely on a set of best practices that have developed over the few years that general-purpose GPU computing has been employed. This paper challenges a widely held belief that transfers to and from the GPU device must be minimized to achieve the best speedups over existing codes, presenting a case study on CULA, our library for dense linear algebra computation on the GPU. Topics discussed include the relationship between computation and transfer time for both synchronous and asynchronous transfers, as well as the impact that data allocations have on memory performance and overall solution time.
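A back-of-envelope model makes the computation-versus-transfer relationship concrete: for an LU factorization, compute cost grows as O(n^3) while PCIe traffic grows only as O(n^2), so transfers are amortized as problems grow. The throughput numbers below are illustrative placeholders, not measurements from the paper.

```python
def lu_times(n, gflops_gpu=300.0, gflops_cpu=40.0, pcie_gb_s=5.0,
             bytes_per_elem=8):
    """Crude timing model for an n x n LU factorization on CPU vs.
    GPU-with-transfers. Throughputs are assumed round numbers."""
    flops = (2.0 / 3.0) * n ** 3
    t_cpu = flops / (gflops_cpu * 1e9)
    t_xfer = 2 * n * n * bytes_per_elem / (pcie_gb_s * 1e9)  # to + from device
    t_gpu = flops / (gflops_gpu * 1e9) + t_xfer
    return t_cpu, t_gpu

for n in (256, 1024, 4096, 16384):
    t_cpu, t_gpu = lu_times(n)
    print(f"n={n:6d}  cpu={t_cpu:9.4f}s  gpu+xfer={t_gpu:9.4f}s")
```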
Risk-based verification, validation, and accreditation process
James N. Elele, Jeremy Smith
This paper presents a risk-based Verification, Validation, and Accreditation (VV&A) process for Models and Simulations (M&S). Recently, the emphasis for M&S used to support Department of Defense (DoD) acquisition has been on basing the level of resources allocated to establishing the credibility of the M&S on the risks associated with the decision the M&S supports. In addition, DoD VV&A regulations recommend tailoring the V&V process to allow efficient use of resources. One problem, however, is that no methodology is specified for such tailoring. The BMV&V has developed a risk-based process that tailors the VV&A activities based on risk. Our process incorporates MIL-STD 3022 for new M&S. For legacy M&S, the process starts by assessing the current risk level of the M&S based on its credibility attributes, defined through its Capability, Accuracy, and Usability, relative to the articulated Intended Use Statement (IUS). If the risk is low, the M&S is credible for the application, and no further V&V is required. If the risk is medium or high, the Accreditation Authority determines whether the M&S can be accepted as-is or whether the risk should be mitigated. If the Accreditation Authority is willing to accept the risks, a Conditional Accreditation is made. If the risks associated with using the M&S as-is are deemed too high to accept, a Risk Mitigation/Accreditation Plan is developed to guide the process. The implementation of the risk mitigation plan is finally documented in an Accreditation Support Package.
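The decision flow in the abstract reduces to a small branching rule; the sketch below encodes it directly. The labels paraphrase the paper's narrative and are not quotations from MIL-STD 3022 or the BMV&V process documents.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def accreditation_path(risk, authority_accepts_risk=False):
    """Walk the legacy-M&S decision flow described in the abstract."""
    if risk is Risk.LOW:
        return "Accredit: M&S is credible for the intended use; no further V&V"
    if authority_accepts_risk:
        return "Conditional accreditation: risk accepted as-is"
    return "Develop Risk Mitigation/Accreditation Plan; document in the ASP"

print(accreditation_path(Risk.MEDIUM, authority_accepts_risk=True))
```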
Using simulation and virtual machines to identify information assurance requirements
Sheila B. Banks, Martin R. Stytz
The US military is changing its philosophy, approach, and technologies used for warfare. In the process of achieving this vision for high-speed, highly mobile warfare, a number of issues must be addressed and solved; these issues are not addressed by commercial systems, because Department of Defense (DoD) Information Technology (IT) systems operate in an environment different from the commercial world. The differences arise from the scope and skill of attacks upon DoD systems, the interdependencies between DoD software systems used for network-centric warfare (NCW), and the need to rely upon commercial software components in virtually every DoD system. As a result, while NCW promises more effective and efficient means for employing DoD resources, it also increases the vulnerability and allure of DoD systems to cyber attack. A further challenge arises from the rapid changes in software and information assurance (IA) requirements and technologies over the course of a project. Therefore, four challenges must be addressed: determining how to specify the information assurance requirements for a DoD system, minimizing changes to commercial software, incorporating new system and IA requirements in a timely manner with minimal impact, and ensuring that interdependencies between systems do not result in cyber-attack vulnerabilities. In this paper, we address all four issues. In addition, the interdependencies and interconnections between systems indicate that the IA requirements for a system must consider two important facets of a system's IA defensive capabilities: the types of IA attacks that the system must repel, and the ability of the system to ensure that any IA attack that penetrates it is contained and does not spread. The IA requirements should be derived from threat assessments for the system as well as from the need to address the four requirements challenges outlined above. To address these issues, we developed a system architecture and acquisition approach designed to separate the system's IA capability requirements and development from the other system capability requirements, thereby allowing the IA capabilities to be developed rapidly and assessed separately from the other system capabilities. Simulation environments and technologies allow us to test and evaluate solutions to these issues while also ensuring that the system being tested and the solution are not exposed to real-world threats.
Defect Simulation and Detection
On development of a VLSI circuit for impact source identification in ceramic plates
Interest has been shown in the problem of real-time crack detection, crack extent measurement, and identification of the impact source causing the damage. A solution to the problem of impact source identification is presented using a signal processing technique employing piezoelectric sensors. To detect the crack and identify the source of the impact, a fuzzy logic approach (FLA) is suggested. Based on the FLA, a procedure to develop the rule base is given. The rules are implemented using a Hardware Description Language (HDL), Verilog. A procedure from Verilog to VLSI implementation is suggested, and FPGA implementation and testing of the suggested procedure are included. Problems for future work on developing VLSI circuits to measure crack extent and identify impact sources are given.
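To illustrate the fuzzy-inference pattern such a rule base follows, the sketch below classifies an impact from two signal features using triangular memberships, min for AND, and a max defuzzifier. The membership ranges, features, and rules are invented for illustration; the paper's actual rule base is developed for Verilog/FPGA implementation.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_impact(peak_amp, dominant_freq_khz):
    """Toy fuzzy rule base mapping two piezoelectric-signal features
    (normalized peak amplitude, dominant frequency) to an impact class."""
    low_amp = tri(peak_amp, 0.0, 0.2, 0.5)
    high_amp = tri(peak_amp, 0.4, 0.8, 1.0)
    low_f = tri(dominant_freq_khz, 0.0, 50.0, 150.0)
    high_f = tri(dominant_freq_khz, 100.0, 250.0, 400.0)
    rules = {
        "low-energy blunt impact": min(low_amp, low_f),
        "high-energy sharp impact": min(high_amp, high_f),
        "crack-extension event": min(high_amp, low_f),
    }
    return max(rules, key=rules.get)

print(classify_impact(0.7, 220.0))  # -> "high-energy sharp impact"
```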
Realistic and efficient 2D crack simulation
Jacob Yadegar, Xiaoqing Liu, Abhishek Singh
Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive, and runtime-efficient 2D crack/fracture simulation system that applies the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor-retrieval mechanism used for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors, including impact angle, impact energy, and material properties, are taken into account to produce the criteria for crack initiation, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. These parameters serve as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive: machine learning mechanisms can learn optimal values for the variables/parameters from prior benchmark data generated by offline physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted for various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system can realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation), with diverse potential in military and civil M&S applications such as training and mission planning.
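The sketch below shows the kind of recursive triangle bisection that yields the binary decomposition tree the abstract describes: each right triangle splits at its hypotenuse midpoint, which becomes the apex of the two children. It is a minimal refinement sketch in the spirit of Peano-Cesaro triangulation, omitting the neighbor bookkeeping needed for crack-driven splitting and merging.

```python
def bisect(triangle, depth, out):
    """Recursively split a right triangle (apex, a, b) at the midpoint of
    its hypotenuse ab; the midpoint becomes the apex of both children,
    producing a binary decomposition tree of congruent mesh elements."""
    apex, a, b = triangle
    if depth == 0:
        out.append(triangle)
        return
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    bisect((mid, apex, a), depth - 1, out)   # left child
    bisect((mid, b, apex), depth - 1, out)   # right child

elements = []
bisect(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), 4, elements)
print(len(elements))  # 2**4 = 16 sub-triangles
```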
Distributed Systems
Integrating botnet simulations with network centric warfare simulations
Martin R. Stytz, Sheila B. Banks
"Botnets," or "bot armies," are large groups of remotely controlled malicious software designed and operated in order to conduct attacks against government and civilian targets. Bot armies are one of the most serious security threats to networks and computer systems in operation today. Botnets are remotely operated by botmasters who can launch large-scale malicious network activity. While bot army activity has, to date, been largely limited to fraud, blackmail, and other criminal activity, their potential for causing large-scale damage to the entire internet and launching large-scale, coordinated attacks on government computers, networks, and data gathering operations has been underestimated. This paper will not discuss how to build bots but instead discuss ways to use simulation to address the threats they pose. This paper suggests means for addressing the need to provide botnet defense training based upon existing simulation environments and discusses the capabilities needed for training systems for botnet activities. In this paper we discuss botnet technologies and review the capabilities that underlie this threat to network, information, and computer security. The second section of the paper contains background information about bot armies and their foundational technologies. The third section contains a discussion of the techniques we developed for estimating botnet bandwidth consumption and our approach for simulating botnet activities. The fourth section contains a summary and suggestions for additional research.
Streaming video for distributed simulation
Steven G. Webster, Douglas J. Paul
Distributed simulation environments are increasingly using video to stimulate operational systems and their prototypical equivalents. Traditionally, this video has been synthesized and delivered by analog means to consuming software applications. Scene generators typically render to commodity video cards, generate out-of-band metadata, and convert their outputs to formats compatible with the stimulated systems. However, this approach becomes hardware-intensive as environment scale and distribution requirements grow. Streaming video technologies can be applied to uncouple video sources from their consumers, thereby enabling video channel quantities beyond the number of rendering hardware outputs. Moreover, metadata describing the video content can be multiplexed, ensuring temporal registration between video and its attribution. As an application of this approach, the Night Vision Image Generator (NVIG) has been extended and integrated with distribution architectures to deliver streaming video in virtual simulation environments. Video capture hardware emulation and application frame-buffer reads are considered for capturing rendered scenes. Video source-to-encoder bindings and content multiplexing are realized by combining third-party video codec, container, and transport implementations with original metadata encoders. Readily available commercial and open-source solutions are utilized for content distribution and demultiplexing to a variety of formats and clients. Connected and connectionless distribution approaches are discussed with respect to latency and reliability. Client-side scalability, latency, and initialization issues are addressed. Finally, the solution is applied to tactical system stimulus and training, showing the evolution from the analog to the streamed video approach.
Warfighter and Operations
Warfighter decision making performance analysis as an investment priority driver
David J. Thornley, David F. Dean, James C. Kirk
Estimating the relative value of alternative tactics, techniques, and procedures (TTPs) and information systems requires measures of the costs and benefits of each, and methods for combining and comparing those measures. The NATO Code of Best Practice for Command and Control Assessment explains that decision-making quality would ideally be assessed on outcomes. Lessons learned in practice can be assessed statistically to support this, but experimentation with alternative measures in live conflict is undesirable. To this end, the development of practical experimentation to parameterize effective constructive simulation and analytic modelling for system utility prediction is desirable. The Land Battlespace Systems Department of Dstl has modeled human development of situational awareness to support constructive simulation by empirically discovering how evidence is weighed according to circumstance, personality, training, and briefing. The human decision maker (DM) provides the backbone of the information-processing activity associated with military engagements because of the inherent uncertainty associated with combat operations. To develop methods for representing this process, in order to assess equipment and non-technological interventions such as training and TTPs, we are developing modular, timed, analytic, stochastic model components and instruments as part of a framework to support quantitative assessment of intelligence production and consumption methods in a decision-maker-centric mission space. In this paper, we formulate an abstraction of the human intelligence fusion process from the Defence Science and Technology Laboratory's (Dstl's) INCIDER model to include in our framework, and synthesize relevant cost and benefit characteristics.
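One simple way to represent "evidence weighed according to circumstance, personality, training and briefing" is a weighted log-odds update, sketched below. The per-source weights stand in for the empirically discovered effects the abstract mentions; this formulation is an illustrative abstraction, not the INCIDER model itself.

```python
import math

def fused_belief(prior_p, evidence_llrs, source_weights):
    """Combine a prior probability with weighted log-likelihood ratios from
    several intelligence sources; each weight discounts or amplifies how
    strongly the decision maker credits that source."""
    logit = math.log(prior_p / (1.0 - prior_p))
    for llr, w in zip(evidence_llrs, source_weights):
        logit += w * llr
    return 1.0 / (1.0 + math.exp(-logit))

# Two supporting reports and one weak contrary cue, unequally trusted
print(fused_belief(0.3, [1.2, 0.8, -0.4], [1.0, 0.6, 0.9]))
```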
Individual warfighter effectiveness and survivability in a CBRN threat environment
Roger Schleper, Chris Gaughan, Michael O. Kierzewski, et al.
The effort described in this paper attempts to advance the state of the art in modeling high-fidelity (hi-fi) dismounted infantry interactions with a realistic Chemical, Biological, Radiological, Nuclear (CBRN) hazard. There is limited CBRN Modeling & Simulation (M&S) capability for research, training, and doctrine development: although numerous ground and plume hazard simulations exist, few model the entire problem space. To this end, the following three hi-fi simulations were federated: 1) the Infantry Warrior Simulation (IWARS); 2) the Command, Control, and Communications Human Performance Model (C3HPM); and 3) the CBRN Simulation Suite, via High Level Architecture (HLA) using the Modeling Architecture for Technology, Research and EXperimentation (MATREX) architecture. The goal of this federation is to provide an integrated capability for analyzing CBRN sensors and Warfighter protective equipment in the context of a complex battlefield environment with dismounted infantry missions and tactics. IWARS provides representation of dismounted entities and their decisions and physical tasks in a battlefield environment. C3HPM provides task degradation data due to the presence of various CBRN threats and the wearing of CBRN protective equipment. The CBRN Sim Suite provides dynamic threat events and propagation, high-fidelity CBRN sensor representations with tactical message output, CBRN injury based on exposure dosage/concentration, and entity protection.
Simulating effectiveness of helicopter evasive manoeuvres to RPG attack
D. Anderson, D. G. Thomson
The vulnerability of helicopters to attack by ground troops using rocket-propelled grenades (RPGs) has been amply illustrated over the past decade. Given that an RPG is unguided and that it is infeasible to cover helicopters in thick armour, existing optical countermeasures are ineffective - the solution is to compute an evasive manoeuvre. In this paper, an RPG/helicopter engagement model is presented. Manoeuvre profiles are defined in the missile approach warning sensor camera image plane using a local maximum acceleration vector. Required control inputs are then computed using inverse simulation techniques. Assessments of platform survivability for several engagement scenarios are presented.
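Inverse simulation runs the usual flow backwards: specify the manoeuvre, then solve for the inputs that produce it. The point-mass fragment below shows the idea for a single quantity, recovering the bank angle behind a commanded lateral acceleration; the paper's rotorcraft inverse simulation solves for full control inputs against a far richer flight-dynamics model.

```python
import math

def bank_angle_deg(lateral_accel_ms2, g=9.81):
    """Bank angle required for a coordinated level turn that produces the
    commanded lateral acceleration: phi = atan(a_lat / g)."""
    return math.degrees(math.atan2(lateral_accel_ms2, g))

print(bank_angle_deg(9.81))  # a 1 g lateral demand requires a 45 degree bank
```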
SOA approach to battle command: simulation interoperability
Gregory Mayott, Mid Self, Gordon James Miller, et al.
NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieving seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct an SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (a sensor or sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate, and transmit a mission command for the blue-force unit(s) described in a mission order. JC3IEDM data objects and a software development kit (SDK) will be utilized as the basic data object definition for the implemented web services.
A simulation approach to a virtual base defense operating center
Keith Athmer, Chris Gaughan
The TRADOC Maneuver Support Center of Excellence (MSCoE) is the Army proponent for protection and, in turn, has the mission to support fixed-site protection issues. To this end, the Maneuver Support Battle Lab (MSBL) developed a Virtual Base Defense Operating Center (VBDOC) capability, initiated in support of the Force Protection Joint Experiment (FPJE), to examine data fusion enhancements and improvements to the Common Operating Picture (COP) display. Furthermore, BDOC Standard Operating Procedures (SOPs), Tactics, Techniques and Procedures (TTPs), and Unmanned Ground Vehicle (UGV) capabilities were examined in order to optimize manpower, reduce exposure of friendly personnel, and improve force protection. The Modeling and Simulation (M&S) architecture was especially important due to the cost of providing realistic environments, such as Chemical, Biological, Radiological, Nuclear (CBRN) hazards, and the limited availability of soldiers for experimentation. The VBDOC simulation architecture contains a force-on-force simulation, a CBRN simulation, a desktop UGV Advanced Concepts Research Tool (ACRT), and a sensor controller using the Distributed Interactive Simulation (DIS) protocol. This simulation architecture stimulated actual Command and Control (C2) systems, including the Joint Battlespace Command and Control System (JBC2S) and the Joint Warning and Reporting Network (JWARN). These C2 systems, along with video feeds from various sensors and unmanned vehicles, were used by Battle Captains and staffs for situational awareness of the battlefield while conducting the experiment. The VBDOC capability offers a controlled environment for studying fixed-site protection issues, such as future Concept of Operations (CONOPS)/TTP/SOP development and refinement, examining emerging concepts, and assessing specific technology capabilities.
Poster Session
A prioritization scheme of detector in intrusion detection model based on GA
Pei-li Qiao, Shuo Yuan, Jie Su
To speed algorithm convergence and avoid premature convergence, the theory of Uniform Design Sampling (UDS) is used to redesign the crossover operation of the Genetic Algorithm and to improve the similarity measure of the chromosome, which is correlated with detector redundancy. A new detector prioritization scheme is built by combining a partial searching strategy with a new method for evaluating redundancy data. Simulation experiments demonstrate that this scheme maintains the variety, efficiency, and sufficiency of the detectors. The scheme performs better in search speed and global optimization ability; the detection rate is increased and the false alarm rate is decreased to a certain degree.
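To illustrate the flavor of a uniform-design-based crossover, the sketch below evaluates offspring at uniformly spaced mixing coefficients between two parents and keeps the fittest. It is a simplified stand-in for the paper's UDS construction (which operates over chromosome positions); the fitness function and all parameters are illustrative assumptions.

```python
import numpy as np

def uds_crossover(parent1, parent2, fitness, n_points=5):
    """Crossover that samples candidate offspring at uniformly spaced
    mixing coefficients between two parent chromosomes and returns the
    fittest, echoing the even coverage that uniform design sampling
    brings to the crossover operator."""
    alphas = (np.arange(n_points) + 0.5) / n_points   # uniform grid on (0, 1)
    children = [a * parent1 + (1.0 - a) * parent2 for a in alphas]
    return max(children, key=fitness)

rng = np.random.default_rng(1)
p1, p2 = rng.random(8), rng.random(8)
# Toy fitness: detectors closest to the centre of the trait space win
best = uds_crossover(p1, p2, fitness=lambda c: -float(np.sum((c - 0.5) ** 2)))
print(best)
```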