Proceedings Volume 1483

Signal and Image Processing Systems Performance Evaluation, Simulation, and Modeling

Hatem N. Nasr, Michael E. Bazakos
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 July 1991
Contents: 3 Sessions, 22 Papers, 0 Presentations
Conference: Orlando '91, 1991
Volume Number: 1483

Table of Contents

  • Simulation
  • Modeling
  • Evaluation
Simulation
Generic modular imaging IR signal processor
John Eric Auborn, William R. Harris
A modular signal processor architecture suitable for many applications has been developed to meet real-time requirements and is adaptable to multiple uses. This generic modular architecture has been developed and demonstrated in real-time hardware for representative filters and a target detection algorithm. Computer-aided design tools were used throughout the hardware development, and an application-specific integrated circuit (ASIC) or other custom IC implementation could be used for actual production hardware.
Real-time architecture based on the image processing module family
Shigeru Kimura, Yoshiyuki Murakami, Hikaru Matsuda
A new real-time image-processing architecture for an electro-optical tracking system was developed and evaluated. In this architecture, the universal interface called Pipelined Data Interface (PDI) was defined, and this interface connects the function modules. The authors fabricated the equipment as a demonstration system, including an input section, SPU (support unit), and an image processing section, IPU (image processing unit). In the IPU various modules are realized. These modules include spatial filters, moving target detector, window thresholder, binary filters, shrink point finder, boundary tracer, feature extractor, ID (identification) filters, etc.
Application of the ProtoWare simulation testbed to the design and evaluation of advanced avionics
Daniel M. Bubb, Leo T. Wilson, John R. Stoltz
The authors describe ProtoWare, a system design methodology and supporting toolset used to develop robust tactical simulation systems. An example is provided of the use of ProtoWare in the development, testing, and evaluation of an avionics requirements simulation system. Comparisons are made to show the improvements achieved by considering various tracking, fusion, and sensor manager algorithm designs. Risk-reduction figures of merit are generated to assist the tactical system designer in controlling the risk of inserting a new, unproven technology.
Development of a fire and forget imaging infrared seeker missile simulation
Charles S. Hall, Anthony J. Alongi, Russ L. Fortner, et al.
The Missile-Enhanced Thermal Image Model (METIM) is a menu-driven computer model which simulates the performance of autonomous seeker missiles in battlefield situations. Its capacity to synthesize targets and backgrounds, account for natural and artificial obscurants, and model missile track and flight operations provides a flexible analysis tool for the evaluation of autonomous seeker missile hardware. This paper chronicles the development of the simulation, discusses the current status of the model, and presents future plans. Results from the model are shown.
Health monitoring of rocket engines using image processing
Peter J. Disimile, Bridget Shoe, Norman Toy
Analysis of spectral and video data for anomalous events occurring in the exhaust plume of the Space Shuttle Main Engine (SSME) has shown that the improved time resolution of video tape increases the detection rate of anomalies in the visual region. Preliminary developments and applications of image processing techniques are used to extract information from video data of the SSME exhaust plume. Images have been enhanced to show the exhaust plume shock structure and to isolate an anomalous event.
Adaptive morphological filter for image processing
Fulin Cheng, Anastasios N. Venetsanopoulos
A new opening operation (NOP) and a new closing operation (NCP) are introduced, and algorithms to compute the NOP and the NCP efficiently are developed. Based on the NOP and NCP algorithms, an adaptive morphological filter is constructed. The filter adapts to changes in the image features by finding the optimal shape of the local structuring element according to the local features of the image. The shape can be any connected shape of a given size, chosen to best preserve the local details of the image. In that way, the filter preserves any detail larger than or equal to the given mask size and removes those of smaller size. The local computational complexity of the filter changes according to the complexity of the local features of the image. Experiments have shown that the overall computational complexity of the proposed adaptive morphological filter is comparable to that of a nonadaptive one combining four 1-D structuring elements of the same size.
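The non-adaptive opening operation on which the NOP builds can be sketched in a few lines; this minimal 1-D example (NumPy, flat structuring element, invented test signal) is illustrative only and does not reproduce the authors' adaptive algorithm:

```python
import numpy as np

def erode(signal, size):
    """Flat-structuring-element erosion: local minimum over a window."""
    pad = size // 2
    padded = np.pad(signal, pad, mode='edge')
    return np.array([padded[i:i + size].min() for i in range(len(signal))])

def dilate(signal, size):
    """Flat-structuring-element dilation: local maximum over a window."""
    pad = size // 2
    padded = np.pad(signal, pad, mode='edge')
    return np.array([padded[i:i + size].max() for i in range(len(signal))])

def opening(signal, size):
    """Opening = erosion then dilation; removes bright details narrower
    than the structuring element while preserving wider ones."""
    return dilate(erode(signal, size), size)

# Invented test signal: a width-1 spike and a width-5 plateau.
x = np.array([0, 0, 5, 0, 0, 3, 3, 3, 3, 3, 0, 0])
y = opening(x, 3)
```

With a length-3 element the isolated one-pixel spike is removed while the five-pixel plateau survives; this detail-size selectivity is what the adaptive filter tunes locally by varying the element's shape.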
Modeling
Laboratory development of a nonlinear optical tracking filter
Kenneth L. Block, Ernest E. Whitworth Jr., Joseph E. Bergin
Experimental results are presented for a nonlinear optical pre-processing filter built and demonstrated to remove transient objects from images sent to a track processor. This filter could be used to enhance track maintenance of point source and small extended objects. Operationally, the nonlinear optical technique accommodates unwanted track image effects such as signal loss, scene saturation, and multiple false signals.
Dynamic end-to-end model testbed for IR detection algorithms
Frank J. Iannarilli Jr., Martin Ronald Wohlers
Aerodyne has recently developed an 'IRST Engagement Model' under contract for Lockheed Aeronautical Systems Company (LASC). The model's purpose is to simulate the performance of an IRST system in long-range air-to-air detection and tracking engagements. The hallmark of the model is its end-to-end first-principles modeling of all major elements which determine specific performance. The target aircraft IR signature, the atmospheric cloud and sky background, and associated atmospheric effects are modeled at high fidelity, thereby producing an input image matched to the specific IRST under study. A detailed deterministic model of the IRST accounts for optical and sensor effects, signal processing, and track association typical of first-generation IRSTs. These model elements are coupled together along with a dynamic target and observer (IRST) trajectories model so that an analyst can specify air-to-air engagements at various velocities, ranges, and viewing angles. The analyst can study the effects of varying IRST algorithms, sensor characteristics, optical bandpass, cloud background levels, atmospheric effects, and target performance characteristics as well as varying the target aircraft itself. This computer model was designed for portability and growth.
Adaptive optics, transfer loops modeling
Corinne Boyer, Jean-Paul Gaffard, Jean-Pierre Barrat, et al.
An adaptive optical system dedicated to high-resolution imaging can be modeled in terms of transfer loops. This model permits estimation of the response of such a system to time-varying wavefront perturbations. A block diagram describing the adaptive optics as a closed-loop system is given, and an analytical expression in the Z-transform domain is derived. In a second step, identification methods are used; the best estimates for the parameter values of the model are found using the Come_On project data as an example. Finally, the responses of the model to known perturbations are compared with the experimental data recorded during the Come_On experiments; the results are found to be in good agreement.
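As a rough illustration of the closed-loop view (not the authors' model), a single wavefront mode driven by an integrator controller with an assumed loop gain behaves as below; for this loop the residual transfer function in Z is E(z)/D(z) = (z-1)/(z-1+g):

```python
def simulate_loop(disturbance, gain):
    """One wavefront mode in a discrete-time correction loop: the
    wavefront sensor measures the residual each frame and the mirror
    command integrates a gain-weighted residual (invented gain value)."""
    correction = 0.0
    residuals = []
    for d in disturbance:
        residual = d - correction       # sensor measurement
        residuals.append(residual)
        correction += gain * residual   # integrator controller update
    return residuals

# Step disturbance of 1.0 held for 20 frames; residual decays as (1-g)^k.
res = simulate_loop([1.0] * 20, gain=0.5)
```

The geometric decay of the residual is the time-domain counterpart of the closed-loop pole at z = 1 - g; identification methods like those in the paper would fit such pole locations to measured loop data.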
Adaptive optical transfer function modeling
In a preceding paper the authors calculated the mean value of the Optical Transfer Function (OTF) for a high-resolution imaging instrument corrected by adaptive optics. The Come_On experiments carried out during 1990 show good correlation with the mathematical model. In the present paper the model described in the 1987 paper is revised to introduce modal control and to take into account the effects of time lags and/or angular depointings on the corrected OTF. The lags represent the time needed to perform the computations between phase measurements and corrections, and the angular depointings define the angular field over which the corrections remain valid. The model permits an evaluation of the OTF decay with respect to the Fried diameter. Numerical results are given using the modified model with the Come_On project specifications. This work was sponsored by the French Defense Ministry through the Direction des Recherches, Etudes et Techniques (DRET).
Wind tunnel model aircraft attitude and motion analysis
Hassan Mostafavi
A model-based approach is presented for estimating the position and pitch-roll-yaw attitude of a model aircraft in a vertical wind tunnel used to study aircraft spin characteristics. The video image sequence of the model during flight is digitally analyzed for detection and tracking of target markers installed on the model. The geometric model of these targets is represented by their coordinates relative to the model reference coordinates. The six degrees of freedom (six DOF), namely the pitch-roll-yaw attitude and the 3-D position, are estimated using the camera model and iterative minimization of the distance in the image plane between the target points and the projected model points. Using a prediction of the six DOF for the next video frame, the estimation procedure is repeated in a six DOF tracking loop that utilizes the dynamic as well as the geometric model of the target. The six DOF estimation and target tracking have been implemented on a real-time image processing workstation based on the Sun SPARC architecture. Interactive graphics and video image processing are used for initial designation of the approximate six DOF by the operator, followed by automatic tracking which dynamically displays a stick figure of the model aircraft on the grey-level video sequence.
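A much-simplified planar analogue of the pose estimation step can be sketched as a least-squares rigid alignment of model markers to observed image points; the closed-form SVD solution below stands in for the paper's iterative six DOF minimization, and the marker layout, rotation, and translation are invented:

```python
import numpy as np

def fit_rigid_2d(model_pts, image_pts):
    """Least-squares rigid transform (rotation + translation) aligning
    model marker coordinates to their observed image locations."""
    mc = model_pts.mean(axis=0)
    ic = image_pts.mean(axis=0)
    H = (model_pts - mc).T @ (image_pts - ic)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ic - R @ mc
    return R, t

# Hypothetical marker layout and a synthetic observation rotated 30 degrees.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
observed = model @ R_true.T + np.array([2.0, -1.0])
R_est, t_est = fit_rigid_2d(model, observed)
```

The full problem adds perspective projection and three more degrees of freedom, which is why the paper iterates on the image-plane reprojection error rather than solving in closed form.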
ATR performance modeling for building multiscenario adaptive systems
Hatem N. Nasr
Modeling Automatic Target Recognition (ATR) system performance is important for a number of reasons, chiefly because performance models can enhance the ability to predict ATR system performance in scenarios where data are not available. However, a critical use of ATR performance models that has not been explored until recently is the adaptation of the ATR system parameters. A system called Knowledge and Model-Based Algorithm Adaptation (KMBAA) has been developed in recent years for automatic ATR parameter adaptation. KMBAA has shown tremendous success in its ability to adapt ATR parameters and enhance ATR system performance. KMBAA relies heavily on the use of complex ATR performance models. These models relate a number of ATR performance measures, such as probability of detection, to a number of critical ATR parameters, such as bright thresholds, and image/scene metrics, such as target range. The models used in the KMBAA system, and the process of building such models, are discussed in this paper.
Information-theoretic approach to optimal quantization
Maximo Lorenzo, Sandor Z. Der, J. Russ Moulton Jr.
The authors offer transinformation maximization as the criterion for optimal signal quantization for most applications, in lieu of more conventional criteria such as the mean-square-error minimization first proposed by Max. Optimal quantization for signal transinformation (which reduces to entropy in the noise-free case) using a uniform digitizer and a companding gain function is considered, and specifically applied to the image acquisition problem for ATR (Automatic Target Recognizer) processing. Both pre- and post-gain noisy channels are examined under linear sensor gain (with extension to non-linear gains); maximum achievable entropy, transinformation, SNR, and minimum mean square error are computed for several typical input distributions. The authors establish the following arguments supporting Maximum Transinformation Acquisition (MTA, noisy case) and Maximum Entropy Acquisition (MEA, noise-free case): (1) MTA 'matches' the sensor/digitizer channel to the input signal in an information-theoretic sense, preserving as much analog information as possible. (2) MTA can be closely approximated with a less computationally intensive algorithm than actual transinformation. (3) Net information content is the crucial quantity governing a priori detection and recognition in cluttered environments. (4) MEA provides the best overall pixel intensity spread throughout the image, maximizing the probability of pixel differences where analog intensities differ; MTA provides very near optimal SNR and noise-free pixel contrasts. (5) MTA provides a standard, repeatable, and globally optimal acquisition method for extracting information for ATRs in the absence of a priori scene knowledge; MTA can also be applied locally to maximize information content within a region of interest.
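In the noise-free case, maximizing the entropy of the quantizer output reduces to equal-probability binning, so an MEA-style quantizer can be sketched by placing bin edges at quantiles of the input density (the exponential input below is an arbitrary example, not from the paper):

```python
import numpy as np

def max_entropy_edges(samples, levels):
    """Bin edges at equal-probability quantiles: each of the `levels`
    output codes becomes (approximately) equally likely, maximizing the
    entropy of the quantizer output in the noise-free case."""
    qs = np.linspace(0.0, 1.0, levels + 1)[1:-1]
    return np.quantile(samples, qs)

rng = np.random.default_rng(0)
signal = rng.exponential(scale=1.0, size=100_000)   # a skewed input density
edges = max_entropy_edges(signal, levels=8)
codes = np.digitize(signal, edges)                  # output codes 0..7

probs = np.bincount(codes, minlength=8) / len(signal)
entropy = -np.sum(probs * np.log2(probs))           # approaches log2(8) = 3
```

A uniform quantizer on the same skewed input would waste codes on rarely occupied intervals and achieve noticeably less than 3 bits of output entropy.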
Evaluation
Experimental comparison of optical binary phase-only filter and high-pass matched filter correlation
Kenneth G. Leib, Robert W. Brandstetter, Marvin D. Drake, et al.
An experimental comparison of two optical correlation systems in their ability to perform target detection, identification, and discrimination is described. A common set of four binary input images was used for the experiments, which included single and multiple target scenes. The set included symmetric and asymmetric objects. The two optical correlation systems compared were (a) a high-pass matched filter (HPMF) VanderLugt system and (b) a binary phase-only filter (BPOF) system. In the HPMF system, the input image and filter are film-based, with the filter somewhat Gaussian-apodized to achieve the high-pass characteristic. The BPOF system has one magneto-optical spatial light modulator as the input device and another as the filter device. The experimental measurements compared were (1) target auto- and cross-correlation, (2) auto- and cross-correlation for multiple target scenes, (3) spatial extent of the correlation peaks, and (4) sidelobe levels in multiple target scenes. In spite of the fundamental differences between the correlators compared (i.e., film vs. real-time image/filter), the use of binary imagery and high-pass filters in both cases gave comparable results in target detection, identification, and discrimination. Both the similarities and the differences are described and summarized.
Parametric analysis of target/decoy performance
This paper describes an analytical approach to the parametric analysis of target/decoy discrimination performance as a function of various controllable object characteristics. This analysis tool can be used to answer the question: how distinct in physical characteristics must a target and a decoy be before they can be easily discriminated? Three main characteristics of the objects are considered in this analysis: temperature, projected area, and rate of rotation. Assumed statistical models are given for these characteristics, described by a set of parameters including their first- and second-order moments. Based upon the statistical parameters and models for the object characteristics, a set of equations is derived to compute the mean and covariance of the optical signature as seen by a sensor for the decoy and target classes. An estimate of the classification performance between the classes is made using a function of a statistical distance measure. This estimate is used as a performance measure in a parameter trade-off analysis during an example decoy concept development process. While a purely analytical approach such as this lacks the fidelity of a sophisticated simulation model, it is computationally much simpler and is most appropriately applied during decoy concept development, before the application of more rigorous simulation-based analysis.
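The abstract does not name the statistical distance measure used; one common choice for Gaussian class models of this kind is the Bhattacharyya distance, sketched here with invented signature statistics (temperature, projected area) for the two classes:

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    """Bhattacharyya distance between two Gaussian class densities;
    larger values mean the classes are easier to discriminate."""
    S = (S1 + S2) / 2.0
    dmu = mu1 - mu2
    term1 = dmu @ np.linalg.solve(S, dmu) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

# Hypothetical 2-D signature statistics: (temperature K, projected area m^2).
mu_t = np.array([300.0, 4.0]); S_t = np.diag([25.0, 0.5])
mu_d = np.array([290.0, 3.0]); S_d = np.diag([25.0, 0.5])
d = bhattacharyya(mu_t, S_t, mu_d, S_d)
```

Sweeping the decoy's mean temperature or area toward the target's values drives the distance toward zero, which is exactly the kind of parameter trade-off curve the paper's analysis produces.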
Technique for ground/image truthing using a digital map to reduce the number of required measurements
Sandor Z. Der, G. John Dome, Gerald A. Rusche
The U.S. Army CECOM Center for Night Vision and Electro-Optics (C2NVEO) maintains a terrain board for the purpose of testing and evaluating Automatic Target Recognizers (ATRs). A combination ground truth and image truth procedure has been devised which minimizes the number of location measurements made, thus reducing set-up time. Reference points are ground truthed using lasers mounted on optical benches on the north and east edges of the terrain board. The program uses the measured ground and image location of two targets (reference points), combined with a digital map of the region in question, to calculate the camera azimuth, elevation, and roll angles. Thereafter the program calculates the ground location of any target given the image location using a simple ray trace procedure. Since the ground truthing of a target is much more time intensive than the image truthing, a great deal of time is saved if there are more than two targets. An elegant user interface allows the operator to display the current image, mark the image location of each target with a mouse, and have the program display the ranges to each target. This procedure has dramatically reduced the personnel required to conduct a test. The ray tracing procedure can also be applied to field testing with some modifications. In order to increase the maximum range at which targets can be viewed within the confines of the terrain room, a mirror has recently been added. The ground truthing procedure for the mirror is discussed.
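The ray-trace step can be illustrated with a flat-ground simplification (the actual procedure intersects rays with a digital map of the terrain board); the camera pose, pixel coordinates, and focal length below are invented:

```python
import numpy as np

def pixel_to_ground(cam_pos, R, pixel, focal):
    """Intersect the viewing ray of an image point with the ground
    plane z = 0. `R` rotates camera coordinates into world coordinates;
    the camera looks along its +z axis (pinhole model)."""
    ray_cam = np.array([pixel[0] / focal, pixel[1] / focal, 1.0])
    ray_world = R @ ray_cam
    s = -cam_pos[2] / ray_world[2]      # scale factor to reach z = 0
    return cam_pos + s * ray_world

# Invented nadir-looking camera 10 units above the origin.
cam = np.array([0.0, 0.0, 10.0])
R_down = np.array([[1.0, 0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0, 0.0, -1.0]])   # camera +z maps to world -z
p = pixel_to_ground(cam, R_down, pixel=(100.0, 0.0), focal=1000.0)
```

In the paper's setup the azimuth, elevation, and roll entering `R` are themselves solved from two ground-truthed reference points, after which every other target needs only its image location.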
Neural networks for ATR parameters adaptation
Hossien Amehdi, Hatem N. Nasr
The performance of complex signal processing systems such as Automated Target Recognition (ATR) systems can be dramatically improved by adjusting the system parameters in a dynamic fashion. One of the critical problems in ATR systems is their inability to adapt to changes in the scene and the environment. ATR parameters adaptation techniques have been the focus of many ATR researchers. In this paper a back-propagation neural network (NN) architecture for automatically adapting certain critical parameters in an ATR system is described. The NN uses as input certain image and scene descriptors called 'metrics.' The output of the NN is the suggested values of the ATR parameters. The authors show some preliminary results of their NN approach and discuss the trade-offs between that approach and alternative approaches.
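A minimal back-propagation network of the kind described, mapping scene metrics to a suggested parameter value, might look like the following; the two-metric-to-one-parameter mapping and all training data are synthetic stand-ins, not the authors' metrics or parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: two scene metrics in, one ATR parameter out.
# The target mapping is invented purely for illustration.
X = rng.uniform(-1, 1, size=(200, 2))
y = 0.5 * X[:, :1] - 0.3 * X[:, 1:] + 0.1

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    return h, h @ W2 + b2               # linear output layer

losses = []
lr = 0.1
for _ in range(500):
    h, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    g_out = 2 * err / len(X)            # dLoss/dout
    g_h = (g_out @ W2.T) * (1 - h ** 2)  # back-propagate through tanh
    W2 -= lr * h.T @ g_out
    b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(axis=0)
```

At run time the trained network would be evaluated on the current frame's metrics to emit suggested parameter values, replacing hand-tuned lookup tables.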
Performance evaluation of a texture-based segmentation algorithm
Texture segmentation is a crucial component of many remote sensing, scene analysis, and object recognition systems. However, very little attention has been paid to performance evaluation of the numerous algorithms that have been proposed by the image understanding community. In this paper, a particular algorithm is introduced and its performance is evaluated in a systematic manner on a wide range of scenes and scenarios. Both the algorithm and the methodology used in its evaluation have significance for numerous applications in the computer-based image understanding field.
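The paper's evaluation methodology is not detailed in the abstract; one common systematic measure for segmentation output is per-class intersection-over-union plus overall pixel accuracy against a ground-truth label image, sketched here with an invented 4x4 example:

```python
import numpy as np

def segmentation_scores(predicted, truth):
    """Per-class intersection-over-union and overall pixel accuracy
    of a predicted label image against ground truth."""
    classes = np.union1d(np.unique(truth), np.unique(predicted))
    ious = {}
    for c in classes:
        inter = np.logical_and(predicted == c, truth == c).sum()
        union = np.logical_or(predicted == c, truth == c).sum()
        ious[int(c)] = inter / union if union else float('nan')
    accuracy = (predicted == truth).mean()
    return ious, accuracy

# Invented two-texture ground truth with one mislabeled pixel.
truth = np.zeros((4, 4), dtype=int); truth[:, 2:] = 1
pred = truth.copy(); pred[0, 2] = 0
ious, acc = segmentation_scores(pred, truth)
```

Aggregating such scores over many scenes and scenarios gives the kind of systematic performance curves the paper argues the field needs.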
Computer-aided performance evaluation system for the on-board data compression system in HIRIS
Shen-en Qian, Ruqin Wang, Shuqiu Li, et al.
This paper describes a computer-aided performance evaluation system for testing and checking the on-board data compression system used in the High-Resolution Imaging Spectrometer (HIRIS). Earth resources spectra from the NASA earth resources spectral information system are taken as simulated spectra of ground objects, and these spectra are sampled and quantized according to the conditions and parameters of HIRIS. The simulated spectral data are then loaded into a GIFOV spectral data generator through a computer interface. In testing, the spectral data stored in the generator are sent to the data compression system at the same rate as the readout clock of the subdetectors in HIRIS. The data compression system compresses the input data in real time under the control of the input data clock. After compression, the results are returned to the computer through a bidirectional interface and compared with the original data. Finally, the computer outputs the test results. This test system can exactly simulate the original data obtained by HIRIS in space, and objectively evaluate the data compression system.
Evaluation of image tracker algorithms
William C. Marshall
This paper presents the results of applying a digital simulation of three conventional image-tracking algorithms to the task of tracking a synthetic target image in the presence of both moving background clutter and sensor noise. The background clutter model is based on two-dimensional correlated noise equations and is repeatable for each test. Fortran listings of the clutter algorithm are given. Centroid, correlation, and feature-matching algorithms were used for testing, with a simple criterion for adequacy of tracking. Results are shown with clutter signal-to-noise ratio (SNR) plotted against sensor SNR, illustrating the relative merit of each tracking algorithm.
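Two of the ingredients, repeatable correlated clutter and a centroid tracker, can be sketched as follows (in Python rather than the paper's Fortran; the seeded moving-average correlation model, frame sizes, and thresholds are invented):

```python
import numpy as np

def centroid_track(frame, threshold):
    """Intensity-weighted centroid of above-threshold pixels: the
    simplest of the three tracker types compared."""
    ys, xs = np.nonzero(frame > threshold)
    w = frame[ys, xs]
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())

# Repeatable correlated clutter: seeded white noise smoothed with a
# separable 5-point moving average.
rng = np.random.default_rng(2)
white = rng.normal(0.0, 1.0, (64, 64))
kernel = np.ones(5) / 5
clutter = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, white)
clutter = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, clutter)

frame = clutter.copy()
frame[30:33, 40:43] += 10.0            # synthetic 3x3 target blob
cy, cx = centroid_track(frame, threshold=5.0)
```

Seeding the generator makes every run reproduce the same clutter field, which is the property that lets each algorithm be scored against identical test conditions.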
Multisensor fusion methodologies compared
John Swan, Frank J. Shields
The fusion of multisensor data depends on a paradigm of evidence accumulation. Many methodologies exist to carry out this fusion process, but the advantages and problems of each are not well documented or quantified. A modeling tool has been developed for these processes. The tool allows a comparison of evidence accumulation algorithms such as Bayes nets and a fuzzy set concept, and allows variation of the number of sets of evidence accumulated as well as the correlation between these sets. The sets are compared using the same inputs, which can be interpreted as features, decisions, or evidence, and the performance differences between the paradigms are measured. A baseline fusion process is developed which takes into account the full covariance matrix of the data, and thus the correlation between the evidence sources. This baseline problem is solved completely and compared with the solutions of methods which implicitly or explicitly make assumptions about the interrelationship of data sources, such as Bayes, Dempster-Shafer, and fuzzy sets. The results are given in parametric form, which can be utilized to develop design decision tools for systems. The tool allows comparisons of methods and measures of bounding errors for the paradigms based on assumptions. An outline is given of the assumptions required for each method and how they affect the type of data required. The results of a sensitivity analysis, which show the significance of these results for evaluation methodologies, are discussed.
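One of the paradigms compared, Dempster-Shafer evidence combination, can be sketched for two sensor reports; the frame of discernment and mass values below are invented:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments
    over frozenset focal elements; the conflicting mass K is discarded
    and the remainder renormalized by 1 - K."""
    combined = {}
    conflict = 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sensor reports over the frame {target, decoy}.
T, D = frozenset({'target'}), frozenset({'decoy'})
TD = T | D
m1 = {T: 0.6, TD: 0.4}                 # sensor 1: partial commitment
m2 = {T: 0.5, D: 0.2, TD: 0.3}         # sensor 2: some conflicting mass
fused = dempster_combine(m1, m2)
```

Note that this rule implicitly assumes the two sources are independent; the paper's baseline with a full covariance matrix is precisely a way to test what such assumptions cost when the sources are correlated.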
Training image collection at CECOM's Center for Night Vision and Electro-Optics
Richard W. Harr
Imagery is a critical need for ATR development. Real-world scenes collected for ATR training and testing typically attempt to sample various target types, contrasts, backgrounds, depression angles, and ranges. Such data sets are limited by the variability of the real world. ATR development is greatly enhanced by a data set that avoids some of these real-world problems. The Terrain Board test facility of CECOM's Center for Night Vision and Electro-Optics (C2NVEO) has the resources to obtain such an image training set. As part of the Balanced Technology Initiative (BTI) program, an extensive digital imagery set was recently collected at the facility, controlling some of the variables of the real world and systematically varying others.