Proceedings Volume 4029

Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 July 2000
Contents: 9 Sessions, 44 Papers, 0 Presentations
Conference: AeroSense 2000
Volume Number: 4029

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • MIL Performance Estimation
  • Threat Sensing
  • System Performance Issues
  • Scene Estimation Technologies
  • Remote Sensing and Scene Dynamics
  • Calibration and Validation of Imaging Systems
  • Hyperspectral Sensing, Analysis, and Applications
  • Target-Hiding Technologies
  • Poster Session
MIL Performance Estimation
Perception testing for development of computer models of ground vehicle visual discrimination performance
This paper describes a series of large-scale perception experiments designed to collect human observer visual search and discrimination performance data for use in calibrating and validating computer models of visual acquisition of military ground vehicles. The first experiment provides data for development of models of color and luminance adaptation and contrast sensitivity to extract information needed to discriminate simple 2-D shapes, as a function of size, adaptation, blur, and contrast. The second experiment provides data for development of models of search and discrimination for simple 3-D shapes in cluttered backgrounds, as a function of size, clutter level, and facet contrast. The third experiment provides data for development of models of search and discrimination of military ground vehicles in natural settings. These stimuli include vehicles at close and far ranges, with and without cue feature suppression, with and without camouflage, and under clear, hazy, and dark conditions. Observer response test results show that the stimuli are uniformly distributed from very high to very low signatures. This paper also reports on insights for modeling visual discrimination.
Analysis and modeling of fixation point selection for visual search in cluttered backgrounds
Magnus Snorrason, James Hoffman, Harald Ruda
Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
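The two-map combination described in the abstract can be sketched in a few lines; the local-contrast cue, the 70/30 weighting, and the sampling scheme below are our illustrative assumptions, not the authors' calibrated model:

```python
import numpy as np

def local_contrast(img, k=3):
    """Bottom-up saliency cue: local standard deviation in a k x k window."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

def fixation_probability_map(img, bias, w_saliency=0.7):
    """Combine a bottom-up saliency map with a top-down cognitive bias map
    into a normalized fixation probability map (the weighting is an
    illustrative assumption)."""
    sal = local_contrast(img)
    sal = sal / (sal.max() + 1e-12)
    combined = w_saliency * sal + (1.0 - w_saliency) * bias
    return combined / combined.sum()

def sample_fixations(pmap, n, rng=None):
    """Stochastically draw n fixation points (row, col) from the map."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat = rng.choice(pmap.size, size=n, p=pmap.ravel())
    return np.column_stack(np.unravel_index(flat, pmap.shape))
```

Sampling independently per fixation, as here, ignores sequential effects such as fixation memory, which the following paper addresses.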
Calibration of a time-to-detection model using data from visual search experiments
Harald Ruda, James Hoffman, Magnus Snorrason
Using a model of visual search that predicts fixation probabilities for hard-to-see targets in naturalistic images, it is possible to stochastically generate fixation sequences and time to detection for targets in these images. The purpose of the current work is to calibrate some of the parameters of a time to detection model. In particular, this work is an attempt to elucidate the parameters of the proposed fixation memory model, the strength and decay parameters. The methods used to perform this calibration consist chiefly of comparison of the stochastic model with both experimental data and a theoretical analysis of a simplified scenario. The experimental data have been collected from ten observers performing a visual search experiment. During the experiment, eye fixations were tracked with an ISCAN infrared camera system. The visual search stimuli required fixation on target for detection (i.e. hard-to-detect stimuli). The experiment studied re-fixations of previously fixated targets, where the fixation memory failed. The theoretical analysis is based on a simplified scenario that parallels the experimental setup, with a fixed number, N, of equally probable objects. It is possible to derive analytical expressions for the re-fixation probability in this case. The results of the analysis can be used in three different ways: (1) to verify the implementation of the stochastic model, (2) to estimate the stochastic parameters of the model (i.e., number of fixation sequences to generate), and (3) to calibrate the fixation memory parameters by fitting the experimental data.
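The simplified scenario (N equally probable objects, a fixation memory with strength and decay parameters) lends itself to a small Monte Carlo sketch; the exponential-inhibition form and all parameter values here are our assumptions, not the calibrated model:

```python
import math
import random

def refixation_rate(n_objects, strength, decay, n_steps=10000, seed=1):
    """Monte Carlo sketch of the simplified scenario: n_objects equally
    probable fixation targets with a leaky fixation memory. 'strength'
    scales the inhibition of recently fixated objects and 'decay' is the
    per-step exponential decay of the memory trace (assumed forms)."""
    rng = random.Random(seed)
    memory = [0.0] * n_objects          # inhibition trace per object
    refix, prev = 0, None
    for _ in range(n_steps):
        # fixation probability is reduced by the memory trace
        weights = [math.exp(-strength * m) for m in memory]
        total = sum(weights)
        r, acc, pick = rng.random() * total, 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                pick = i
                break
        if pick == prev:                # immediate re-fixation event
            refix += 1
        prev = pick
        memory = [m * decay for m in memory]   # traces decay each step
        memory[pick] = 1.0                     # refresh trace on fixation
    return refix / n_steps
```

With `strength = 0` (no memory) the immediate re-fixation rate approaches the analytical value 1/N, which is one of the consistency checks the paper describes.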
Multiband E/O color fusion with consideration of noise and registration
Jonathon M. Schuler, J. Grant Howard, Penny R. Warren, et al.
Sensor fusion of up to three disparate imagers can readily be achieved by assigning each component video stream to a separate channel of any standard RGB color monitor, such as with television or personal computer systems. Provided the component imagery is pixel registered, such a straightforward system can provide improved object-background separation, yielding quantifiable human-factors performance improvement compared to viewing monochrome imagery from a single sensor. Consideration is given to appropriate dynamic range management of the available color gamut, and appropriate color saturation in the presence of imager noise.
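The basic channel-assignment scheme is easy to sketch; the percentile stretch used for dynamic range management below is a common choice and an assumption on our part, not necessarily the authors' mapping:

```python
import numpy as np

def stretch(band, lo_pct=1, hi_pct=99):
    """Percentile stretch: simple dynamic-range management so each sensor
    fills its channel without a few hot pixels dominating the gamut."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def rgb_fuse(band_r, band_g, band_b):
    """Assign three pixel-registered sensor bands to the R, G, and B
    channels of a standard color display."""
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
```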
Real-time color fusion of E/O sensors with PC-based COTS hardware
J. Grant Howard, Penny R. Warren, Richard Klien, et al.
Increases in the power of personal computers and the availability of infrared focal plane array cameras allow new options in the development of real-time color fusion systems for human visualization. This paper describes on-going development of an inexpensive, real-time PC-based infrared color visualization system. The hardware used in the system is all COTS, making it relatively inexpensive to maintain and modify. It consists of a dual Pentium II PC, with fast digital storage and up to five PCI frame-grabber cards. The frame-grabber cards allow data to be selected from RS-170 (analog) or RS-422 (digital) cameras. Software allows the system configuration to be changed on the fly, so cameras can be swapped at will and new cameras can be added to the system in a matter of minutes. The software running on the system reads up to five separate images from the frame-grabber cards. These images are then digitally registered using a rubber-sheeting algorithm to reshape and shift the images. The registered data, from two or three cameras, is then processed by the selected fusion algorithm to produce a color-fused image, which is then displayed in real time. The real-time capability of this system allows interactive laboratory testing of issues such as band selection, fusion algorithm optimization, and visualization trade-offs.
Autonomous stereo object tracking using motion estimation and JTC
Jae-Soo Lee, Jung-Hwan Ko, Kyu-Tae Kim, et al.
A stereo vision system presents a scene in 3-D using left and right views. When the left and right viewpoints are not in accord with each other, the display fatigues the viewer's eyes and prevents the 3-D sensation. It is also difficult to track moving objects that are not in the middle of the screen. Therefore, the object tracking function of a stereo vision system is to keep the tracked object in the middle of the screen while controlling the convergence angle for moving objects in the input images of the left/right cameras. In this paper, an object tracker for stereo vision is presented that tracks moving objects by using a block matching algorithm (motion estimation) as preprocessing and a JTC.
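The block matching (motion estimation) preprocessing step can be sketched as an exhaustive SAD search; block size and search radius below are illustrative choices:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive-search block matching with a sum-of-absolute-differences
    (SAD) criterion: for each block of the previous frame, find the
    displacement in the current frame that matches it best."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(int)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue     # candidate window falls off the frame
                    cand = curr[y:y + block, x:x + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors
```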
Sensor fusion: a preattentive vision approach
Because different imaging sensors provide different signature cues to distinguish targets from backgrounds, there has been a substantial amount of effort put into how to merge the information from different sensors. Unfortunately, when the imagery from two different sensors is combined, the noise from each sensor is also combined in the resultant image. Additionally, attempts to enhance the target distinctness from the background also enhance the distinctness of false targets and clutter. Even so, there has been some progress in trying to mimic the human vision capability by color contrast enhancement. What has not been tried is mimicking how the human visual system inherently fuses the outputs of our color cone sensors. We do our sensor fusion in the pre-attentive phase of human vision. This requires the use of binocular stereo vision, because we have two eyes. In human vision the images from each eye are split in half, and the halves are sent to opposite sides of the brain for massively parallel processing. We do not know exactly how this process works, but the result is a visualization of the world that is 3D in nature. This process automatically combines the color, texture, size, and shape of the objects that make up the two images that our eyes produce. It significantly reduces noise and clutter in our visualization of the world. In this pre-attentive phase of human vision, which takes just an instant to accomplish, the human vision process has performed an extremely efficient fusion of cone imagery. This sensor fusion process produces a scene in which depth perception and surface contour cues are used to orient and distinguish objects in the scene before us. It is at this stage that we begin to attentively sort through the scene for objects or targets of interest. In many cases, however, the targets of interest have already been located because of their depth or surface contour cues.
Camouflaged targets that blend perfectly into complex backgrounds may be made to pop out because of their depth cues. In this paper we will describe a new method termed RGB stereo sensor fusion that uses color coding of the separate pairs of sensor images fused to produce wide baseline stereo images that are displayed to observers for search and target acquisition. Performance enhancements for the technique are given as well as rationale for optimum color code selection. One important finding was that different colors (RGB) and different spatial frequencies are fused with different efficiencies by the binocular vision system.
Threat Sensing
Detection and classification of infrared decoys and small targets in a sea background
A combination of algorithms has been developed for the detection, tracking, and classification of targets at sea. In a flexible software setup, different methods of preprocessing and detection can be chosen for the processing of infrared and visible-light images. Two projects in which the software is used are discussed. In the SURFER project, the algorithms are used for the detection and classification of small targets, e.g., swimmers, dinghies, speedboats, and floating mines. Different detection methods are applied to recorded data. We present a method that describes the background by fitting continuous functions to the data, and show that this provides a better separation between objects and clutter. The detection of targets using electro-optical systems is one part of this project, in which algorithms for the fusion of electro-optical data with radar data are also being developed. In the second project, a simple infrared image seeker has been built that is used to test the effectiveness of infrared decoys launched from a ship. In a more complicated image-seeker algorithm, features such as contrast, size, and characterization of the trajectory are used to differentiate between the ship, infrared decoys, and false alarms resulting from clutter. In this paper, results for the detection of small targets in a sea background are shown for a number of detection methods. Further, a description is given of the imaging-seeker simulator, and some results of the imaging-seeker software applied to simulated and recorded data are shown.
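The background-fitting idea can be illustrated with a row-wise polynomial fit; the per-row quadratic model and the global threshold are our simplifications, not the authors' continuous functions:

```python
import numpy as np

def detect_by_background_fit(img, deg=2, k_sigma=5.0):
    """Sketch of background-fitting detection: model each image row of the
    smooth sea background with a low-order polynomial, subtract it, and
    threshold the residual; small targets stand out as large residuals."""
    resid = np.empty_like(img, dtype=float)
    x = np.arange(img.shape[1])
    for i, row in enumerate(img.astype(float)):
        coeffs = np.polyfit(x, row, deg)
        resid[i] = row - np.polyval(coeffs, x)
    thresh = k_sigma * resid.std()
    return resid > thresh
```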
FFT-descriptors for shape recognition of military vehicles
Andreas Wimmer, Georg S. Ruppert, Oliver Sidla, et al.
An accurate method to detect and classify military vehicles based on the recognition of shapes is presented in this work. FFT-Descriptors are used to generate a scale-, translation-, and rotation-invariant characterization of the shape of such an object. By interpreting the boundary pixels of an object as complex numbers, it is possible to calculate an FFT-Descriptor based on the spectrum of a fast Fourier transform of these numbers. It is shown that by using this characterization it is possible to match such representations with models in a database of known vehicles and thereby obtain a highly robust and fault-tolerant object classification. By selecting a specific number of components of an FFT-Descriptor, the classification process can be tailored to different needs of recognition accuracy, allowed shape deviation, and classification speed.
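The descriptor construction described above (boundary pixels as complex numbers, FFT, invariance by normalization) is the classical Fourier-descriptor recipe and can be sketched directly; the number of retained coefficients is an illustrative choice:

```python
import numpy as np

def fft_descriptor(boundary, n_coeffs=16):
    """Scale/translation/rotation-invariant shape descriptor: treat each
    boundary pixel (x, y) as the complex number x + iy, take the FFT, and
    normalize. Dropping the DC term removes translation, dividing by
    |F[1]| removes scale, and keeping only magnitudes removes rotation
    and starting-point dependence."""
    z = boundary[:, 0] + 1j * boundary[:, 1]
    mags = np.abs(np.fft.fft(z))
    # skip F[0] (translation); normalize by |F[1]| (scale)
    return mags[1:n_coeffs + 1] / (mags[1] + 1e-12)
```

Matching against a database then reduces to comparing these fixed-length vectors, e.g. by Euclidean distance, which is what makes the classification robust to pose.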
Model-based target and background characterization
Markus Mueller, Wolfgang Krueger, Norbert Heinze
Up to now, most approaches to target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms, the main goal is the optimization of certain performance parameters. These parameters are measured during test runs while applying one algorithm with one parameter set to images that consist of image domains with very different characteristics (targets and various types of background clutter). Model-based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectation) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The detection algorithms for ROIs utilize gradient direction models that have to be matched with transformed image domain data. In most cases, simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS database. Apart from geocoding, the algorithms can also be used for image-to-image registration (multisensor and data fusion) and may be used for the creation and validation of geographical maps.
Tactical midinfrared testbed
A new tactical airborne multicolor missile warning testbed was developed and fielded as part of an Air Force Research Laboratory (AFRL) initiative focusing on clutter and missile signature measurements for algorithm development. Multicolor discrimination is one of the most effective ways of improving the performance of infrared missile warning sensors, particularly for heavy clutter situations. Its utility has been demonstrated in fielded scanning sensors. Normally, multicolor discrimination is performed in the mid-infrared, 3-5 micrometers band, where the molecular emission of CO and CO2 characteristic of a combustion process is readily distinguished from the continuum of a black body radiator. Current infrared warning sensor development is focused on staring mosaic detector arrays that provide much higher frame rates than scanning systems in a more compact and mechanically simpler package. This, in turn, has required that multicolor clutter data be collected for both analysis and algorithm development. The developed sensor testbed is a 256x256 InSb sensor with an optimized two color filter wheel integrated with the optics. The collection portion includes a ruggedized parallel array processor and fast disk array capable of real-time processing and collection of up to 350 full frames per second. This configuration allowed the collection and real-time processing of temporally correlated, radiometrically calibrated data in two spectral bands that was compared to background and target imagery taken previously. The current data collections were taken from a modified Piper light aircraft at medium and low altitudes of background, battlefield clutter, and shoulder-fired missile signatures during August 1999.
Correlated two-color midinfrared background characteristics
Multicolor discrimination techniques provide a useful approach to suppressing background clutter and reducing false alarm rates in warning sensors. To assess discrimination performance, it is necessary to understand the statistics of each band as well as inter-band correlations. This paper describes the background measurements from an airborne platform collected using a two-color prototype staring missile-warning sensor. The sensor is a commercial 256x256 InSb camera with a filter wheel integrated into a 90 deg by 90 deg optic. The two colors lie in the carbon dioxide red spike region and in the window region below 4 micrometers. These bands are useful for detecting the combustion of hydrocarbons in the presence of background clutter. The sensor looks straight down from the aircraft and data is collected at frame rates from 10 to 100 Hz. Extensive background data has been collected over a wide range of scenes representing industrial, urban, rural, mountainous, and shoreline terrain. The data has been analyzed to provide correlated statistics of these spectral bands for both the underlying background structure and discrete false alarm sources. This data provides a basis for estimating the performance of spectral discrimination and optimizing processing algorithms for the suppression of clutter and rejection of false alarms.
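A minimal version of the two-color discriminant can be sketched as a band-ratio test; the ratio statistic and threshold below are our assumptions, not the sensor's actual processing chain:

```python
import numpy as np

def spectral_contrast(band_redspike, band_window, k_sigma=4.0):
    """Illustrative two-color discriminant: a hot CO2 'red spike' source
    has a band ratio well above that of solar-heated clutter, so flag
    pixels whose ratio is a statistical outlier against the background."""
    ratio = band_redspike / (band_window + 1e-12)
    mu, sigma = ratio.mean(), ratio.std()
    return (ratio - mu) / (sigma + 1e-12) > k_sigma
```

In practice the threshold would be derived from the measured inter-band correlation statistics the paper reports, rather than from global image moments as here.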
System Performance Issues
New method of point-target detection for SAR image
Peng Wan, JianGuo Wang, Zhiqin Zhao, et al.
A new method of reducing speckle noise in SAR (synthetic aperture radar) images is proposed, which combines an enhanced wavelet transform with a self-adaptive Wiener filter based on SAR image scene heterogeneity. It better preserves clutter edges and point targets. Several probability density functions (pdf) are analyzed after speckle de-noising, and target detection is studied. A new target detection method and its implementation are proposed. The validity of the method is tested by experiments on SIR-C/X HH SAR images.
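The local-statistics adaptive Wiener filter at the heart of such schemes can be sketched as follows; this is the textbook variant, not the authors' heterogeneity-based one:

```python
import numpy as np

def adaptive_wiener(img, k=3, noise_var=None):
    """Local-statistics adaptive Wiener filter: smooth heavily where the
    local variance is near the noise level (homogeneous clutter), lightly
    where it is high (edges and point targets), which is why the filter
    preserves those features."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    mean = np.empty((h, w))
    var = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + k, j:j + k]
            mean[i, j] = win.mean()
            var[i, j] = win.var()
    if noise_var is None:
        noise_var = var.mean()          # crude noise estimate (assumption)
    gain = np.maximum(var - noise_var, 0.0) / (var + 1e-12)
    return mean + gain * (img - mean)
```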
Edge extraction of small reflection target in SAR image
Peng Wan, JianGuo Wang, Zhiqin Zhao, et al.
An edge synthesized extraction method for small reflection targets in SAR (synthetic aperture radar) images is proposed. A wavelet de-noising method is derived based on SAR image clutter heterogeneity. SAR image segmentation thresholds are calculated based on a minimum-error criterion. The target of interest is obtained after several segmentations. The SAR image target edge is then extracted by morphological operations. The validity of this method is tested on real SAR images.
Complex HRR range signatures
Junshui Ma, Stanley C. Ahalt
The need to automatically identify moving targets is becoming increasingly important in modern battlefields. However, Synthetic Aperture Radar (SAR) is problematic when applied to moving-target scenarios because moving targets tend to smear SAR images. High-Range Resolution (HRR) Radar has, consequently, attracted more attention due to its potential performance in moving target identification. However, devising reliable identification techniques using HRR signatures is challenging because the signatures are extremely sensitive to radar aspect angles, primarily because of scintillation. This aspect sensitivity causes the HRR signatures to exhibit irregular behavior that makes extracting robust target features a challenge. As a result, HRR applications tend to base their processing on the magnitude of complex HRR signatures. We argue that insightful feature selection should be based on a detailed understanding of the properties of the complex signatures. In this paper we focus on studying the fundamental behavior of complex HRR signatures that are generated from a representative HRR model. Our analysis focuses on (1) scintillation effects; (2) the relationship between HRR signatures and aspect angle, and (3) the utility of the phase of complex HRR signatures. In this paper we present a number of observations concerning the redundancy of phase information, the variance of HRR signatures as a function of aspect angle, and the relationship between scattering coefficients and scatterer locations.
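A representative point-scatterer HRR model of the kind discussed can be sketched with a stepped-frequency sum and an inverse FFT; the bandwidth, bin count, and scatterer model are our illustrative assumptions:

```python
import numpy as np

def hrr_signature(ranges, amps, n_bins=64, bandwidth=5e8, c=3e8):
    """Point-scatterer HRR sketch: the frequency response of a set of
    scatterers is summed across stepped frequencies, and an IFFT yields
    the complex range profile. Small aspect changes alter the relative
    scatterer ranges, which is what makes the profiles aspect-sensitive."""
    freqs = np.arange(n_bins) * (bandwidth / n_bins)
    # two-way phase delay for each scatterer at each frequency
    H = np.array([np.sum(amps * np.exp(-1j * 4 * np.pi * f * ranges / c))
                  for f in freqs])
    return np.fft.ifft(H)          # complex range profile

# Range resolution is c / (2 * bandwidth); a scatterer at range r falls
# in bin r / resolution (here, resolution = 0.3 m).
```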
Scene Estimation Technologies
Round robin of painted targets BRDF measurements
Several French research laboratories have set up goniometers allowing BRDF measurements at different laser wavelengths in the infrared. At the initiative of the Delegation Generale de l'Armement (DGA/STTC), a round-robin set of painted-target BRDF measurements was undertaken, under ONERA's expertise. The laboratories participating in this round robin were Aerospatiale Matra CCR Suresnes, the IPN SMA-Virgo Lyon, the Institut Fresnel Marseille, and the CEA DAM CESTA Le Barp. The goniometers of the four laboratories are first described. The targets studied are seven 5-cm-diameter painted disks of aluminum or steel, a Spectralon reference sample, and a sandpaper sample. We first demonstrated that the pollution of painted targets with dust has a very weak influence on the BRDF. Before and after each measurement series, the directional-hemispherical reflectance of the samples was measured at ONERA. The measurements were carried out according to a protocol specifying the sample position and laser probe size. The wavelengths chosen for the inter-comparison are 1.064 and 10.6 micrometers. For both wavelengths, the characteristics of the different goniometers are compared in terms of noise and repeatability. The differences between the painted-target BRDFs measured with the various devices are relatively limited at 1.06 micrometers, and mainly induced by speckle. More important differences are obtained at 10.6 micrometers, particularly for a BRDF measurement device using an absolute calibration method. To explain these differences, a few hypotheses are advanced. Information on the absolute accuracy is obtained by comparing the measured directional-hemispherical reflectance with the one computed from BRDF measurements.
IR field reflectometer (EMIR): first results
Christian Hamel, Jean-Francois Millot, Alain Janest
The EMIR IR reflectometer is used for laboratory and field measurements of the average reflection factor of natural and man-made samples in the atmospheric IR windows. The data collected will be used to improve target and background databases for more realistic IR scene generation.
Optical characterization of volume-scattering backgrounds
Scattering media act in many situations as backgrounds in target recognition and remote sensing, and an accurate method for their characterization is highly desirable. The use of light sources with short temporal coherence provides the depth resolution needed for this purpose. Low-coherence interferometry has long been used as a filter to suppress multiple light scattering and preserve the single scattering characterized by well defined scattering angles and polarization properties. Recently, low-coherence interferometry was successfully applied to the multiple light scattering regime. The signal obtained from such a measurement relates directly to the optical path-length distribution of the backscattered light and, therefore, comprehensively characterizes the scattering system. The path-length-resolved backscattering defines the scattering properties of the medium, and its shape has distinct features for the single- and multiple-scattering regimes. In our experiments, the path-length domain is sampled with a resolution equivalent to 30 fs in conventional time-of-flight measurements. We show that the transition domain between single and multiple scattering can be fully characterized using this methodology and that single-scattering information can be successfully retrieved even in the presence of a strong multiple-scattering component.
Development and characterization of a 3D high-resolution terrain database
Aaron Wilkosz, Bryan L. Williams, Steve Motz
A top-level description of methods used to generate elements of a high resolution 3D characterization database is presented. The database elements are defined as ground plane elevation map, vegetation height elevation map, material classification map, discrete man-made object map, and temperature radiance map. The paper will cover data collection by means of aerial photography, techniques of soft photogrammetry used to derive the elevation data, and the methodology followed to generate the material classification map. The discussion will feature the development of the database elements covering Fort Greely, Alaska. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems.
Automatic temperature computation for realistic IR simulation
Alain Le Goff, Philippe Kersaudy, Jean Latger, et al.
Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software, which accurately takes into account the material thermal attributes of three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of the time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, for each half an hour, druing 24 hours before the time of the simulation. For each polygon, incident fluxes are compsed of: direct solar fluxes, sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes are associated to several layers, such as conductivity, absorption, spectral emissivity, density, specific heat, thickness and convection coefficients are taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetation natural materials (woods). This model of thermal attributes induces a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MUET consists of an efficient ray tracer allowing to compute the history (over 24 hours) of the shadowed parts of the 3D scene and a library, responsible for the thermal computations. The great originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received in each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. By the way, this library supplies other thermal modules such as a thermal shows computation tool.
Ground target infrared signature modeling with the multiservice electro-optic signature (MuSES) code
Jeffrey S. Sanders, Keith R. Johnson, Allen R. Curran, et al.
With an increased reliance on modeling and simulation in the defense community a requirement has developed for improved ground target infrared signature prediction capabilities. Predictive ground target infrared signature modeling has traditionally been done using the Physically Reasonable Infrared Signature Model (PRISM). The PRISM code has been used extensively in support of signature management for vehicle designers as well as other applications. The intended replacement for PRISM, the Multi-Service Electro-optic Signature (MuSES) code, has recently been developed and offers increased capabilities and ease of use. Until recently, IR/thermal signature analysis suffered from a disparity between the geometry required to predict signatures and the geometry used to design vehicles. The solution to the IR geometry problem was the development of MuSES, which uses meshed CAD geometry. MuSES is a rapid prototyping thermal design tool and an infrared signature prediction tool. To restore modularity lost over ten years of PRISM evolution, a new object-oriented thermal solver was created. The solver incorporates numerous advanced features including a net enclosure method for radiation, CFD interface, restart/seed capability, batch mode, and alternate solution strategies (such as the partial direct solution method). The MuSES interface is optimized for engineers/analysts who need to incorporate signature management treatments or heat management solutions into vehicle designs. Topics covered by this paper include a detailed description of the MuSES code and its capabilities, as well as multiple examples of model creation. The geometry modeling paradigm for the MuSES code represents a radical shift in how a vehicle model is created for the purpose of infrared signature modeling. The model creation examples are presented to demonstrate the tools and techniques used as well as to convey lessons learned to potential users in proper geometry modeling and meshing techniques.
Characterization techniques for incorporating backgrounds into DIRSIG
The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms requires well characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of the expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of the algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and utilizing target-building techniques for them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
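The DEM-to-TIN step can be sketched by splitting every grid cell into two triangles; the function and parameter names here are ours, not DIRSIG's construct types:

```python
import numpy as np

def dem_to_tin(dem, spacing=1.0):
    """Turn a regular DEM grid into a TIN: each grid cell becomes two
    triangles over the grid's (x, y, z) vertices."""
    h, w = dem.shape
    xs, ys = np.meshgrid(np.arange(w) * spacing, np.arange(h) * spacing)
    vertices = np.column_stack([xs.ravel(), ys.ravel(), dem.ravel()])
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            v00 = i * w + j                       # top-left vertex of cell
            v01, v10, v11 = v00 + 1, v00 + w, v00 + w + 1
            tris.append((v00, v01, v11))          # upper triangle
            tris.append((v00, v11, v10))          # lower triangle
    return vertices, np.array(tris)
```

A production TIN builder would decimate flat regions rather than triangulate every cell, but the ingest interface is the same: vertices plus triangle indices.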
Irma 5.0 multisensor signature prediction model
Michael R. Wellfare, Douglas A. Vechinski, John S. Watson, et al.
The Irma synthetic signature model was one of the first high-resolution infrared (IR) target and background signature models developed for tactical weapons applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated, including the addition of a laser channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. The latest release of Irma, version 4.1, incorporated a number of upgrades to both the physical models and the software. Since that time, several upgrades to the model have been accomplished, including the inclusion of circular polarization, hybrid LADAR signature blending, and an RF air-to-air channel. Work is still ongoing on the development of a reconfigurable sensor model, a Scannerless Range Imaging (SRI) sensor modeling capability, a PC version, and an enhanced user interface. These capabilities will be integrated into the next release, Irma 5.0, scheduled for completion in FY00. The purpose of this paper is to demonstrate the progress of the Irma 5.0 development effort. Irma is being developed to facilitate multi-sensor research and development. It is currently being used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry.
Remote Sensing and Scene Dynamics
icon_mobile_dropdown
Radiometric spectral and band rendering of targets using anisotropic BRDFs and measured backgrounds
John W. Hilgers, Jeffrey A. Hoffman, William R. Reynolds, et al.
Achieving ultra-high-fidelity signature modeling of targets requires a significant level of complexity in all of the components of the rendering process. Specifically, the reflectance of the surface must be described using the bidirectional reflectance distribution function (BRDF), and the spatial representation of the background must be high fidelity. A methodology and corresponding model for spectral and band rendering of targets using both isotropic and anisotropic BRDFs is presented. In addition, a set of tools is described for generating theoretical anisotropic BRDFs and for reducing the data required to describe an anisotropic BRDF by five orders of magnitude. The methodology is a hybrid, using a spectrally measured panorama of the background mapped to a large hemisphere. Both radiosity and ray-tracing approaches are incorporated simultaneously for a robust solution. In the thermal domain, spectral emission is also included in the solution. Rendering examples using several BRDFs are presented.
Scene simulation for camouflage assessment
Alexander W. Houlbrook, Marilyn A. Gilmore, Ian R. Moorhead, et al.
Synthetic imagery is now used in a variety of military applications. In our application, we are using synthetic imagery to study the effectiveness of different camouflage techniques. The requirement is to display high-fidelity imagery of target vehicles against different backgrounds in different wavebands. For a complete assessment of camouflage, the system should be able to account for the effects of target motion, interactions between the target and its environment, and hot sources such as engines. CAMEO-SIM has been developed to meet these requirements. It can generate physically accurate radiance images in any EO waveband between 0.4 and 14 microns; sensor effects are added as a post-process. The system is capable of modelling highly cluttered terrain scenes and delivers radiance values at each pixel. Recent extensions to CAMEO-SIM include true-colour visible-band imagery and simple multispectral image display for simulation of hyperspectral imagery. Visible-band images are displayed on a calibrated monitor for assessment experiments using observers; radiometric data are used by other models. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken, and more complex validation tests using observer trials are planned. This paper describes the current version of CAMEO-SIM and how the images it produces are used for camouflage assessment. The verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.
Assessment of synthetic image fidelity
Kevin D. Mitchell, Ian R. Moorhead, Marilyn A. Gilmore, et al.
Computer-generated imagery is increasingly used for a wide variety of purposes, ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery depends on the anticipated use; for example, when used for camouflage design it must be physically correct both spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor, and the required frame rate. Rendering of natural outdoor scenes is particularly demanding because of the statistical variation in materials and illumination, atmospheric effects, and the complex geometric structures of objects such as trees. The accuracy of simulated imagery has tended to be assessed subjectively in the past. First- and second-order statistics do not capture many of the essential characteristics of natural scenes, and direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery, and of comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural-network methods such as independent component analysis (ICA), higher-order statistics, and models of human contrast perception. This paper presents an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.
Temporal measurements and scene projection testing of NAWC's fiber array projector using AEDC's laser-based Direct Write Scene Generator
Heard S. Lowry III, Lanny L. Holt, Robert Z. Dalbey, et al.
The operation of the Direct Write Scene Generator (DWSG) at the Arnold Engineering Development Center (AEDC) to drive a fiber array projection system is reported. The fiber array absorbs the input radiation from the laser-based system and produces broadband infrared output through blackbody cavities fabricated on the ends of the optical fibers. A test program was accomplished to quantify the performance of the fiber array with respect to input laser power and optical pulse width. Static and dynamic scenes were also projected with the device and recorded with an IR camera system. This paper presents the results of this work.
Utilization of DIRSIG in support of real-time infrared scene generation
Jeffrey S. Sanders, Scott D. Brown
Real-time infrared scene generation for hardware-in-the-loop testing has traditionally been a difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Image Generation (DIRSIG) program. However, executing DIRSIG in real time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyperspectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based submodels, which work in conjunction to produce radiance-field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. The radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) that may be present in any target, background, or solar path. This detailed environmental modeling greatly increases the number of rendered features and, hence, the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
System Performance Issues
icon_mobile_dropdown
Correct weighting of atmospheric transmittance and target temperature applied to IR airborne reconnaissance systems
Yair Z. Lauber, David Braun
The development of IR airborne reconnaissance systems at ELOP involves numerous analyses and optimizations. It has been found that contradictions can arise: results based on SNR calculations for a particular system were better in one spectral band, while the overall performance prediction (MRT and GRD) favored a different spectral band. In many calculations, it is common practice to convert a detailed and accurate function into a single averaged parameter to simplify the computation. The accuracy and reliability of the prediction, no matter which model is in use, depends on correct averaging. In IR imaging system analysis, weighting according to Planck's equation is appropriate, but it is sometimes omitted for the sake of simplicity. This paper shows how ignoring this weighting produces misleading results, both in performance prediction and in design decisions such as the choice of spectral band. Examples of the differences between the approaches will be shown.
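The weighting issue can be illustrated with a toy calculation. The sketch below (with a made-up in-band transmittance spectrum and an assumed 300 K target, not ELOP's actual data) compares a flat band average of atmospheric transmittance with a Planck-weighted average:

```python
import numpy as np

# Planck spectral radiance (unnormalized; only relative weights matter here).
# lam_um is wavelength in micrometers, T is temperature in kelvin.
def planck(lam_um, T):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_um * 1e-6
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# Made-up transmittance for a 3-5 um band with a CO2-like notch at 4.3 um.
lam = np.linspace(3.0, 5.0, 401)
tau = 0.8 - 0.5 * np.exp(-((lam - 4.3) ** 2) / 0.02)

w = planck(lam, 300.0)                    # weight by a 300 K blackbody
tau_simple = tau.mean()                   # flat average
tau_weighted = (tau * w).sum() / w.sum()  # Planck-weighted average
print(round(tau_simple, 3), round(tau_weighted, 3))
```

With this spectrum the weighted value comes out lower than the flat average, because the absorption notch sits where the 300 K blackbody curve carries more weight than the band mean; a difference of this size is enough to flip a band-selection decision made at the margin.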
Performance analyses for multispectral imaging systems
David Braun, Vladimir Alperovich, Michael J. Berger
Multispectral imaging systems are required for global monitoring of land and ocean. Theoretical analyses were performed in order to design a new multispectral spaceborne system developed at Elop and to optimize its physical parameters. The system consists of twelve narrow spectral bands in the visible spectrum. Each spectral band was selected according to the information required for agriculture and water monitoring. NEΔρ is the principal driver for system design: it refers to the change in target spectral reflectance that produces a signal in the sensor equal to the noise level of that sensor. This paper describes the NEΔρ sensitivity to different kinds of scenarios such as vegetation, water, and soils. Sensitivity to spectral bands in the 390-965 nm spectrum, sun elevation angles, and different atmospheric conditions is also presented. The system performance calculations are based on a new simulation tool developed in-house and on the Modtran code (AFRL, USA) for radiance calculations. Along with NEΔρ, other performance parameters are presented, such as signal-to-noise ratio and NEΔL. From the analyses presented in this paper, it can be shown that the design of a multispectral imager has to take into account both scenario and physical parameters. The performance of the multispectral imager is strongly dependent on the scenario and the atmospheric conditions during photography.
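As a rough illustration of the NEΔρ concept (our own simplified notation, not Elop's simulation tool), the noise-equivalent reflectance difference can be estimated as the sensor noise divided by the signal generated per unit of target reflectance:

```python
# Toy sketch of NE-delta-rho: the reflectance change whose signal change
# equals the sensor noise. Numbers below are illustrative, not measured.
def ne_delta_rho(signal_per_unit_rho, noise_electrons):
    """signal_per_unit_rho: detected electrons per unit target reflectance."""
    return noise_electrons / signal_per_unit_rho

# Example: 50,000 electrons collected at rho = 1.0, 70 electrons RMS noise.
print(ne_delta_rho(50_000.0, 70.0))   # 0.0014
```

In a full analysis the signal term itself depends on sun elevation, atmospheric transmittance, and scene reflectance, which is why NEΔρ varies so strongly with scenario.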
Calibration and Validation of Imaging Systems
icon_mobile_dropdown
Hyperspectral simulation of chemical weapon dispersal patterns using DIRSIG
Peter S. Arnold, Scott D. Brown, John R. Schott
The advent of fieldable thermal infrared hyperspectral imaging spectrometers has made it possible to design and construct new instruments for better detection of battlefield hazards such as chemical weapon clouds. Spectroscopic measurements of these clouds can be used not only for the detection and identification of specific chemical agents but also, potentially, to quantify the lethality of the cloud. The simulation of chemical weapon dispersal patterns in a synthetic imaging environment offers significant benefits to sensor designers. Such an environment allows designers to easily develop trade spaces and to test detection and quantification algorithms without the need for expensive and dangerous field releases. This paper discusses the implementation of a generic gas dispersion model that has been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The gas cloud model utilizes a 3D Gaussian distribution and first-order dynamics (drift and dispersion) to drive the macro-scale cloud development and movement. The model also attempts to account for turbulence by incorporating fractional Brownian motion techniques to reproduce the micro-scale variances within the cloud. The cloud path-length concentrations are then processed by the DIRSIG radiometry submodel to compute the emission and transmission of the cloud body on a per-pixel basis. Example hyperspectral image cubes containing common agents and release amounts will be presented. Time-lapse sequences will also be presented to demonstrate the evolution of the cloud over time.
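A minimal sketch of the macro-scale cloud model described above might look as follows. The parameterization (constant wind drift, linear dispersion growth) is our own illustrative choice and omits the fractional-Brownian-motion turbulence term:

```python
import numpy as np

# Toy 3D Gaussian cloud with first-order dynamics: the center drifts with
# the wind and the spread grows linearly in time.
def cloud_concentration(xyz, t, q=1.0, wind=(2.0, 0.0, 0.0),
                        x0=(0.0, 0.0, 10.0), growth=0.5):
    """Concentration at points xyz (N, 3) at time t after release."""
    center = np.asarray(x0) + np.asarray(wind) * t      # drift
    sigma = 1.0 + growth * t                            # dispersion
    r2 = np.sum((xyz - center) ** 2, axis=1)
    norm = q / ((2 * np.pi) ** 1.5 * sigma**3)
    return norm * np.exp(-r2 / (2 * sigma**2))

# Path-integrated concentration along a vertical line of sight: the
# quantity a radiometry submodel would convert to per-pixel
# transmission and emission.
z = np.linspace(0.0, 30.0, 301)
los = np.column_stack([np.full_like(z, 20.0), np.zeros_like(z), z])
cpl = np.sum(cloud_concentration(los, t=10.0)) * (z[1] - z[0])
print(cpl > 0.0)
```

At t = 10 s the cloud center has drifted to x = 20 m, so this line of sight passes through the densest part of the cloud.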
Effects of multiple scattering and thermal emission on target-background signatures sensed through obscuring atmospheres
Robert A. Sutherland, Jill C. Thompson, James D. Klett
We report on the application of a recently developed method for producing exact solutions of the thermal version of the radiative transfer equation.1 The method is demonstrated to be accurate to within five significant figures when compared with the one-dimensional plane-layer solutions published by van de Hulst,2 and it has the added capability of treating discrete, localized aerosol clouds of spherical and cylindrical symmetry. The method, described in detail in a companion paper,1 is only briefly summarized here; our main purpose is to demonstrate its utility for calculating emissivity functions of finite aerosol clouds of arbitrary optical thickness and albedo, such as are most likely to occur on the modern cluttered battlefield. The emissivity functions are then used to determine apparent temperatures, including the effects of both internal thermal emission and in-scatter from the ambient surroundings. We apply the results to four generic scenarios, covering the mid and far IR and a hypothetical full-spectrum band. In all cases, calculations show that errors on the order of several degrees in the sensed temperature can occur if cloud emissivity is not accounted for, with errors being most pronounced at higher values of optical depth and albedo. We also demonstrate that significant discrepancies can occur when comparing results from different spectral bands, especially for the mid IR, which consistently shows higher apparent temperatures than the other bands, including the full-spectrum case. Results of the emissivity calculations show that in almost no case can one justify the simple Beer's law model that essentially ignores emissive/scattering effects; however, there is reason for optimism in the use of other simplifying first- and higher-order approximations used in some contemporary models. The present version of the model treats only Gaussian aerosol distributions and isotropic scattering, although neither assumption represents an essential restriction on the method.
Hyperspectral Sensing, Analysis, and Applications
icon_mobile_dropdown
From hyperspectral imaging to dedicated sensors
Hyperspectral imaging is a technique that obtains a two-dimensional image of a scene while recording a spectrum for each pixel. Hyperspectral imaging systems can be very powerful at extracting information by using the spectral information in addition to the more conventional information extraction algorithms based on the spatial information within an image. However, it is very unlikely that a hyperspectral imager will be used as a sensor for day-to-day operations: hyperspectral imagers have the disadvantage of being rather complex and of generating huge amounts of data. In this paper we argue that hyperspectral imagers are most powerful as research instruments and that they can be used to develop dedicated sensors for a particular application. Such a dedicated sensor could be optimized by selecting the most appropriate wavelength bands and making these bands as broad, or as narrow, as needed in order to detect, classify, or identify targets. The number of bands needed for such a dedicated sensor may depend on the accepted false alarm rate of the system. In this paper we present some example spectra of materials and of atmospheric transmission, and we discuss how a dedicated sensor can be designed for a specific application.
ScanSpec: an imaging FTIR spectrometer
The demand for hyperspectral imagers for research has increased in order to match the performance of new sensors for military applications. These sensors work in several spectral bands, and targets and backgrounds need to be characterized both spatially and spectrally to enable efficient signature analysis. Another task for a hyperspectral research imager is to acquire hyperspectral data for studying new hyperspectral signal processing techniques to detect, classify, and identify targets. This paper describes how a hyperspectral IR imager was developed based on an FTIR spectrometer at the Defence Research Establishment (FOA) in Linkoping, Sweden. The system, called ScanSpec, consists of a fast FTIR spectrometer from Bomem (MR254), an image-scanning mirror device with controlling electronics, and software for data collection and image forming. The spectrometer itself has not been modified. The paper also contains a performance evaluation with NESR, NEDT, and MRTD analysis. Finally, some examples of hyperspectral results from field trials are presented: maritime background and remote gas detection.
Fratricide-preventing friend identification tag based on photonic band structure coding
Danny Eliyahu, Lev S. Sadovnik, Vladimir A. Manasson
A new friend-foe identification tag based on a photonic band structure (PBS) is presented. The tag utilizes a frequency-coded radar signal return: targets carrying the passive tag respond selectively to slightly different frequencies generated by the interrogating MMW radar. It is possible to use in- and out-of-band-gap frequencies or defect modes of the PBS in order to obtain frequency-dependent reflection of the radar waves. The tag can be made in the form of an attachable patch, such as a plate or corner reflector, to be worn by an individual Marine or integrated into platform camouflage. Ultimately, it can be incorporated as a smart skin on a ground or airborne vehicle. The proposed tag takes full advantage of existing sensors for interrogation (minimal changes required); it is lightweight and small; it operates in degraded environments; it has no impact on platform vulnerability; and it has low susceptibility to spoofing and mimicking (code of the day) as well as to active jamming. We demonstrated the operation of the tag using a multilayer dielectric (Duroid) with a periodic metal structure (metal strips, in this case) on top of each layer. The experimental results are consistent with numerical simulation. The device can be combined with temporal coding to increase target detection and identification resolution.
Soft computing and hyperspectral video for background extraction
Tomasz P. Jannson, Paul I. Shnitser, Sergey Sandomirsky, et al.
This paper presents experimental results of hyperspectral image compression by means of soft computing. Compression and transmission of hyperspectral data require intensive computation and sophisticated processing that have been incompatible with on-board real-time operation. Soft computing with intelligent processing optimizes the compression parameters of MPEG-1, tuning them to the specific video content to deliver the highest hyperspectral video compression quality. This soft computing approach is compared with compression based on the wavelet transform.
Target-Hiding Technologies
icon_mobile_dropdown
Trial SNAPSHOT: measurements for terrain background characterization
The spatial and spectral characteristics of targets and backgrounds must be known and understood for a wide variety of reasons, such as synthetic scene simulation and validation, target description for modelling, in-service target material characterisation, and background variability assessment. Without this information it is impossible to design effective camouflage systems and to maximise the capabilities of new sensors. Laboratory measurements of background materials are insufficient to provide the data required. A series of trials is being undertaken in the UK to quantify both diurnal and seasonal changes of a terrain background, as well as the statistical variability within a scene. These trials are part of a collaborative effort between the Defence Evaluation and Research Agency (UK), the Defence Clothing and Textile Agency (UK), and TACOM (USA). Data are being gathered at a single site consisting primarily of south-facing mixed coniferous and deciduous woodland, but also containing uncultivated grassland and tracks. Ideally, each point in the scene would be characterized at all relevant wavelengths, but this is unrealistic; in addition, a number of important environmental variables are required. The goal of the measurement programme is to acquire data across the spectrum from 0.4 to 14 microns. Sensors used include visible-band imaging spectroradiometers, telespectroradiometers (visual, NIR, SWIR, and LWIR), calibrated colour cameras, broadband SWIR and LWIR imagers, and contact reflectance measurement equipment. Targets consist of painted panels with known material properties and a wheeled vehicle, in some cases covered with camouflage netting. Measurements have been made of the background with and without the man-made objects present. This paper will review the results to date and present an analysis of the spectral characteristics of different surfaces. In addition, some consideration will be given to the implications of the data obtained for camouflage design.
Development and application of diurnal thermal modeling for camouflage, concealment, and deception
Mark L. B. Rodgers
The art of camouflage is to make a military asset appear to be part of the natural environment: its background. In order to predict the likely performance of countermeasures in attaining this goal, it is necessary to model the signatures of targets, backgrounds, and the effect of countermeasures. A library of diurnal thermal models has been constructed covering a range of backgrounds, from vegetated and non-vegetated surfaces to snow cover. These models, originally developed for Western Europe, have been validated successfully for theatres of operation from the arctic to the desert. This paper will show the basis for and development of physically based models for the diurnal thermal behavior both of these backgrounds and of the major passive countermeasures: camouflage nets and continuous textile materials. The countermeasures pose significant challenges for the thermal modeler, with their low but non-zero thermal inertia and the extent to which they influence local aerodynamic behavior. These challenges have been met, and the necessary extensive validation has shown the ability of the models to predict successfully the behavior of in-service countermeasures.
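The kind of physics such diurnal models rest on can be sketched with a single-node energy balance, far simpler than the validated library described above (all parameter values below are illustrative assumptions): a surface with areal heat capacity C integrates absorbed solar loading, radiative loss, and convective exchange through the day-night cycle.

```python
import numpy as np

# Toy one-node surface energy balance integrated with forward Euler.
# C is areal heat capacity (J m^-2 K^-1), alpha solar absorptance,
# h convective coefficient (W m^-2 K^-1), eps thermal emissivity.
def diurnal_temperature(C=2.0e5, alpha=0.7, h=10.0, T_air=288.0,
                        eps=0.95, sigma=5.67e-8, dt=60.0, days=2):
    t = np.arange(0.0, days * 86400.0, dt)
    solar = 800.0 * np.maximum(0.0, np.sin(2 * np.pi * t / 86400.0))  # W/m^2
    T = np.empty_like(t)
    T[0] = T_air
    for i in range(1, len(t)):
        flux = (alpha * solar[i - 1]              # absorbed solar
                - eps * sigma * T[i - 1] ** 4     # thermal radiation loss
                + h * (T_air - T[i - 1]))         # convective exchange
        T[i] = T[i - 1] + dt * flux / C
    return t, T

t, T = diurnal_temperature()
print(round(T.max() - T.min(), 1), "K diurnal swing")
```

A low-thermal-inertia countermeasure corresponds to a much smaller C, which makes the surface track the forcing almost instantaneously; this is one reason nets and textiles are singled out as challenging in the abstract above.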
Fuzzy logic approach for the quantitative assessment of camouflage effectiveness in the thermal infrared domain
A key point for good camouflage in the thermal infrared domain lies in the ability of the camouflage system to adapt to the thermal emission behavior of the surrounding background. In order to obtain reliable assessments of camouflage effectiveness, evaluation has to take place under various environmental conditions; the combination of the different results leads to an assessment measure with the required reliability. The objective quantification of individual camouflage effectiveness and the subsequent combination of results are very difficult for human operators to achieve. Therefore an Infrared Camouflage Effectiveness Assessment Tool (ICEAT) has been developed, which needs only minor human interaction and supports the automated combination of the results from various test scenes. In a first step, hot spots of the object and the background are detected. In a second phase, various features are calculated, which are combined into a single assessment measure in the third phase using fuzzy logic. The fuzzy logic approach has the advantage that the ICEAT can be customized simply by modifying the membership functions used.
Robust measure for camouflage effectiveness in the visual domain
A human-in-the-loop, computer-based camouflage assessment approach was presented at the AeroSense 1998 conference.3 The same image sets were used for human photosimulation as well as for the computer assessment method. The human photosimulation results suggested four camouflage classes, which were used to develop and verify the separability measure. Analyzing camouflage effectiveness using separability measures induces a very complex feature space; the best results were obtained using the C4.5 classifier as a separability measure. The size of the objects presented during the photosimulation sessions and the tactical knowledge of the observers had a significant influence on the detection/recognition performance of the human observers. The most important advantage of our method is that it makes camouflage assessment more transparent and deterministic. Results of a selected experiment during a field test are shown in this paper.
Poster Session
icon_mobile_dropdown
Problems of precise air-spatial monitoring
Valeri V. Gladun, Yuri A. Pirogov, Evgeni N. Terentiev, et al.
The designers of modern vision devices prefer to choose a scanning step for the receiving-system antenna somewhat smaller than the main lobe of the point spread function (PSF); such systems are said to be well constructed. However, there remains the problem of further increasing the resolution of such a well-constructed receiving system, a problem naturally connected with improvement of the mathematical models. This report is devoted to the development and application of a local-linear method for additional resolution enhancement of such receiving systems in radio vision and optics.
Experimental study of laser bistatic scattering from random deeply rough surfaces and backscattering enhancement
Zhensen Wu, Kun Song, Liyan Qi
The laser bistatic scattering from several deeply rough plane samples is measured using an automated scattering measurement system. We observe backscattering enhancement, which is confined to a narrow cone around the antispecular direction, and discuss the influence of roughness and dielectric properties on it.
Altering the SNR by photodetector noise manipulation
Irradiation of a photodetector by very short pulses is presented as the primary, and perhaps the only, remote technology for altering the SNR. Such noise manipulation will decrease the SNR value for certain types of common MIR and LWIR photodetectors. The effect is based on the difference between the carrier lifetime and the altering pulse dwell time. When the pulse width is much less than the photodetector rise time, e.g., 100 fs vs. 10 ns, most of the photons cannot generate an electrical charge, but only heat. We describe thermal, radiometric, and electronic circuit models developed to simulate the transfer of short pulses of time-dependent radiant and electrical signals through a photodetector during the alteration. The models provide an analysis tool for evaluating the time-dependent radiometric sensitivity for remote gain control of IR photodetectors.
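A toy calculation conveys the scale of the pulse-width mismatch. Here the detector is idealized as a single-pole (first-order) response with an assumed 10 ns time constant, so the peak signal developed by a pulse of width t_p scales as 1 − exp(−t_p/τ); this is our own simplification and does not represent the thermal, radiometric, and circuit models described in the paper.

```python
import math

# Peak response of an idealized first-order detector to a rectangular
# pulse of width pulse_width_s, relative to the steady-state response.
def peak_response_fraction(pulse_width_s, tau_s):
    return 1.0 - math.exp(-pulse_width_s / tau_s)

tau = 10e-9                       # assumed 10 ns detector time constant
for tp in (100e-15, 1e-9, 100e-9):
    print(f"{tp:.0e} s pulse -> {peak_response_fraction(tp, tau):.2e}")
```

In this idealization a 100 fs pulse develops roughly five orders of magnitude less signal than a pulse long compared with the time constant, consistent with the qualitative claim that the pulse energy ends up mostly as heat.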
Development of a UV stimulator for installed system testing of aircraft missile warning systems
William G. Robinson
Missile warning systems (MWS) present unique problems for hardware-in-the-loop (HITL) testing compared to other sensors found on modern military aircraft and ground vehicles. End-to-end testing of a UV MWS like the AN/AAR-47 and other non-imaging MWS requires a stimulator capable of a large intensity dynamic range and moderate temporal response, with the capability to provide simultaneous optical signatures to all four of the system's sensors. These requirements dictate a different type of stimulator than is normally used with more conventional UV, visible, and IR systems employing imaging sensors and relatively narrow fields of view (FOV), on the order of 10-30 degrees. This paper describes both the requirements for a non-imaging UV MWS stimulator and the design used to satisfy the requirements for hardware and software testing of the AN/AAR-47 and other non-imaging UV MWS equipment.