Proceedings Volume 6543

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVIII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 30 April 2007
Contents: 12 Sessions, 43 Papers, 0 Presentations
Conference: Defense and Security Symposium 2007
Volume Number: 6543

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 6543
  • Target, Backgrounds, and Atmospherics I
  • Target, Backgrounds, and Atmospherics II
  • Modeling I
  • Modeling II
  • Workshop on V50, E-zoom, and Boost Modeling Approaches
  • Modeling III
  • Modeling IV
  • Modeling V
  • Systems and Testing I
  • Systems and Testing II
  • Poster Session
Front Matter: Volume 6543
This PDF file contains the front matter associated with SPIE Proceedings Volume 6543, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Target, Backgrounds, and Atmospherics I
Simulation of active and passive infrared images using the SE-WORKBENCH
Jean Latger, Thierry Cathala, Nicolas Douchin, et al.
The SE-WORKBENCH workshop, also called CHORALE ("simulated Optronic Acoustic Radar battlefield"), is used by the French MoD/DGA to perform multi-sensor simulations by creating realistic multispectral virtual 3D scenes and then generating the signal received by a sensor. Taking advantage of developments made in the context of radar simulation, CHORALE is currently being enhanced with new functionality in order to tackle the "active" problem, involving a new generation of infrared sensors such as laser radars. This article presents the challenges of simultaneously simulating passive IR imagery of a full terrain and active imagery, especially of targets. We focus on the duality and differences concerning, in particular, monochromatic/coherent versus incoherent waves, BRDF modeling that takes surface roughness into account, polarization effects, and Doppler effects. The SE-WORKBENCH implements the "Photon Map" method, which makes it possible to treat the "global illumination" paradigm, combining multiple-reflection ray tracing with Monte Carlo ray scattering. This approach is assessed in the context of coherent illumination by laser. The constraint of accurately modelling the wavelength dependence of atmospheric propagation is also studied. Special requirements for advanced systems such as flash laser systems or heterodyne infrared detectors are analyzed. Finally, modeling issues concerning the degradations introduced by atmospheric turbulence are discussed.
Modelling and analysis of ship surface BRDF
Modelling the bi-directional reflectance distribution function (BRDF) of a ship surface is an integral part of any infrared ship signature model. The ShipIR surface BRDF model is based on Sandford and Robertson (1985) and makes a discrete assumption for lobe-width and solar-glint. The ShipIR sea surface reflectance model uses a roughness model based on the early work of Cox and Munk (1954) and refined using the integral solution proposed by Mermelstein et al. (1994). A similar approach was used by Ward (1992) to model the visual properties of a real surface, considering isotropic and anisotropic surface roughness. This paper compares the two roughness models and shows how a slope probability density function (PDF) version of the bi-directional reflectance is better suited for modelling micro-faceted surface reflections. The simulation of an actual ship IR glint measurement demonstrates the effect of BRDF lobes in the paint property and provides a qualitative assessment of the ShipIR model.
Modelling reflections from exact surfaces in CameoSim
A. A. Mitchell, J. M. Brewster, A. W. Haynes, et al.
Reflections such as glint are only seen over a small angular range around the Bi-directional Reflectance Distribution Function (BRDF) specular lobe. When modelling a target for input to detection codes, it is essential that these types of reflection are modelled accurately, as they can have a significant impact on detection range. This paper investigates the use of CameoSim to model the glint effects from 3-D curved shapes in the 3-5 µm band. Methods investigated to increase accuracy include increasing the number of facets, using vertex averaging, and using true geometry to model IR glint effects. Conclusions are drawn as to the way forward for high fidelity modelling.
CUBI: a test body for thermal object model validation
CUBI is a rather simple geometrical object used in outdoor experiments with the objective of gathering data that can be utilized in testing and validating object models in the thermal infrared. Since its introduction several years ago, CUBI has been gaining interest from an increasing number of research laboratories engaged in thermal infrared modelling. As a member of the worldwide CUBI Forum, FGAN-FOM installed a CUBI about one year ago. Since then, CUBI surface temperatures have been recorded continuously, together with a set of associated environmental data. The data collected are utilized to explore the capabilities of the FOM thermal object code F-TOM. For this purpose, the model was modified to represent CUBI in model space. Likewise, the well-known IR signature prediction model RadTherm/IR was applied to the CUBI problem. In this paper we present CUBI and the philosophy behind it, our comprehensive CUBI data collection effort, and the development of the two different thermal models. Experimental data and model predictions are shown and compared, and the strengths and weaknesses of the models are discussed.
Target, Backgrounds, and Atmospherics II
Computer-aided camouflage assessment in real time
Current army operations demand continuous improvement of camouflage and concealment. This requires systematic, objective assessment methods, which is a very time-consuming task with present software systems; the interactive composition of ground truth is also cumbersome. We present a system for camouflage assessment using image sequences in real time. The image sequences may stem from any imaging sensor, e.g. visual-optical (VIS), infrared (IR), or SAR. Flexible navigation in image sequences, semi-automatic generation of ground truth, and several functional enhancements form the base of the system, while the central component is the camouflage assessment function with its generic interface for individually defined feature extractors. For semi-automatic annotation and ground-truth construction, the user defines areas of interest with polygons in a starting frame. The system then estimates the transformation parameters for successive images in real time and applies them to the previously defined polygons in order to warp the ground-truth polygons onto the new frames. Various classes of polygons (target 1…n, background area 1…m, etc.) can be defined and colorized. Defined ground-truth areas can be evaluated in real time by applying individually selected feature extractors, while the results are displayed graphically and as a chart. New measures can be integrated by the user and applied via the generic interface. The system is built as a generic integration platform offering plenty of extension potential in order to further enable or improve camouflage assessment methods. Owing to the generic interfaces, ATR and ATD methods for automatic and semi-automatic camouflage assessment can also be integrated.
A new instrument for measuring optical transmission in the atmosphere
Measuring the optical transmission of the atmosphere is an important task when testing the performance of electro-optical systems such as thermal imagers. Only by knowing atmospheric transmission precisely enough can the effects of the atmosphere on test results be eliminated. For this reason, a new instrument that measures optical transmission in the atmosphere has been constructed. The transmissometer consists of a transmitter/receiver unit, a reflector, and control software. The instrument measures atmospheric transmission at wavelengths of 1 µm and 8-12 µm by comparing the intensity of the beam propagating through the atmosphere with that of a reference beam inside the transmitter/receiver unit. Calibration is carried out with the aid of a visibility meter and a special calibration algorithm. An important design criterion was to create an instrument that could be used flexibly in field measurements. The transmissometer was tested comprehensively in the field in March and June 2006. It can measure extinction coefficients up to 3-12 km⁻¹, depending on the span between the transmitter/receiver unit and the reflector, with an accuracy of 10-20%. According to the test measurements, the transmissometer also fulfills the other requirement specifications.
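The two-beam comparison reduces to the Beer-Lambert law: with path transmission T given by the ratio of the propagated beam to the internal reference, the extinction coefficient is σ = -ln(T)/R. A minimal sketch, with illustrative function and variable names (not from the instrument's software):

```python
import math

def extinction_coefficient(i_path, i_ref, path_km):
    """Extinction coefficient (km^-1) from the Beer-Lambert law
    T = exp(-sigma * R), where T is the ratio of the beam propagated
    over the path to the internal reference beam."""
    transmission = i_path / i_ref
    return -math.log(transmission) / path_km

# Hypothetical reading: 10% measured transmission over a 0.5 km span
# gives sigma of about 4.6 km^-1, within the 3-12 km^-1 range quoted.
sigma = extinction_coefficient(0.10, 1.0, 0.5)
```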
Modeling I
An engineer's approach to system performance
A simplified approach to target acquisition is presented which combines the optics performance (often specified by the Airy disk size) with the detector size. The variable is Fλ/d where F is the focal ratio, λ is the wavelength, and d is the detector size. The simplified approach allows plotting range as a function of aperture diameter, focal length, wavelength, detector size, field-of-view, or noise. Assuming a 100% fill factor, no aliasing occurs when Fλ/d ⩾ 2. This suggests that the sampling theorem plays an important role in target detection. However, sampling artifacts are quite acceptable. Since real targets are aperiodic, relating the number of detectors to the sampling theorem should be avoided. Likewise, the Airy disk size can be related to the detector size (Fλ/d) but trying to decide the required number of samples across the Airy disk as a design criterion should be avoided.
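The Fλ/d variable and its alias-free condition are simple arithmetic; a sketch with hypothetical values (not from the paper):

```python
def f_lambda_over_d(f_number, wavelength_um, detector_um):
    """The design variable F*lambda/d: focal ratio times wavelength
    divided by detector size (wavelength and detector in the same units)."""
    return f_number * wavelength_um / detector_um

def alias_free(f_number, wavelength_um, detector_um):
    """With a 100% fill factor, no aliasing occurs when F*lambda/d >= 2:
    the diffraction cutoff 1/(F*lambda) then sits at or below the
    sampling Nyquist frequency 1/(2d)."""
    return f_lambda_over_d(f_number, wavelength_um, detector_um) >= 2

# Hypothetical LWIR example: F/4 optics at 10 um with 20 um detectors
# gives F*lambda/d = 2.0, exactly at the alias-free boundary.
ratio = f_lambda_over_d(4, 10, 20)
```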
Range performance benefit of contrast enhancement
Richard Vollmerhausen, Van Hodgkin
This paper discusses the range-performance benefit of high-frequency boost and local-area contrast enhancement (LACE). Boost corrects for diffraction blur and other imager blurs that degrade the high-frequency content of imagery. LACE reduces the dynamic range of scene content and results in better use of display gray scale. In some cases, the target acquisition range of state-of-the-art thermal imagers is doubled. However, the effectiveness of image processing depends on the signal-to-noise ratio and sampling properties of the imager. Range performance is predicted using the NVThermIP thermal model; model application and limitations are discussed.
NVThermIP vs TOD: matching the target acquisition range criteria
Currently, three major approaches exist to predict Target Acquisition (TA) performance with thermal imagers: i) the TOD laboratory method and model, ii) the NVThermIP model and iii) the MTDP lab method and TRM3 model. In this study, TOD measurements, TOD predictions and NVThermIP predictions are compared for a number of simulated sensors ranging from very well-sampled to highly under-sampled. A similar comparison study using a previous (2001) version of the NVTherm model showed huge differences in sensor performance predictions (Bijl, Hogervorst & Valeton; SPIE Proceedings Vol. 4719, 51-62; 2002). The most important result of the current study is that NVThermIP predictions are much closer to the TOD measurements and predictions than those of its predecessor, showing limited effect of under-sampling. Quantitatively, TA range predictions for well-sampled imagers are equivalent and NVThermIP predicts 25% longer ranges than the TOD model for under-sampled imagers with MP = 0.35·VP and β = 1.25, where VP are the criteria published with NVThermIP to predict TA range for a variety of target sets, MP are the corresponding TOD magnification factors, and β is the slope of the probability vs range function in the TOD target acquisition model. Which method yields the best predictions under which circumstances should be the subject of an empirical study using TA performance for real targets. It is therefore advised that all available TA validation data be presented in such a way that all models and methods can be compared to the data directly and unambiguously.
Cell balancing for vehicle identification perception experiments and correcting for cell imbalance in test results
Corrections are given for cell imbalance in the design and analysis of twelve (12)-target identification (ID) perception tests. Such tests are an important tool in the development of the Night Vision and Electronic Sensors Directorate (NVESD) observer performance model used in NVThermIP to compare electro-optical systems. It is shown that the partitions of the 12-target set previously used in perception experiments exhibit statistically significant cell imbalance. Results from perception testing are used to determine the relative difficulty of identifying different images in the set. A program is presented to partition the set into lists that are balanced according to the collected observer data. The relative difficulty of image subsets is shown to be related to the best-fit V50 values for the subsets. The results of past perception experiments are adjusted to account for cell imbalance using the subset V50 terms. Under the proper conditions, the adjusted results are shown to better follow the TTP model for observer performance.
Super-resolution reconstruction and local area processing
Super-resolution reconstruction (SRR) improves resolution by increasing the effective sampling frequency. Target acquisition range increases, but the amount of increase depends upon the relationship between the optical blur diameter and the detector size. Range improvement of up to 52% is possible. Modern systems digitize the scene into 12 or more bits, but the display typically presents only 8 bits. Gray-scale compression forces scene detail to fall into a single gray level and thereby "disappear." Local area processing (LAP) readjusts the gray scale so that scene detail becomes discernible. Without LAP, the target signature is small compared to the global scene dynamic range, and this results in poor range performance. With LAP, the target contrast is large compared to the local background. The combination of SRR and LAP significantly increases range performance.
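The local-area idea can be sketched as remapping each pixel against its local mean and spread, so that small target-to-background differences survive the 12-bit to 8-bit display compression. The window size and clipping limits below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_area_process(img, win=8, out_bits=8):
    """Local-area processing sketch: express each pixel as a local
    z-score (value relative to the local mean, scaled by the local
    spread), clip, and map onto the display range. The window size
    and the +/-3-sigma clip are illustrative choices."""
    img = img.astype(np.float64)
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    w = sliding_window_view(p, (win, win))[:img.shape[0], :img.shape[1]]
    mean = w.mean(axis=(-2, -1))
    std = w.std(axis=(-2, -1)) + 1e-6           # avoid divide-by-zero
    z = np.clip((img - mean) / std, -3.0, 3.0)  # local contrast in sigmas
    return ((z + 3.0) / 6.0 * (2**out_bits - 1)).astype(np.uint8)
```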
Modeling II
Direct view optics model for facial identification
Ronald G. Driggers, Steve Moyer, Keith Krapels, et al.
Direct view optics is a class of sensors to include the human eye and the human eye coupled to rifle scopes, spotter scopes, binoculars, and telescopes. The target acquisition model for direct view optics is based on the contrast threshold function of the eye with a modification for the optics modulation transfer function and the optical magnification. In this research, we extend the direct view model for the application of facial identification. The model is described and the experimental method for calibrating the task of human facial identification is discussed.
Effects of band-limited noise on human observer performance
Perception tests establish the effects of spatially band-limited noise and blur on human observer performance. Previously, Bijl showed that the contrast threshold of a target image with spatially band-limited noise is a function of noise spatial frequency. He used the method of adjustment to find the contrast thresholds for each noise frequency band. A noise band exists in which the target contrast threshold reaches a peak relative to the threshold for higher or lower noise frequencies. Bijl also showed that the peak of this noise band shifts as high-frequency information is removed from the target images. To further establish these results, we performed three forced-choice experiments: first, a Night Vision and Electronic Sensors Directorate (NVESD) twelve (12)-target infrared tracked-vehicle identification (ID) experiment; second, a bar-pattern resolving experiment; and third, a Triangle Orientation Discrimination (TOD) experiment. In all of the experiments, the test images were first spatially blurred, then spatially band-limited noise was added. The noise center spatial frequency was varied in half-octave increments over seven octaves. Observers were shown images of varying target-to-noise contrast, and a contrast threshold was calculated for each spatial noise band. Finally, we compared the Targeting Task Performance (TTP) human observer model predictions for performance in the presence of spatially band-limited noise with these experimental results.
Temporal/spatial tracking requirements for tracking humans
This paper details the development, experimentation, collected data and the results of research designed to gain an understanding of the temporal and spatial image collection guidelines for tracking humans. More specifically, a quantitative understanding of the relationship between human observer performance and the spatial and temporal resolution is sought. Performance is measured as a function of the number of video frames per second, imager spatial resolution and the ability of the observer to accurately determine the destination of a moving human target. The research is restricted to data and imagery collected from altitudes typical of modern low to mid altitude persistent surveillance platforms using a wide field of view. The ability of the human observer to perform an unaided track of a human target was determined by their completion of carefully designed perception experiments. In these experiments, the observers were presented with simulated imagery from Night Vision's EOSim urban terrain simulator. The details of the simulated targets and backgrounds, the design of the experiments and their associated results are included in this treatment.
An image sharpness metric for image processing applications using feedback
Eric P. Lam, Christopher A. Leddy, Stephen R. Nash
Some image processing applications require an image to meet a quality metric before processing it. If an image is so degraded that it is difficult or impossible to reconstruct, the input image may be discarded. When image degradations are not time-invariant, it is necessary to determine how sharp an image is. In this paper, we present a metric that measures relative sharpness with respect to a reference image frame. The reference frame may be a previous input image or even an output frame from the image processor. The sharpness metric is based on analyzing edges. The assumption of this approach is that input images are similar to each other in terms of observation angle and time. Although the input images are similar, they cannot be assumed identical, because they are collected at different time samples.
Sine wave contrast target for direct view optics field performance measurements
Keith Krapels, Paul Larson, Ronald G. Driggers, et al.
In this research, a sensor performance measurement technique is developed similar to Triangle Orientation Discrimination (TOD), but with sinusoids used instead of triangles. Also, instead of infrared systems, the technique is applied to the eye and direct view optics. This new technique is called Contrast Threshold Function Orientation Discrimination (CTFOD), and the result is a "system" contrast threshold function that can be used with Vollmerhausen's Target Task Performance (TTP) metric. The technique is simple and can be performed in the field using a target board; the results capture the eye, the optics transfer function and transmission, and any atmospheric turbulence effects that are present.
Third-generation FLIR simulation at NVESD
Third-generation FLIR sensors, comprising 2-D focal plane arrays with simultaneous LWIR/MWIR detection capability, are to be fielded in the near future and are expected to play an important role in future Army sensor applications. NVESD has an effort underway to produce a simulation package that will bring third-generation FLIR sensor performance to training and wargaming applications. This simulation product provides a wide variety of targets and backgrounds, both rural and urban, for different seasons, times of day, and atmospheric conditions, and is built on the existing NVESD LWIR simulation package named NV EOSim. A sensor effects package, which is part of the simulation, uses standard NVTherm sensor decks to accurately simulate the noise, diffraction, resolution, and other design features of individual sensors. The physics of the simulation and the key third-generation FLIR characteristics incorporated are discussed in detail.
Workshop on V50, E-zoom, and Boost Modeling Approaches
Guidance on methods and parameters for Army target acquisition models
Recently the U.S. Army Night Vision and Electronic Sensors Directorate released a revision to its target acquisition models. The Targeting Task Performance (TTP) metric represents a significant improvement in the U.S. Army's target acquisition modeling capabilities. The purpose of this paper is to describe the experiments and calculation methodologies behind generating the task difficulty parameter (V50) value used in the model to predict range performance. Included in this paper are experimental designs for recognition and identification tasks for various target sets. Based upon the results of these experiments, new V50 values are calculated to provide proper guidance for the most accurate performance predictions possible.
Modeling III
Electronic zoom functionality in under-sampled imaging systems
Stephen D. Burks, Joseph P. Reynolds, Jonathan Hixson, et al.
US Army thermal target acquisition models based on the Johnson metric do not accurately predict sensor performance with electronic zoom (E-zoom). For this reason, NVTherm2002 removed the limiting E-zoom Modulation Transfer Functions (MTF) to agree better with measured performance results. In certain scenarios, especially with under-sampled staring sensors, the model shows incorrect performance improvements with E-zoomed images. The current Army model NVThermIP, based upon the new targeting task performance (TTP) metric, more accurately models range performance in these cases. E-zoom provides system design flexibility when a system is limited to a single optical field-of-view and/or eye distance is constrained by ergonomic factors. This paper demonstrates that target acquisition range performance, modeled using the TTP metric, increases only up to an optimal magnification and decreases beyond it. A design "rule of thumb" is provided to determine this optimal magnification. NVThermIP modeled range performance is supported with E-zoom perception experiment results.
Finding a fusion metric that best reflects human observer preference
A perception test determined which of several image fusion metrics best correlates with relative observer preference. Many fusion techniques and fusion metrics have been proposed, but there is a need to relate them to a human observer's measure of image quality. LWIR and MWIR images were fused using techniques based on the Discrete Wavelet Transform (DWT), the Shift-Invariant DWT (SIDWT), Gabor filters, pixel averaging, and Principal Component Analysis (PCA). Two different sets of fused images were generated from urban scenes. The quality of the fused images was then measured using the mutual information metric (MINF), fusion quality index (FQI), edge-dependent fusion quality index (EDFQI), weighted fusion quality index (WFQI), and the mean-squared errors between the fused and source images (MS(F-L), MS(F-M)). A paired-comparison perception test determined how observers rated the relative quality of the fused images. The observers based their decisions on the noticeable presence or absence of information, blur, and distortion in the images. The observer preferences were then correlated with the fusion metric outputs to see which metric best represents observer preference. The results of the paired-comparison test show that the mutual information metric most consistently correlates well with the measured observer preferences.
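A mutual-information fusion score can be sketched from the joint gray-level histogram of a fused image and its source bands. This is a simplified stand-in for the MINF metric; the bin count and function names are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint gray-level
    histogram: sum of p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over the first image
    py = pxy.sum(axis=0, keepdims=True)   # marginal over the second image
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def minf_score(fused, lwir, mwir):
    """MINF-style fusion score: information the fused image shares with
    each source band, summed over the two bands."""
    return mutual_information(fused, lwir) + mutual_information(fused, mwir)
```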
Impact of path radiance on MWIR and LWIR imaging
Atmospheric radiance occurs in both the MWIR and LWIR primarily as a consequence of thermal emission by the gases and aerosols in the atmosphere. If this radiation originates between a scene and a thermal imaging sensor, it is called path radiance. In thermal IR imagery, path radiance reduces scene radiation contrast at the entrance pupil. For ground-based sensors, this effect is most significant in search systems with wide fields of view (WFOV) that image a large range depth of field. In WFOV search systems, the sensor display gain and level are typically adjusted to optimize the contrast of targets and backgrounds at the closer ranges. Without compensation in WFOV imagery, high path radiance can mask distant targets in the detection process. However, in narrow fields of view (NFOV), path radiance can have less of an impact, since targets and backgrounds will be at about the same range and thus have the same path radiance. As long as the NFOV radiation contrast exceeds the system noise, sensor display gain and level adjustments, or image processing if available, can be used to boost the contrast at the display. However, some imaging conditions are beyond compensation by display contrast adjustments or image processing. Using MODTRAN, this paper examines the potential impacts of path radiance, from the phenomenological point of view, on target-to-background contrast and signatures (ΔT) for dual-band thermal imaging systems.
A new optical flow estimation method in joint EO/IR video surveillance
Hong Man, Robert J. Holt, Jing Wang, et al.
Electro-Optical (EO) and Infra-Red (IR) sensors have been jointly deployed in many surveillance systems. In this work we study the special characteristics of optical flow in IR imagery, and introduce an optical flow estimation method using co-registered EO and IR image frames. The basic optical flow calculation is based on the combined local and global (CLG) method (Bruhn, Weickert and Schnörr, 2002), which seeks solutions that simultaneously satisfy a locally averaged brightness constancy constraint and a global flow smoothness constraint. While the CLG method can be directly applied to IR image frames, the estimated optical flow fields usually manifest a high level of random motion caused by thermal noise. Furthermore, IR sensors operating at different wavelengths, e.g. mid-wave infrared (MWIR) and long-wave infrared (LWIR), may yield inconsistent motions in optical flow estimation. Because both EO and IR sensors are available in many practical scenarios, we propose to estimate optical flow jointly from EO and IR image frames. This method is able to take advantage of the complementary information offered by the two imaging modalities. The joint optical flow calculation fuses the motion fields from the EO and IR images using a cross-regularization mechanism and a non-linear flow fusion model that aligns the estimated motions based on neighbor activities. Experiments performed on the OTCBVS dataset demonstrate that the proposed approach can effectively eliminate many unimportant motions and significantly reduce erroneous motions, such as those caused by sensor noise.
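The CLG data term is a locally averaged version of the brightness-constancy constraint; its global-smoothness half is the classic Horn-Schunck iteration, which can be sketched as follows. The parameters and the pure Horn-Schunck simplification are our own illustration, not the paper's joint EO/IR method:

```python
import numpy as np

def _avg4(a):
    """Four-neighbour average with edge replication."""
    p = np.pad(a, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck(f1, f2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow between frames f1 and f2.
    The CLG method replaces the pointwise data term below with a
    locally averaged (Lucas-Kanade-style) one; this sketch keeps
    only the global smoothness formulation."""
    f1 = f1.astype(float); f2 = f2.astype(float)
    fx = np.gradient(f1, axis=1)          # spatial derivatives
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1                          # temporal derivative
    u = np.zeros_like(f1); v = np.zeros_like(f1)
    for _ in range(n_iter):
        ub, vb = _avg4(u), _avg4(v)       # neighbourhood averages
        t = (fx * ub + fy * vb + ft) / (alpha**2 + fx**2 + fy**2)
        u = ub - fx * t
        v = vb - fy * t
    return u, v
```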
Modeling IV
Correlation between the number of spatial, thermal, and total cues in LWIR imagery and probability of identification
A human perception test has been conducted to determine the correlation between observer response and the number of spatial cues without thermal attributes, thermal cues, and total cues in an image. The experiment used the NVESD 12 target LWIR tracked vehicle image set. Various levels of Gaussian blur were applied to twelve aspects of the twelve targets in order to reduce both the number of resolvable cycles and the number of observable thermal and spatial cues. The author then counted every observable thermal and spatial cue in each of the processed images. A thermal cue was defined as either a hot spot or a cool spot. Typically, hot spots are produced by a vehicle's engine or exhaust. Cool spots are features such as air intakes and trim vanes. Spatial cues included characteristics such as barrel length, turret size, and number of wheels. The results of a 12 alternative forced choice identification perception test were analyzed to determine the correlation coefficients between probability of identification and the number of thermal, spatial, and total cues. The results show that the number of spatial cues in an image was strongly correlated with observer performance.
Silhouette and background information analysis
Michelle N. Moore, John D. O'Connor
Thermal target identification testing was conducted to explore some of the potential differences in the perception of ID cues in conventional gray-shade images and those same target images when rendered as various types of silhouette, including one with no background. Such experiments may give insight into the human target identification and discrimination tasks relevant to emerging laser-based imaging technology. For example, laser range gating is known to produce silhouetted gray-shade targets with a single gray-shade background like the ones included here. Well-understood thermal imagery of a well-understood target set was chosen as a convenient and relevant baseline image set for initial exploration. Experiments were designed to compare human vehicle ID performance when viewing full gray-shade imagery versus various combinations of silhouette images. Experiments were performed using 1) original gray-shade imagery, 2) silhouette (uniform target) with no background (i.e. a background consisting of a uniform gray shade), 3) silhouette (target) with the original gray-shade background, and 4) the original target with no background. An eight-target set was presented at three different aspects. Notional viewing ranges were simulated by applying two levels of Gaussian blur and two down-sampling rates to the complete experimental target set. Observer results indicate higher target identification scores for the "target with no background" and "silhouette with no background" imagery than for the original gray-shade imagery. However, ID results were lower for "silhouette with background" imagery than for the original imagery.
Quantitative analysis of infrared contrast enhancement algorithms
Dynamic range reduction and contrast enhancement are two image-processing methods that are required when developing thermal camera systems. The two methods must be performed in such a way that the high-dynamic-range imagery output by current sensors is compressed in a pleasing way for display on lower-dynamic-range monitors. This research presents a quantitative analysis of infrared contrast enhancement algorithms. Four algorithms were studied, three from the literature and one developed by the author: tail-less plateau equalization (TPE), adaptive plateau equalization (APE), the method according to Aare Mällo (MEAM), and infrared multi-scale retinex (IMSR). TPE and APE are histogram-based methods, requiring the calculation of the probability density of digital counts within an image. MEAM and IMSR are frequency-domain methods, which operate on input imagery that has been split into components containing differing spatial frequency content. After a rate-of-growth analysis and a psychophysical trial were performed, MEAM was found to be the best algorithm.
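Of the histogram-based family, plateau equalization can be sketched as ordinary histogram equalization with the histogram clipped at a plateau before the mapping CDF is built, so a large uniform background cannot monopolize the output gray levels. The plateau fraction below is an illustrative choice, not a value from the paper:

```python
import numpy as np

def plateau_equalization(img, plateau=0.02, in_bits=12, out_bits=8):
    """Plateau histogram equalization sketch for integer imagery:
    clip each histogram bin at a fraction of the total pixel count,
    build the CDF from the clipped histogram, and use it as a lookup
    table mapping the input range onto the display range."""
    hist = np.bincount(img.ravel(), minlength=2**in_bits).astype(float)
    hist = np.minimum(hist, plateau * img.size)        # plateau clip
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * (2**out_bits - 1)).astype(np.uint8)
    return lut[img]
```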
Active imaging system performance model for target acquisition
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
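The role of active illumination and atmospheric attenuation in a model of this kind can be illustrated with the standard laser range equation for an extended Lambertian target. This is a generic textbook relation, not the NVLRG implementation; all parameter names are assumptions.

```python
import numpy as np

def received_power(p_tx, target_reflectivity, range_m, aperture_area_m2,
                   atm_coeff_per_km, optics_transmission=1.0):
    """Monostatic laser-return power for a diffuse extended target:

        P_rx = P_tx * rho * A / (pi * R^2) * T_optics * exp(-2 * alpha * R)

    The exp(-2 alpha R) term is two-way Beer-Lambert atmospheric attenuation.
    """
    r_km = range_m / 1000.0
    return (p_tx * target_reflectivity
            * aperture_area_m2 / (np.pi * range_m**2)
            * optics_transmission
            * np.exp(-2.0 * atm_coeff_per_km * r_km))

# With no attenuation, doubling the range quarters the return (1/R^2).
p1 = received_power(1.0, 0.3, 1000.0, 0.01, 0.0)
p2 = received_power(1.0, 0.3, 2000.0, 0.01, 0.0)
```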
Modeling V
Modeling the blur associated with vibration and motion
Richard Vollmerhausen, Mel H. Friedman, Joe Reynolds, et al.
This paper discusses the Modulation Transfer Functions (MTF) associated with image motion. The paper describes MTF for line-of-sight vibration, electronic stabilization, and translation of the target within the field of view. A model for oculomotor system tracking is presented. The common procedure of treating vibration blur as Gaussian is reasonably accurate in most cases. However, the common practice of ignoring motion blur leads to substantial error when modeling search tasks.
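The two motion MTFs discussed above have standard closed forms: a Gaussian for random line-of-sight jitter and a sinc for uniform linear smear during the integration time. The sketch below is a generic illustration consistent with the common Gaussian treatment the paper mentions; units and function names are assumptions.

```python
import numpy as np

def jitter_mtf(xi, sigma):
    """Gaussian MTF for random line-of-sight jitter:
    MTF(xi) = exp(-2 pi^2 sigma^2 xi^2), with sigma the rms jitter
    amplitude (mrad) and xi spatial frequency (cycles/mrad)."""
    return np.exp(-2.0 * np.pi**2 * sigma**2 * xi**2)

def linear_motion_mtf(xi, smear):
    """Sinc MTF for uniform linear smear of extent `smear` during the
    integration time: MTF(xi) = sinc(smear * xi)."""
    return np.sinc(smear * xi)  # np.sinc(x) = sin(pi x)/(pi x)

xi = np.linspace(0.0, 2.0, 201)
m = jitter_mtf(xi, 0.1)  # monotonically falls from 1 at xi = 0
```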
An evaluation of fusion algorithms using image fusion metrics and human identification performance
The performance of image fusion algorithms is evaluated using image fusion quality metrics and observer performance in identification perception experiments. Image Intensified (I2) and LWIR images are used as the inputs to the fusion algorithms. The test subjects are tasked to identify potentially threatening handheld objects in both the original and fused images. The metrics used for evaluation are mutual information (MI), fusion quality index (FQI), weighted fusion quality index (WFQI), and edge-dependent fusion quality index (EDFQI). Some of the fusion algorithms under consideration are based on Peter Burt's Laplacian Pyramid, Toet's Ratio of Low Pass (RoLP or contrast ratio), and Waxman's Opponent Processing. Also considered in this paper are pixel averaging, superposition, multi-scale decomposition, and shift invariant discrete wavelet transform (SIDWT). The fusion algorithms are compared using human performance in an object-identification perception experiment. The observer responses are then compared to the image fusion quality metrics to determine the amount of correlation, if any. The results of the perception test indicated that the opponent processing and ratio of contrast algorithms yielded the greatest observer performance on average. Task difficulty (V50) associated with the I2 and LWIR imagery for each fusion algorithm is also reported.
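The mutual information (MI) metric mentioned above can be sketched from a joint histogram of a source image and the fused image. This is a generic MI estimator, not necessarily the exact formulation used in the paper; the bin count and function names are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their joint
    histogram (in bits)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(src_a, src_b, fused, bins=32):
    """MI fusion score: information the fused image carries about each source."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))

rng = np.random.default_rng(1)
x = rng.random((32, 32))
y = rng.random((32, 32))
```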
EO/IR sensor model for evaluating multispectral imaging system performance
This paper discusses the capabilities of an EO/IR sensor model developed to provide a robust means for comparative assessments of infrared FPAs and sensors operating in the infrared spectral bands that coincide with the atmospheric windows: SW1 (1.0-1.8 μm), sMW (2-2.5 μm), MW (3-5 μm), and LW (8-12 μm). The applications of interest include thermal imaging, threat warning, missile interception, UAV surveillance, forest fire and agricultural crop health assessments, and mine detection. As a true imaging model it also functions as an assessment tool for single-band and multi-color imagery. The detector model characterizes InGaAs, InSb, HgCdTe, QWIP, and microbolometer sensors for spectral response, dark currents, and noise. The model places the specified FPA into an optical system, evaluates system performance (NEI, NETD, MRTD, and SNR), and creates two-point-corrected imagery complete with 3-D noise image effects. Analyses are possible for both passive and active laser-illuminated scenes for simulated state-of-the-art IR FPAs and avalanche photodiode detector (APD) arrays. Simulated multispectral image comparisons expose various scene components of interest, which are illustrated using the imaging model. This model has been exercised here as a predictive tool for the performance of state-of-the-art detector arrays in optical systems in the five spectral bands (atmospheric windows) from the SW to the LW and as a potential testbed for prototype sensors. Results of the analysis will be presented for various targets for each of the focal plane technologies for a variety of missions.
Modeling the effects of high contrast and saturated images on target acquisition performance
Brian P. Teaney, Jonathan G. Hixson, Bill Blecha
Most infrared sensors allow for adjustment of the sensor's gain and level settings. This adjustment of gain and level affects the contrast of the output image. This process is accounted for in the current US Army thermal target acquisition model (NVThermIP) using the scene contrast temperature. By changing the scene contrast temperature in NVThermIP, the system gain can be modified to reflect varying contrast levels presented at the display. In this paper, the results of perception experiments dealing with image contrast and saturation are reviewed. These results are compared with predicted performance based on the target task difficulty metric used in NVThermIP.
Designing an error metric for super-resolution enhanced IR passive ranging
Jae H. Cha, A. Lynn Abbott
Assessment of the impact of image resolution enhancement on range estimation with stereo vision systems provides valuable insight into the design and feasibility of advanced passive ranging systems. Application of such enhancements to stereo analysis for visible-band cameras has shown promising results in the past. These methods need to be extended to the infrared band for a day/night operational capability, and in particular the performance of uncooled infrared sensors needs to be quantified. Here, the effect of resolution enhancement on the estimation of stereo disparity, a quantity that directly relates to range, is examined empirically using a low-resolution uncooled staring infrared camera, and the results are analyzed with respect to measured data. Currently available resolution enhancement algorithms, such as those based on maximum a posteriori (MAP) and Markov chain Monte Carlo (MCMC) methods, are utilized. The variance of the disparity estimation error is chosen as the performance metric and is examined as a function of algorithm parameters, target-to-background differential temperature, image noise, and baseline distance. Based on this metric, an empirical model for performance gain is introduced. Overall, resolution enhancement processing is beneficial to stereo disparity estimation, especially when the signal-to-noise ratio is high and when sample-scene phasing impedes the accuracy of estimation.
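The link between disparity and range that motivates the error metric is the standard stereo relation Z = fB/d, with first-order propagation of disparity noise into range error. A minimal sketch, with all parameter names assumed:

```python
import numpy as np

def disparity_to_range(disparity_px, focal_px, baseline_m):
    """Range from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def range_error_std(z_m, focal_px, baseline_m, sigma_d_px):
    """First-order propagation of disparity noise into range error:
    sigma_Z ~= Z^2 / (f * B) * sigma_d, so range error grows quadratically
    with range for a fixed disparity noise."""
    return z_m**2 / (focal_px * baseline_m) * sigma_d_px

z = disparity_to_range(10.0, 500.0, 0.5)  # 500 px focal length, 0.5 m baseline
```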
IR system field performance with superresolution
Jonathan Fanning, Justin Miller, Jennifer Park, et al.
Superresolution processing is currently being used to improve the performance of infrared imagers through an increase in sampling, the removal of aliasing, and the reduction of fixed-pattern noise. The performance improvement of superresolution has not been previously tested on military targets. This paper presents the results of human perception experiments to determine field performance on the NVESD standard military eight (8)-target set using a prototype LWIR camera. These experiments test and compare human performance of both still images and movie clips, each generated with and without superresolution processing. Lockheed Martin's XR® algorithm is tested as a specific example of a modern combined superresolution and image processing algorithm. Basic superresolution with no additional processing is tested to help determine the benefit of separate processes. The superresolution processing is modeled in NVThermIP for comparison to the perception test. The measured range to 70% probability of identification using XR® is increased by approximately 34% while the 50% range is increased by approximately 19% for this camera. A comparison case is modeled using a more undersampled commercial MWIR sensor that predicts a 45% increase in range performance from superresolution.
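Basic superresolution with no additional processing can be sketched as shift-and-add: low-resolution frames with known sub-pixel shifts are accumulated on a finer grid and averaged, increasing sampling and averaging down temporal noise. This is a generic illustration, not Lockheed Martin's XR® algorithm; the registration (shifts) is assumed known rather than estimated.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Basic shift-and-add superresolution.

    frames: list of HxW low-res frames; shifts: known sub-pixel (dy, dx)
    offsets in low-res pixels; factor: upsampling factor.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample onto the nearest high-res grid cell.
        hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (hy, hx), np.asarray(frame, dtype=float))
        np.add.at(cnt, (hy, hx), 1.0)
    filled = cnt > 0
    acc[filled] /= cnt[filled]
    return acc

# Four frames whose shifts tile the 2x upsampled grid exactly.
frames = [np.full((4, 4), 3.0)] * 4
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hr = shift_and_add(frames, shifts, 2)
```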
Systems and Testing I
Real-time image processing and fusion for a new high-speed dual-band infrared camera
A dual-band infrared camera system based on a dual-band quantum well infrared photodetector (QWIP) has been developed for acquiring images in both the mid-wavelength (MWIR) and long-wavelength (LWIR) infrared spectral bands. The system delivers exactly pixel-registered, simultaneously acquired images, with the advantage that appropriate signal and image processing can exploit differences in the characteristics of those bands. Thus, the camera reveals more information than a single-band camera: it helps distinguish between targets and decoys and can defeat many IR countermeasures such as smoke, camouflage, and flares. The system also makes it possible to identify materials (e.g. glass, asphalt, slate, etc.), to distinguish sun reflections from hot objects, and to visualize hot exhaust gases. Dedicated software for real-time processing and exploitation extends the application domain of the camera system. One component corrects the images and allows for overlays with complementary colors so that differences become apparent. Another software component provides robust estimation of the transformation parameters between consecutive images in the image stream for registration purposes. This feature stabilizes the images even under rugged conditions and allows the image stream to be stitched automatically into large mosaic images. Mosaic images facilitate the inspection of large objects and scenarios and give human observers a better overview. In addition, image-based MTI (moving target indication), including the case of a moving camera, is under development. This component aims at surveillance applications and could also be used for camouflage assessment of moving targets.
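The complementary-color overlay idea can be sketched by mapping one band to red and the other to cyan, so that pixel-registered band differences appear colored while identical responses stay gray. A minimal illustration; the normalization and channel assignment are assumptions, not the system's actual display pipeline.

```python
import numpy as np

def complementary_overlay(mwir, lwir):
    """Overlay two pixel-registered bands in complementary colors:
    MWIR drives red, LWIR drives green and blue (cyan). Where the bands
    agree the result is gray; where they differ it is tinted red or cyan."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    m, l = norm(mwir), norm(lwir)
    return np.stack([m, l, l], axis=-1)

# Identical bands produce a pure grayscale (R == G == B) image.
a = np.array([[0.0, 1.0], [2.0, 3.0]])
rgb = complementary_overlay(a, a)
```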
Broad-band optical test bench (OPTISHOP) to measure MTF and transmittance of visible and IR optical components
Dario Cabib, Amir Rahav, Tamir Barak
CI Systems has developed a new cost-effective and modular optical test bench (the OPTISHOP system) to measure the modulation transfer function (MTF) and transmittance of optical components in the visible/near-infrared (0.4-1.7 microns) and infrared (3 to 14 microns) spectral ranges. The optical design concept allows the user to switch from MTF (on- and off-axis) to transmittance measurements without any optical alignment by the user. In addition, broad-band sources are used for illumination, so that these optical properties can be measured over the whole relevant wavelength range of the components under test (usually visible and/or near-infrared separately from the infrared range). Other lens measurements, such as effective focal length, can be made; back focal length, distortion, and field curvature measurements are being developed. The system is based on CI's standard and proven CTS (Collimator Test System) product line, which is built from reflective optics for wide wavelength coverage and is ruggedly constructed for use in the laboratory, on the production line, or in a maintenance depot. An advantage of the CTS configuration is that the source-collimator assembly is enclosed in a robust mechanical envelope, which prevents accidental misalignments and breakage, optical misalignments due to environmental temperature drifts, and soiling of the optics, and makes the system easier to transport. The system is described here, including calibration and validation techniques.
Detector spatial response testing of LWIR FPAs
K. A. Lindahl, W. Burmester, K. L. Whiteaker, et al.
The spatial response of a FPA is an important attribute of image quality. A novel test station for determining detector MTF has been developed and used on LWIR FPAs. The test station focuses an illuminated pinhole aperture onto a FPA, creating a sub-pixel spot. Total system MTF is determined by scanning the spot across the FPA. Optics MTF is measured by moving the imaged spot through focus and applying phase retrieval methods. The Optics MTF is then removed from the measured total MTF to produce the detector MTF. The technique has been applied to large area LWIR FPAs.
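The division step described above, removing the optics MTF from the total measured MTF to isolate the detector MTF, can be sketched as follows. It assumes the system MTF separates as a product; the frequency floor guarding against division by a near-zero optics response is an assumption.

```python
import numpy as np

def detector_mtf(total_mtf, optics_mtf, floor=1e-3):
    """Recover detector MTF from measured system and optics MTF, assuming
    MTF_total = MTF_optics * MTF_detector. Frequencies where the optics MTF
    falls to `floor` or below are masked out (returned as NaN) because the
    division there is dominated by noise."""
    total_mtf = np.asarray(total_mtf, dtype=float)
    optics_mtf = np.asarray(optics_mtf, dtype=float)
    det = np.full_like(total_mtf, np.nan)
    ok = optics_mtf > floor
    det[ok] = total_mtf[ok] / optics_mtf[ok]
    return det

# Synthetic check: recover a known detector MTF from the product.
optics = np.array([1.0, 0.8, 0.5, 0.0005])
det_true = np.array([1.0, 0.9, 0.7, 0.5])
det = detector_mtf(optics * det_true, optics)
```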
Automated testing of ultraviolet, visible, and infrared sensors using shared optics
Complex systems ranging from unmanned vehicles to night vision goggles rely on various spectral regions to achieve the demanding imaging performance they require. The lines between infrared, visible, and ultraviolet are quickly blurring as multi-sensor systems become more sophisticated and image fusion becomes commonplace. Typically, sensor testing requires hardware and software designed exclusively for the spectral region of interest; thus a system with ultraviolet-through-infrared imaging capabilities could require up to three separate test benches for sensor characterization. This not only drives up the cost of testing but also leads to a discontinuity of methods and possibly skewed results. This paper will discuss hardware and software developed by the authors that use identical test methods and shared optics to complete infrared, visible, and ultraviolet sensor performance analysis. Challenges encompassing multiple-source switching and combining will be addressed, along with design choices related to specifying optics and targets of sufficient quality and construction to cover the full spectral region. Test methodology controlled by a single software suite will be summarized, including modulation transfer function, signal-to-noise ratio, uniformity, focus, distortion, intrascene dynamic range, and sensitivity. Examples of results obtained by these test systems will be presented.
Systems and Testing II
Advances in infrared lens characterization: measurement of MTF using common undersampled IR systems
The modulation transfer function (MTF) measurement has been a staple of optics testing for many years. Obtaining a highly accurate measurement of the MTF of a lens, however, has remained a challenge for a number of reasons. Traditional MTF tests give a measure of overall system performance rather than characterizing individual parts such as the lens. Also, the theoretical performance of the optics generally outstrips FPA/camera performance by a wide margin. Quantifying lens performance therefore typically requires intricate hardware setups, such as specialized single-detector systems, which are very difficult to use, have few other applications, and are quite expensive. This paper will describe an improved technique for measuring the optical quality of infrared optical systems, as well as preliminary research regarding individual-component (i.e., lens) MTF. In particular, the methodology presented will expand upon the traditional "tilted slit" technique and demonstrate an improved test capability for characterization of MTF and other optical unit under test (UUT) performance parameters. We will describe a methodology that uses Gaussian energy profiling and novel collection optics to deliver an MTF measurement capability with resolution and usability superior to that of single-point measurement techniques. The paper will also discuss the optical system requirements and mathematical algorithms required to provide a fast, accurate, and high-resolution FFT/MTF capability, and support for a range of other optical UUT characterization modes.
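At the core of any slit-based MTF measurement is the Fourier transform of the measured line spread function (LSF). A minimal sketch under the assumption of a uniformly sampled, background-subtracted LSF; function and parameter names are placeholders, and this omits the slit-width and tilt corrections a real measurement needs.

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function. Returns (spatial frequencies, MTF)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                  # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))         # magnitude is shift-invariant
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
    return freqs, mtf / mtf[0]             # force MTF(0) = 1

# A Gaussian LSF should give a smoothly decreasing MTF.
x = np.arange(-32, 32)
lsf = np.exp(-x**2 / (2 * 4.0**2))
freqs, mtf = mtf_from_lsf(lsf, sample_pitch=1.0)
```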
Characterization of a C-QWIP LWIR camera
David P. Forrai, Mark Sempsrott, Robert Fischer, et al.
Large format corrugated quantum well infrared photodetector (C-QWIP) focal plane arrays (FPAs) have been developed over the past two years. The results of this development have demonstrated the potential for this technology to satisfy requirements for very large format high performance long-wave infrared (LWIR) imaging systems. One particular C-QWIP design has focused on developing an FPA that operates in the 8 to 10 μm spectrum with integration times in the millisecond regime when used against warm backgrounds. This FPA is very suitable for many LWIR applications and has been integrated into a camera system. The specifications of that camera are described in this paper. The characterization of this camera system includes standard electro-optical tests and compares the results of those tests to theoretical models for the FPA. This paper concludes by describing the ongoing effort to tailor the system specifically for the C-QWIP. This includes design features of the read-out integrated circuit (ROIC), dewar-cooler design and interfacing electronics, and video processing. This thorough characterization of the camera has demonstrated the utility of the C-QWIP FPA for LWIR imaging and has established a path forward to further improve the performance of imaging systems implementing this technology.
Near infrared testbed sensor
A new tactical airborne multicolor missile warning testbed was developed and fielded as part of an Air Force Research Laboratory (AFRL) initiative focusing on clutter and missile signature measurements for algorithm development. Multicolor discrimination is one of the most effective ways of improving the performance of infrared missile warning sensors, particularly in heavy clutter, and its utility has been demonstrated in multiple fielded sensors. Traditionally, multicolor discrimination has been performed in the mid-infrared 3-5 μm band, where the molecular emission of CO and CO2 characteristic of a combustion process is readily distinguished from the continuum of a blackbody radiator. Current infrared warning sensor development is focused on near-infrared (NIR) staring mosaic detector arrays that provide similar spectral discrimination in different bands, yielding a cost-effective and mechanically simpler system. This, in turn, has required that multicolor clutter data be collected for both analysis and algorithm development. The developed sensor testbed is a multi-camera system with 1004x1004 FPAs coupled with optimized filters integrated into the optics. The collection portion includes a ruggedized field-programmable gate array processor coupled with an integrated controller/tracker and a fast disk array capable of real-time processing and collection of up to 60 full frames per second. This configuration allowed the collection and real-time processing of temporally correlated, radiometrically calibrated data in multiple spectral bands that was then compared to background and target imagery taken previously.
Signal processing for distributed sensor concept: DISCO
The Distributed Sensor Concept (DISCO) is proposed to multiply the capabilities of individual sensors through cooperative target engagement. DISCO relies on the ability of signal processing software to format, process, transmit, and receive sensor data and to exploit those data in the signal synthesis process. Each sensor's data is synchronized, formatted, signal-to-noise ratio (SNR) enhanced, and distributed within the sensor network. The signal processing technique for DISCO is Recursive Adaptive Frame Integration of Limited data (RAFIL), initially proposed [1] as a way to improve the SNR, reduce the data rate, and mitigate FPA correlated noise in an individual sensor's digital video-signal processing. In DISCO, RAFIL is used in a segmented way, with the constituents of the technique spatially and/or temporally separated between transmitters and receivers. Those constituents include, though are not limited to, two thresholds (one tuned for optimum probability of detection, the other to manage the required false alarm rate) and limited frame integration placed between the thresholds, as well as formatters, conventional integrators, and more. RAFIL allows a non-linear integration that, along with SNR gain, gives system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability [2]. The DISCO architecture allows flexible optimization of SNR gain, data rates, and noise suppression on the sensor side, with limited integration, re-formatting, and final thresholding on the node side. DISCO with RAFIL permits a flexible architecture in which the hardware and software can be segmented to best suit specific applications and sensing needs, whether air or space platforms, ground terminals, or the integration of sensor networks.
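The limited-frame-integration element between the two thresholds can be illustrated with a simple sketch: a detection pre-threshold is applied per frame before summation, and integrating N frames yields roughly a sqrt(N) SNR gain for uncorrelated noise (signal grows as N, noise as sqrt(N)). This is a schematic illustration only, not the RAFIL implementation from [1].

```python
import numpy as np

def integrate_frames(frames, pre_threshold=None):
    """Limited frame integration: optionally zero sub-threshold samples in
    each frame (first, detection-tuned threshold) before summing; a second
    threshold on the integrated output would then manage false alarms."""
    acc = np.zeros_like(np.asarray(frames[0], dtype=float))
    for f in frames:
        f = np.asarray(f, dtype=float)
        if pre_threshold is not None:
            f = np.where(f >= pre_threshold, f, 0.0)
        acc += f
    return acc

# A steady unit signal integrates linearly with the number of frames.
out = integrate_frames([np.ones((2, 2))] * 4)

# The pre-threshold discards weak samples before they are integrated.
f = np.array([[0.5, 2.0]])
out2 = integrate_frames([f, f], pre_threshold=1.0)
```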
Laser dazzling of focal plane array cameras
Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we discuss the results of three different dazzling experiments performed with MWIR lasers and show that the results obtained are independent of the read-out mechanism of the camera and can be explained by an expression derived from the point spread function of the optics. This expression also allows us to estimate the laser power required to saturate a complete focal plane array in a camera system. Simulated images with simulated dazzling effects based on this expression will be shown.
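An expression of the kind described, estimating the extent of the saturated region from the point spread function, can be sketched for a Gaussian PSF approximation: solving E0 * exp(-r^2 / (2 sigma^2)) = E_sat for r gives the saturation radius. This Gaussian form is an illustrative assumption, not the expression derived in the paper.

```python
import numpy as np

def saturation_radius(peak_irradiance, sat_irradiance, sigma):
    """Radius on the FPA inside which a Gaussian laser spot saturates pixels.

    Solves E0 * exp(-r^2 / (2 sigma^2)) = E_sat for r; returns 0 when even
    the peak irradiance stays at or below the saturation level."""
    if peak_irradiance <= sat_irradiance:
        return 0.0
    return sigma * np.sqrt(2.0 * np.log(peak_irradiance / sat_irradiance))

# A peak 100x above saturation with a 2-pixel PSF sigma saturates a disc
# of radius 2 * sqrt(2 ln 100) ~ 6.07 pixels.
r = saturation_radius(100.0, 1.0, 2.0)
```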
Poster Session
Multimodal human verification using stereo-based 3D information, IR, and speech
In this paper, we propose a personal verification method using 3D face information, infrared (IR) imagery, and speech to improve on the performance of single-biometric authentication. False acceptance rate (FAR) and false rejection rate (FRR) have been a fundamental bottleneck of real-time personal verification. The proposed method uses principal component analysis (PCA) for face recognition and a hidden Markov model (HMM) for speech recognition, based on a stereo acquisition system with IR imagery. The 3D face information, i.e., the face's depth and distance, is acquired with the stereo system. The proposed system consists of eye detection, facial pose direction estimation, and PCA modules. An IR image of the human face presents a unique heat signature and can be used for recognition; here, IR images are used only to decide whether a live human face is present. Fuzzy logic is used for the final verification decision. Experimental results show that the proposed system can reduce the FAR, demonstrating that the method overcomes the limitations of single-biometric systems and provides stable person authentication in real time.
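The PCA (eigenface-style) verification step can be sketched as projection onto a learned subspace followed by a distance test against an enrolled template. This is a generic illustration, not the authors' system; the threshold, data shapes, and function names are assumptions.

```python
import numpy as np

def fit_pca(face_vectors, n_components):
    """Eigenface-style PCA: returns the mean face and the top principal
    components of a stack of flattened face images."""
    X = np.asarray(face_vectors, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project a flattened face vector onto the eigenface subspace."""
    return components @ (np.asarray(face, dtype=float) - mean)

def verify(face, enrolled_coeffs, mean, components, threshold):
    """Accept when the projected face is close to the enrolled template."""
    dist = float(np.linalg.norm(project(face, mean, components) - enrolled_coeffs))
    return dist < threshold

# Enroll one synthetic "face" and verify it against itself.
rng = np.random.default_rng(0)
faces = rng.random((20, 50))          # 20 flattened training faces
mean, comps = fit_pca(faces, 5)
coeffs = project(faces[0], mean, comps)
```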