Proceedings Volume 6207

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 5 May 2006
Contents: 8 Sessions, 33 Papers, 0 Presentations
Conference: Defense and Security Symposium 2006
Volume Number: 6207

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Modeling I
  • Modeling II
  • Modeling III
  • Modeling IV
  • Joint Session with Conference 6208
  • Systems and Testing I
  • Systems and Testing II
  • Poster Session
Modeling I
Angular distance traveled across the eye as figure of merit for detecting moving targets
Barbara L. O'Kane, Gary Page
The effect of motion on the detectability of low contrast targets is important for predicting target acquisition. Previous perception studies addressing a model for predicting detection of low contrast moving targets used square targets, reflecting the description of a target given in the ACQUIRE model, along with various angular velocities. The results showed that the figure of merit for probability of detection was a function of the size of the target and the angular distance traveled on the screen. To determine whether the moving target model required greater precision in the description of the shape of the target, the present perception study used five different sized targets with aspect ratios of 4:1 or 1:4, for a total of ten target configurations. The results confirmed the probability of detection as a function of the angular distance traveled and the square root of the area, showing no consistent or significant effect of horizontal or vertical orientation, nor of velocity. A simplified formula for the angular distance threshold as a function of target size is proposed.
Modeling defined field of regard (FOR) search and detection in urban environments
Currently there is no model validation to confirm the extension of the US Army Night Vision and Electronic Sensors Directorate (NVESD) field-of-view (FOV) urban search model to a field-of-regard (FOR) model. Human perception testing has been performed at NVESD to further understand the search and target acquisition task within a defined FOR in the urban environment. A modification to the current search model is proposed based on 30 simulated long-wave infrared (LWIR) FORs created by the NVESD Electro-optics Simulation Tool-kit. The FOR model was tested against human observer performance using an additional set of real LWIR FOR images that were captured at a US Army MOUT training site. The real imagery included thermal signatures during both day and night, and visible imagery during the day. Comparisons have been made between these perception test results and the time-limited search model. Current model assumptions concerning the relationship between the FOV search and FOR search tasks are evaluated and discussed.
Characteristics of infrared imaging systems which benefit from super-resolution reconstruction
Keith Krapels, Ronald G. Driggers, Eddie Jacobs, et al.
There have been numerous applications of super-resolution reconstruction algorithms to improve the range performance of infrared imagers. These studies show there can be a dramatic improvement in range performance when super-resolution algorithms are applied to under-sampled imager outputs. The improvement occurs when the imager is moving relative to the target, which creates a different spatial sampling of the field of view for each frame. The degree of performance benefit depends on the relative sizes of the detector/spacing and the optical blur spot in focal plane space. The blur spot size on the focal plane is determined by the system F-number. Hence, in this paper we provide the range of these sensor characteristics over which there is a benefit from super-resolution reconstruction algorithms. Additionally, we quantify the potential performance improvements associated with these algorithms. We also provide three infrared sensor examples to show the range of improvements associated with the provided guidelines.
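The detector-pitch versus blur-spot trade described in the abstract can be illustrated with a short, hedged sketch: it compares the diffraction blur diameter (approximately 2.44·λ·F/#) with the detector pitch to judge how undersampled an imager is. The wavelength, F-number, and pitch values are illustrative assumptions, not parameters from the paper.

```python
# Sketch: judge how undersampled an imager is from its optics and detector pitch.
# The 2.44*lambda*F/# blur diameter and the example values below are illustrative
# assumptions, not parameters taken from the paper.

def blur_to_pitch_ratio(wavelength_um, f_number, pitch_um):
    """Ratio of diffraction blur spot diameter (Airy disk) to detector pitch."""
    blur_diameter = 2.44 * wavelength_um * f_number  # microns
    return blur_diameter / pitch_um

# Example: a MWIR imager (4 um) at F/2.5 with 20 um detectors
ratio = blur_to_pitch_ratio(wavelength_um=4.0, f_number=2.5, pitch_um=20.0)
print(f"blur/pitch = {ratio:.2f}")  # well below ~2 -> detector-limited, undersampled,
                                    # so super-resolution reconstruction can help
```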
Detector integration time issues associated with FLIR performance
Brian Miller, Eric Flug, Ron Driggers, et al.
IR detector integration time is determined by a combination of the scene or target radiance, the noise of the sensor, and the sensor sensitivity. Typical LWIR detectors such as those used in most U.S. military systems can operate effectively with integration times in the microsecond region. MWIR detectors require much longer integration times (up to several milliseconds) under some conditions to achieve good Noise Equivalent Temperature Difference (NETD). Emerging 3rd Generation FLIR systems incorporate both MWIR and LWIR detectors. The category of sensors known as uncooled LWIR requires thermal time constants, analogous to integration time, in the millisecond range to achieve acceptable NETD. These longer integration times and time constants would not limit performance in a purely static environment, but target or sensor motion can induce blurring under some circumstances. A variety of tasks and mission scenarios were analyzed to determine the integration time requirements for combinations of sensor platform movement and look angle. These were then compared to the typical integration times for MWIR and LWIR detectors to establish the suitability of each band for the functions considered.
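To make the motion-blur argument concrete, the minimal sketch below converts a platform angular rate and an integration time into blur expressed in pixels; the IFOV, angular rate, and integration-time values are assumed examples, not figures from the paper.

```python
# Sketch: blur (in pixels) induced during the integration time by sensor/target motion.
# IFOV, angular rate, and integration times below are assumed example values.

def blur_pixels(angular_rate_mrad_s, integration_time_s, ifov_mrad):
    """Angular motion during integration divided by the instantaneous FOV."""
    return angular_rate_mrad_s * integration_time_s / ifov_mrad

ifov = 0.1  # mrad per pixel (assumed)
for t_int in (30e-6, 2e-3, 10e-3):  # microsecond LWIR, millisecond MWIR, uncooled time constant
    print(f"t_int = {t_int*1e3:6.2f} ms -> blur = {blur_pixels(50.0, t_int, ifov):.2f} px")
```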
Modeling II
System engineering trades for the LWIR hyperspectral imager
This paper describes the modeling of system engineering trades for the LWIR hyperspectral imager. First the operational scenario is defined to constrain the system trade space. Then modeling trades for spectral sampling, spectral bandwidth and SNR are presented. Issues unique to operating in the LWIR band are addressed. These trades are presented in the context of current technology for FPA and optical design. Radiometric calibration is addressed in preparation for flight testing of the sensor.
Atmospheric turbulence effects on 3rd generation FLIR performance
Phil Richardson, Ronald G. Driggers
Previous analysis of ground-to-ground 3rd Generation FLIR performance (Driggers, et al1) showed two main performance characteristics of the 3rd Gen sensor's MWIR and LWIR bands: first, that no major differences in detection range were observed between the two bands, and second, that a significant ID range advantage for the MWIR band over the LWIR band resulted from the smaller diffraction spot of the shorter wavelength MWIR. That analysis predicted performance for a variety of atmospheric transmittances but only at a single, relatively low value of atmospheric turbulence. In this paper, analysis of the effect of varying turbulence shows that increased turbulence decreases the ID range performance of the MWIR relative to the LWIR, and that at high turbulence values the two bands have roughly equivalent performance. Further, the LWIR band actually surpasses the ID range performance of MWIR at high turbulence for some systems. Frequency of occurrence data collected in multiple environments shows a predominance of moderate to high turbulence conditions in the real world. The superior ID range performance of the MWIR is thus achievable only under limited real-world conditions, and the LWIR can surpass the performance of the MWIR in significant scenarios (desert, day). For maximum performance under a variety of conditions the dual band capability of 3rd Gen FLIR systems is thus required.
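A back-of-the-envelope comparison of diffraction-limited and turbulence-limited angular blur illustrates why strong turbulence erodes the MWIR resolution advantage. The sketch uses the standard plane-wave Fried-parameter expression; the aperture, path length, and Cn2 values are assumptions, not values from the analysis.

```python
import math

# Sketch: compare diffraction-limited and turbulence-limited angular blur in MWIR vs LWIR.
# Uses the standard plane-wave Fried parameter r0 = (0.423 k^2 Cn2 L)**(-3/5).
# Aperture, path length, and Cn2 values are assumed, not taken from the paper.

def fried_parameter(wavelength_m, cn2, path_m):
    k = 2 * math.pi / wavelength_m
    return (0.423 * k**2 * cn2 * path_m) ** (-3.0 / 5.0)

def angular_blur(wavelength_m, aperture_m, cn2, path_m):
    r0 = fried_parameter(wavelength_m, cn2, path_m)
    diffraction = wavelength_m / aperture_m   # rad
    turbulence = wavelength_m / r0            # rad (seeing-limited)
    return diffraction, turbulence

D, L = 0.15, 3000.0  # 15 cm aperture, 3 km path (assumed)
for name, lam in (("MWIR 4 um", 4e-6), ("LWIR 10 um", 10e-6)):
    for cn2 in (1e-15, 1e-13):  # weak vs strong turbulence
        d, t = angular_blur(lam, D, cn2, L)
        print(f"{name}, Cn2={cn2:.0e}: diffraction={d*1e6:.1f} urad, turbulence={t*1e6:.1f} urad")
```

Under the weak-turbulence assumption the MWIR diffraction blur is the limiting term and is smaller than the LWIR one; as Cn2 grows, the turbulence term dominates both bands and the gap closes, consistent with the trend described above.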
Adaptive deblurring of noisy images
S. Susan Young, Ronald G. Driggers, Brian P. Teaney, et al.
This paper proposes a practical sensor deblur filtering method for images that are contaminated with noise. A sensor blurring function is usually modeled via a Gaussian-like function having a bell shape. The straightforward inverse function results in magnification of noise at the high frequencies. In order to address this issue, we apply a special window to the inverse blurring function. This special window is called the power window, which is a Fourier-based smoothing window that preserves most of the spatial frequency components in the pass-band and attenuates quickly at the transition-band. The power window is differentiable at the transition point, which gives a desired smoothness property and limits the ripple effect. Utilizing properties of the power window, we design the deblurring filter adaptively by estimating the energy of the signal and noise of the image to determine the pass-band and transition-band of the filter. The deblurring filter design criteria are: a) filter magnitude is less than one at the frequencies where the noise is stronger than the desired signal (transition-band); b) filter magnitude is greater than one at the other frequencies (pass-band). Therefore, the adaptively designed deblurring filter is able to deblur the image by a desired amount based on the estimated or known blurring function while suppressing the noise in the output image. The deblurring filter performance is demonstrated by a human perception experiment in which 10 observers identified 12 military targets at 12 aspect angles. The results of comparing target identification probabilities for blurred, deblurred, noisy blurred (at two noise levels), and deblurred noisy images are reported.
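The design criteria can be illustrated with a generic frequency-domain sketch: an inverse of a Gaussian blur whose gain is rolled off wherever the estimated noise power exceeds the signal power. This Wiener-like example is not the authors' power-window filter; the blur width and noise level are assumed.

```python
import numpy as np

# Generic sketch of a noise-limited inverse (deblur) filter in the frequency domain.
# This is NOT the paper's power-window design; blur sigma and noise level are assumed.

def deblur_filter(shape, sigma_px, signal_psd, noise_psd):
    """Inverse Gaussian-blur filter, attenuated where noise dominates (Wiener-like)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    h = np.exp(-2 * (np.pi * sigma_px) ** 2 * (fx**2 + fy**2))  # Gaussian blur OTF
    # Gain > 1 in the pass-band (signal-dominated), < 1 where noise dominates.
    return np.conj(h) / (np.abs(h) ** 2 + noise_psd / signal_psd)

blurred = np.random.rand(128, 128)          # stand-in for a blurred, noisy image
H = deblur_filter(blurred.shape, sigma_px=1.5, signal_psd=1.0, noise_psd=0.01)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))
```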
Modeling of IR sensor performance in cold weather
Van A. Hodgkin, Brian Kowalewski, Dave Tomkinson, et al.
Noise in an imaging infrared (IR) sensor is one of the major limitations on its performance. As such, noise estimation is one of the major components of imaging IR sensor performance models and modeling programs. When computing noise, current models assume that the target and background are either at or near a temperature of 300 K. This paper examines how the temperature of the scene impacts the noise in IR sensors and their performance. It presents a strategy that can be used to make a 300 K assumption-based model compute the correct noise, and shows the results of measurements of signatures of a cold target against a cold background. Range performance of a notional 3rd Gen sensor (midwave IR and long wave IR) is then modeled as a function of scene background temperature.
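One way to see the scene-temperature effect is to scale a 300 K NETD by the ratio of the in-band thermal derivative of radiance at 300 K to that at the actual background temperature. The sketch below does this numerically from Planck's law; the bands and the 50 mK reference NETD are assumed values, not the paper's measurements.

```python
import numpy as np

# Sketch: scale a 300 K NETD to a colder background using the in-band radiance
# derivative dL/dT from Planck's law. Bands and the 50 mK reference NETD are assumed.

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def band_radiance(T, lam_lo_um, lam_hi_um, n=500):
    lam = np.linspace(lam_lo_um, lam_hi_um, n) * 1e-6  # m
    spectral = 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * T)) - 1.0)
    return np.trapz(spectral, lam)  # W / (m^2 sr)

def scaled_netd(netd_300k, T_scene, band):
    dT = 0.1
    dL_300 = (band_radiance(300 + dT, *band) - band_radiance(300 - dT, *band)) / (2 * dT)
    dL_T = (band_radiance(T_scene + dT, *band) - band_radiance(T_scene - dT, *band)) / (2 * dT)
    return netd_300k * dL_300 / dL_T  # effective NETD at the cold scene temperature

for name, band in (("MWIR 3-5 um", (3.0, 5.0)), ("LWIR 8-12 um", (8.0, 12.0))):
    print(name, f"NETD at 240 K ~ {scaled_netd(0.050, 240.0, band)*1e3:.0f} mK")
```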
Modeling III
Current infrared target acquisition approach for military sensor design and wargaming
Ronald G. Driggers, Eddie L. Jacobs, Richard H. Vollmerhausen, et al.
The U.S. Army's infrared target acquisition models have been used for many years by the military sensor community, and there have been significant improvements to these models over the past few years. Significant improvements are the Target Task Performance (TTP) metric for all imaging sensors, the ACQUIRE-LC approach for low contrast infrared targets, and the development of discrimination criteria for the urban environment. This paper is intended to provide an overview of the current infrared target acquisition modeling approach. This paper will discuss recent advances and changes to the models and methodologies used to: (1) design and compare sensors, (2) predict expected target acquisition performance in the field, (3) predict target detection performance for combat simulations, (4) measure and characterize human operator performance in an operational environment (field performance), and (5) relate the models to target acquisition tasks and address targets that are relevant to urban operations. Finally, we present a catalog of discrimination criteria, characteristic dimensions, and target contrasts.
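For reference, the target transfer probability function commonly used with the TTP metric and V50/N50 task-difficulty values has the form shown below; the coefficients are the widely cited ones and should be checked against the current NVESD model documentation rather than attributed to this paper.

```latex
% Target transfer probability function (TTPF) as commonly quoted for NVESD models;
% the coefficient values are the widely cited ones, not taken from this paper.
P(V) = \frac{(V/V_{50})^{E}}{1 + (V/V_{50})^{E}},
\qquad
E = 1.51 + 0.24\,\frac{V}{V_{50}}
```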
Recursive adaptive frame integration limited
Recursive Frame Integration Limited was proposed as a way to improve frame integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited integration performance at very low single-frame SNR. It combines the benefits of nonlinear recursive limited frame integration and adaptive thresholds with conventional frame integration.
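A heavily simplified, generic sketch of dual-threshold recursive integration is given below only to make the idea concrete; the recursion form, threshold values, and leak factor are illustrative assumptions and do not reproduce the paper's algorithm.

```python
import numpy as np

# Generic sketch of dual-threshold recursive integration (NOT the paper's algorithm).
# Pixels above a low detection threshold are accumulated recursively; a detection is
# declared when the accumulated value crosses a higher, false-alarm-managing threshold.

def recursive_integration(frames, t_low=1.0, t_high=4.0, leak=0.9):
    acc = np.zeros_like(frames[0])
    for frame in frames:
        contrib = np.where(frame > t_low, frame, 0.0)  # keep only candidate pixels
        acc = leak * acc + contrib                      # recursive (leaky) accumulation
        yield acc > t_high                              # detection map for this frame

frames = [np.random.randn(64, 64) + 0.5 for _ in range(10)]  # low-SNR synthetic frames
detections = list(recursive_integration(frames))
```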
Effects of image compression on sensor performance
As the number of fielded sensors increases, together with increasing sensor format size and more spectral bands, the amount of sensor information available is rapidly multiplying. Additionally, sensors are increasingly being implemented in sensor networks with wired or wireless exchange of information. To handle the increasing load of data, often with limited network bandwidth resources, both still and moving imagery can be highly compressed, resulting in a 50- to 100-fold (or more) decrease in required network bandwidth. However, such high levels of compression are not error-free, and the resulting images contain artifacts that may adversely affect the ability of observers to detect or identify targets of interest. This paper attempts to quantify the impact of image compression on observer tasks such as target identification. We will address multiple typically used compression algorithms, at varying degrees of high compression, in a series of controlled perception experiments to isolate the effects and quantify the impact on observer tasking. Recommendations will be made on how to incorporate performance degradation caused by image compression with other sensor design factors in designing a remote sensor with compressed imagery.
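As a simple illustration of the bandwidth argument, the sketch below JPEG-compresses a smooth synthetic frame at aggressive quality settings and reports the resulting compression ratio; Pillow is assumed to be available, and the frame content and quality values are arbitrary.

```python
import io
import numpy as np
from PIL import Image

# Sketch: measure the compression ratio of a frame at aggressive JPEG quality settings.
# Pillow is assumed available; the smooth synthetic frame and quality values are arbitrary.

x = np.linspace(0, 1, 512)
frame = (255 * np.outer(np.sin(4 * np.pi * x) ** 2, np.cos(3 * np.pi * x) ** 2)).astype(np.uint8)
raw_bytes = frame.size  # one byte per pixel, uncompressed

for quality in (50, 10, 5):
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
    print(f"quality={quality:3d}: ratio = {raw_bytes / len(buf.getvalue()):.1f}:1")
```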
Using a targeting metric to predict the utility of an EO imager as a pilotage aid
Richard H. Vollmerhausen, Trang Bui
Army aviators use both image intensified goggles and thermal imagers as night vision aids when flying helicopters at night. The Targeting Task Performance (TTP) metric can be used to predict how well these imagers support the pilotage task under different illumination and thermal contrast conditions. The TTP metric predicts the field performance of the Aviator's Night Vision Imaging System, the Apache Helicopter Pilot's Night Vision System, and the Advanced Helicopter Pilotage system. These three systems represent diverse technologies: image intensified goggles, a first generation thermal imager, and a second-generation thermal imager. The ability of the TTP metric to predict the behavior of these diverse systems is evidence of its suitability as a pilotage metric. This paper discusses the application of the TTP metric to helicopter pilotage. Data from field surveys of Army aviators are used to validate the use of the TTP metric to predict pilotage system performance. Since knowledge of scene contrast is necessary to make sensor design trades using the TTP metric, a discussion of the terrain thermal contrast available in the mid-wave and long-wave infrared is also provided.
Effect of image enhancement on the search and detection task in the urban terrain
This research investigated the effects of using medical imaging enhancement techniques to increase the detectability of targets in the urban terrain. Targets in the urban environment present human observers with different challenges than targets located in the traditional, open-field search environment. In the traditional environment, targets typically were military vehicles in a natural background. In the urban environment, targets were humans against a man-made background. The U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) and the U.S. Army Research Laboratory (ARL) explored three image processing techniques: contrast enhancement, edge enhancement, and a multiscale edge-domain process referred to as "mountain-view", in which high-contrast edges are enhanced. Human perception experiments were first conducted with non-enhanced real imagery collected from an Urban Operations training center to establish a baseline response; further perception experiments were then conducted with imagery processed using the techniques above. The performance parameters used for comparison were probability of detection and time required to detect a target. This research provided a methodology for evaluating and quantifying human performance differences in target acquisition based on image processing techniques in the urban environment.
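The first two processing approaches can be sketched with standard, generic operations: global histogram equalization as a stand-in for contrast enhancement and unsharp masking for edge enhancement (the multiscale edge-domain "mountain-view" process is not reproduced here). The parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic stand-ins for the contrast- and edge-enhancement steps (the multiscale
# "mountain-view" process is not reproduced). Parameter values are assumed.

def contrast_enhance(img):
    """Global histogram equalization on an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (255 * cdf[img]).astype(np.uint8)

def edge_enhance(img, sigma=2.0, amount=1.5):
    """Unsharp masking: add back a scaled high-pass component."""
    low = gaussian_filter(img.astype(float), sigma)
    return np.clip(img + amount * (img - low), 0, 255).astype(np.uint8)

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for urban LWIR imagery
enhanced = edge_enhance(contrast_enhance(img))
```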
Modeling IV
Applicability of TOD, MTDP, MRT and DMRT for dynamic image enhancement techniques
Current end-to-end sensor performance measures such as the TOD, MRT, DMRT and MTDP were developed to describe Target Acquisition performance for static imaging. Recent developments in sensor technology (e.g. microscan) and image enhancement techniques (e.g. Super Resolution and Scene-Based Non-Uniformity Correction) require that a sensor performance measure can be applied to dynamic imaging as well. We evaluated the above-mentioned measures using static, dynamic (moving) and different types of enhanced imagery of thermal 4-bar and triangle tests patterns. Both theoretical and empirical evidence is provided that the bar-pattern based methods are not suited for dynamic imaging. On the other hand, the TOD method can be applied easily without adaptation to any of the above-mentioned conditions, and the resulting TOD data are in correspondence with the expectations. We conclude that the TOD is the only current end-to-end measure that is able to quantify sensor performance for dynamic imaging and dynamic image enhancement techniques.
Threat object identification performance for LADAR imagery: comparison of two-dimensional versus three-dimensional imagery
Matthew A. Chaudhuri, Ronald G. Driggers, Brian C. Redman, et al.
This research was conducted to determine the change in human observer range performance when LADAR imagery is presented in stereo 3D vice 2D. It compares the ability of observers to correctly identify twelve common threatening and non-threatening single-handed objects (e.g. a pistol versus a cell phone). Images were collected with the Army Research Lab/Office of Naval Research (ARL/ONR) Short Wave Infrared (SWIR) Imaging LADAR. A perception experiment, utilizing both military and civilian observers, presented subjects with images of varying angular resolutions. The results of this experiment were used to create identification performance curves for the 2D and 3D imagery, which show probability of identification as a function of range. Analysis of the results indicates that there is no evidence of a statistically significant difference in performance between 2D and 3D imagery.
A new approach for object measurement under out-of-focus situation
Accurate measurement is important for object detection, pattern recognition, object classification, and industrial inspection. Digital image processing techniques have been used to improve the accuracy of object measurement. However, it is still challenging to measure an object, especially when the camera was out of focus during image acquisition; sometimes it is even impossible to measure the blurred object. In this paper, the Gaussian degradation model is used to estimate the edges and the size of the object. Currently this approach measures in one dimension. It can be used to measure an (almost) totally blurred object. The experiments show encouraging results.
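The Gaussian degradation model implies that a blurred one-dimensional step edge follows an error-function profile, so fitting that profile recovers the edge location (and blur width) even when the object is strongly defocused. The sketch below fits such a profile with SciPy; the data and starting guesses are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Sketch: a step edge blurred by a Gaussian PSF follows an error-function profile.
# Fitting the profile recovers the edge location (and blur sigma) even when the
# object is strongly out of focus. The synthetic data below are assumptions.

def blurred_edge(x, amplitude, offset, edge_pos, sigma):
    return offset + 0.5 * amplitude * (1 + erf((x - edge_pos) / (sigma * np.sqrt(2))))

x = np.arange(100.0)
truth = blurred_edge(x, amplitude=80, offset=20, edge_pos=42.3, sigma=6.0)
profile = truth + np.random.normal(0, 1.0, x.size)  # noisy 1-D scan across the edge

popt, _ = curve_fit(blurred_edge, x, profile, p0=[60, 10, 50, 3])
print(f"estimated edge position: {popt[2]:.2f} px, blur sigma: {popt[3]:.2f} px")
# Fitting both edges of the object and differencing their positions gives its size.
```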
Modeling the effects of image contrast on thermal target acquisition performance
Jonathan G. Hixson, Brian Teaney, Eddie L. Jacobs
Most sensors allow the user to adjust, at will, a parameter that modifies the displayed scene contrast in the image. This parameter is usually referred to as gain. The current US Army thermal target acquisition model (NVThermIP) accounts for gain by introducing a scaling parameter called scene contrast temperature. First, the current US Army theory of target acquisition is reviewed and the particular means for modeling sensor gain are highlighted. Then the definitions of scene and target contrast are discussed. The results of two paired-comparison perception experiments are analyzed. One of the experiments gives insight into a target identification task, and the other gives insight into a search task. Conclusions regarding the limits of applicability of values for scene contrast temperature are then discussed.
Joint Session with Conference 6208
Can IR scene projectors reduce total system cost?
There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and forcing never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity to the FPA factory manager of trying to meet varied, and changing, requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.
Bolometers running backward: the synergy between uncooled IR sensors and dynamic IR scene projectors
The leading IR scene projection (IRSP) device technology, resistive emitter arrays, has grown from its early roots in the uncooled microbolometer community into a separate and highly specialized field of its own. IRSP systems incorporating "microbolometers running backwards" are critical tools now ubiquitous in laboratory testing and evaluation of high performance IR sensors and their embedded algorithms. Adoption of IRSPs has reduced the scope of flight/field testing, producing dramatic resource savings and strong system development advantages. Modern IRSP systems provide the capability to project high-resolution (1024 x 1024), high-temperature (750 K) dynamic MWIR-LWIR imagery at frame rates up to 200 Hz, with 16-bit input resolution. Novel IRSP systems are now being developed to test advanced FPAs and sensors requiring wide-format (768 x 1536), cryogenic background (50-80 K), fast-framing (400 Hz), and/or very high-temperature (2500 K) dynamic IR simulation in order to be properly evaluated. The ongoing cycle of sensor improvement and test system evolution is perfectly illustrated by the parallel development of IRSP and emerging FPA/sensor technologies. The cross-pollination of technology between the sensor and projector domains continues to bring innovation to both communities. Technological trends related to semiconductor and microelectrical-mechanical system (MEMS) device fabrication, real-time digital video processing, and EO system design are being exploited by both sensor and projector developers alike - with advantages realized by both. This paper presents a lighthearted overview of the technical evolution of IRSP from its early microbolometer roots, discusses current and emerging IRSP capabilities, illustrates the device-level to system-level synergy between sensors and projectors, and offers a peek into the advanced EO simulation capabilities and technologies which will be required to address emerging FPA and sensor trends.
Are reconstruction filters necessary?
Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.
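A short numerical sketch of the three conditions: band-limit the signal, sample it above the Nyquist rate, then reconstruct with a low-pass (here, ideal sinc) interpolator. The frequencies and rates are assumed example values.

```python
import numpy as np

# Sketch of sample-and-reconstruct: a band-limited sinusoid is sampled and then
# rebuilt with sinc (ideal low-pass) interpolation. Frequencies/rates are assumed.

f_signal, f_sample = 3.0, 10.0            # Hz; satisfies f_sample > 2 * f_signal
t_samples = np.arange(0, 2.0, 1.0 / f_sample)
samples = np.sin(2 * np.pi * f_signal * t_samples)

t_fine = np.linspace(0, 2.0, 2000)
# Whittaker-Shannon interpolation: the sum of shifted sinc kernels acts as the
# reconstruction (low-pass) filter.
recon = np.sum(samples[None, :] * np.sinc((t_fine[:, None] - t_samples[None, :]) * f_sample),
               axis=1)

error = np.max(np.abs(recon[200:-200] - np.sin(2 * np.pi * f_signal * t_fine[200:-200])))
print(f"max reconstruction error away from the ends: {error:.3f}")
```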
Systems and Testing I
High performance spectroradiometer for very accurate radiometric calibrations and testing of blackbody sources and EO test equipment
Dario Cabib, Amir Gil, R. A. Buckwald
In the late eighties CI Systems pioneered the radiometric calibration and testing of electro-optical infrared test equipment1,2 by using its advanced in-house developed infrared spectroradiometer (the SR 5000), applied to measurements of signatures of military objects and long-path atmospheric spectral transmission. Technological advances in frame rates, temperature resolution, spatial resolution, widened spectral ranges and other performance parameters of Forward Looking Infrared imaging systems (FLIRs) and other electro-optical (EO) devices require more advanced test and calibration equipment. The projected infrared radiation of such equipment must be controlled with better radiance resolution and accuracy. CI has carried out a number of optical modifications of the SR 5000, together with especially dedicated calibration algorithms, to significantly improve blackbody radiance measurements at temperatures close to room temperature, resulting in: i) a factor of 5 improvement in sensitivity, measured by Noise Equivalent Temperature Difference (NEΔT), and ii) a factor of 5 to 10 improvement in accuracy (in the 3-5 micron and 8-12 micron spectral regions, respectively). The most important modifications are the use of a higher-D* and smaller detector, a different detector alignment procedure in which the signal-to-noise ratio is traded off against field-of-view uniformity of response, and a calibration procedure based on the division of the blackbody temperature range into several independent sub-ranges. As a result, the new spectroradiometer (the SR 5000WNV) has advanced infrared spectroradiometry so that it now allows EO device manufacturers to characterize the most modern and future test equipment and ensure that it is suitable for testing the new advanced infrared imaging systems.
Radiometric dynamic scene processing for uncooled IRFPAs
Leo R. Gauthier Jr., Linda M. Howser, Daniel T. Prendergast, et al.
The widespread use of cameras based on uncooled infrared focal plane arrays (IRFPAs) is largely because of rapid commercialization, impressive miniaturization, and low per-unit cost. As performance improves, long-wave IR cameras using uncooled IRFPAs have replaced more expensive cooled units in many applications. The uncooled units generally have a much higher noise floor. However, if the signal is robust, the uncooled units can make the measurements at lower cost. New cameras with smaller pixels continue to reduce the pixel response time, enabling higher frame rates and more applications. Uncooled IRFPAs are thermal detectors, not charge-based devices, and the implicit pixel response time can greatly affect radiometric accuracy. In addition to the pixel response time, the fidelity of radiometric measurements is affected by target size, pixel fill factor, spectral response, stray light, self-heating, and other variables. If radiometric accuracy is required, it is necessary to quantify the effects of these variables. Calibration methods and measurement compensation techniques are described with emphasis on dynamic scene processing applications.
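A first-order model of the pixel thermal response, and the corresponding derivative-based compensation, can be sketched in a few lines; the time constant and frame rate used here are assumed values, not figures from the paper.

```python
import numpy as np

# Sketch: an uncooled pixel behaves roughly as a first-order low-pass with thermal
# time constant tau. The measured sequence can be partially compensated by adding
# back tau * dy/dt. Tau and frame rate below are assumed values.

tau, frame_rate = 10e-3, 60.0                 # 10 ms time constant, 60 Hz (assumed)
dt = 1.0 / frame_rate
t = np.arange(0, 0.5, dt)
scene = np.where(t > 0.1, 1.0, 0.0)           # fast radiometric step in the scene

# Discrete first-order lag (pixel response), then derivative-based compensation.
alpha = dt / (tau + dt)
measured = np.zeros_like(scene)
for i in range(1, scene.size):
    measured[i] = measured[i - 1] + alpha * (scene[i] - measured[i - 1])

compensated = measured + tau * np.gradient(measured, dt)
```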
Radiance calibration of target projectors for infrared testing
Greg Matis, Jack Grigor, Jay James, et al.
This paper provides a procedure for radiometric calibration of infrared target projectors using the RAD-9000 MWIR/LWIR spectral radiometer - a high-performance instrument supporting extremely accurate absolute and relative radiometric calibration of EO test systems. We describe the rationale for radiometric calibration, an analysis of error sources typically encountered by investigators during calibration of infrared imaging cameras when using target projectors, and a strategy for performing an absolute system end-to-end radiometric calibration with emphasis on high accuracy and ease of use.
Separation of presampling and postsampling modulation transfer functions in infrared sensor systems
Richard L. Espinola, Jeffrey T. Olson, Patrick D. O'Shea, et al.
New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated. These methods are designed to allow the separation and extraction of presampling and postsampling components from the total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques. Knowledge of these components and inclusion into sensor models, such as the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization of sensor performance.
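The separation the authors target can be written as a product of components. The sketch below builds an illustrative presample MTF (diffraction-limited optics times detector footprint) and a postsample stand-in for display/interpolation, and multiplies them; all parameter values are assumed.

```python
import numpy as np

# Illustrative MTF components: presample = optics x detector, postsample = display.
# All parameter values are assumed; this only shows how the pieces multiply.

xi = np.linspace(0, 1.0, 200)          # spatial frequency / detector sampling frequency
cutoff = 2.0                           # optics cutoff in the same normalized units (assumed)

mtf_optics = np.clip((2 / np.pi) * (np.arccos(xi / cutoff)
                     - (xi / cutoff) * np.sqrt(1 - (xi / cutoff) ** 2)), 0, None)
mtf_detector = np.abs(np.sinc(xi))     # square detector footprint, 100% fill (assumed)
mtf_display = np.abs(np.sinc(xi / 2))  # stand-in for display/interpolation response

mtf_presample = mtf_optics * mtf_detector
mtf_system = mtf_presample * mtf_display
```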
Systems and Testing II
Test techniques for high performance thermal imaging system characterization
David Forrai, Stephen Bertke, Robert Fischer, et al.
Recent requirements for modern low-noise thermal imaging systems demand higher performance and more detailed characterization of the system. The statistical uncertainty inherent to the test system can often provide misleading information about system performance. An example would be a test that eliminates pixels based on certain performance parameters such as noise or responsivity. If the test uncertainty exceeds the true variance of the parameter, the test will yield results indicative of the test system rather than the parameter. This results in good pixels being eliminated, which potentially impacts operability goals. A sign that test uncertainty dominates the test results is when operability remains nearly uniform between multiple tests while the pixels marked bad by the test change between tests. In order to minimize the uncertainty in a test, one must consider all aspects of the test system that can affect test results. Those aspects include the physical construction of the test station as well as the underlying statistics associated with the measurement. This paper will show ongoing efforts at L-3 Cincinnati Electronics to lower test uncertainty, increase test repeatability, and qualify test systems for both focal plane array and system-level electro-optics testing of thermal imagers.
Advanced manpower and time saving testing concept for development, production, and maintenance of electro-optical systems
Dario Cabib, R. A. Buckwald, Shimon Nirkin, et al.
In all stages of an electro-optics system's life, development, production, and periodic maintenance, a large amount of manpower and time is devoted to testing. Each subsystem separately as well as the system as a whole are tested by a PC controlled test system, which consists of hardware for creation of the appropriate stimuli, and software for tests management and control. A very considerable portion of this manpower and time is devoted by the system manufacturer to configure the test routines, to manually input certain parameter values of the Unit Under Test (UUT) at predefined test nodes, and to reconfigure these routines from time to time, as the needs change during the system's life time. CI has developed the CTE (CI Test Executive), a software package which is a breakthrough in saving manpower and time devoted to electro-optics system testing. The new concept is based on: 1. The CTE can communicate directly with any UUT able to communicate with the outside world through a known protocol, to automatically set the UUT parameters before testing, 2. The user can more easily reconfigure the communication with the UUT through a provided special Excel file, without the help of the test system manufacturer, 3. The interface screen is automatically reconfigured every time the Excel file is changed to build the new test routine, 4. The CTE can simulate the test system stimuli with error injection capability, and simultaneously monitor communication and other hardware functions, 5. Test "verification" signals are provided on-line for the convenience and time saving of the test operator.
First responder thermal imaging cameras: development of performance metrics and test methods
Francine Amon, Anthony Hamins
Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently, there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory at the National Institute of Standards and Technology is developing performance evaluation techniques that combine aspects of conventional metrics such as the contrast transfer function (CTF), the minimum resolvable temperature difference (MRTD), and noise equivalent temperature difference (NETD) with test methods that accommodate the special conditions in which first responders use these instruments. First responders typically use thermal imagers when their vision is obscured due to the presence of smoke, dust, fog, and/or the lack of visible light, and in cases when the ambient temperature is uncomfortably hot. Testing has shown that image contrast, as measured using a CTF calculation, suffers when a target is viewed through obscuring media. A proposed method of replacing the trained observer required for the conventional MRTD test method with a CTF calculation is presented. A performance metric that combines thermal resolution with target temperature and sensitivity mode shifts is also being investigated. Results of this work will support the establishment of standardized performance metrics and test methods for thermal imaging cameras that are meaningful to the first responders that use them.
LCD display screen performance testing for handheld thermal imaging cameras
Joshua B. Dinaburg, Francine Amon, Anthony Hamins, et al.
Handheld thermal imaging cameras are an important tool for the first responder community. As their use becomes more prevalent, it will become important for a set of standard test metrics to be available to characterize the performance of these cameras. A major factor in the performance of the imagers is the quality of the image on a display screen. An imager may employ any type of display screen, but the results of this paper will focus on those using liquid crystal displays. First responders, especially firefighters, in the field rely on the performance of this screen to relay vital information during critical situations. Current research on thermal imaging camera performance metrics for first responder applications uses trained observer tests or camera composite output signal measurements. Trained observer tests are subjective, and composite output tests do not evaluate the performance of the complete imaging system. It is the goal of this work to develop a non-destructive, objective method that tests the performance of the entire thermal imaging camera system, from the infrared sensor to the display screen. Application of existing display screen performance metrics to thermal imaging cameras requires additional consideration. Most display screen test metrics require a well defined electronic input, with either full black or white pixel input, often encompassing detailed spatial patterns and resolution. Well characterized thermal inputs must be used to obtain accurate, repeatable, and non-destructive display screen measurements for infrared cameras. For this work, a thermal target is used to correlate the measured camera output with the actual display luminance. A test method was developed to determine display screen luminance. A well characterized CCD camera and digital recording device were used to determine an electro-optical transfer function for thermal imaging cameras. This value directly relates the composite output signal to the luminance of the display screen, providing a realistic characterization of system performance.
Standard target sets for field sensor performance measurements
John D. O'Connor, Patrick O'Shea, John E. Palmer, et al.
The US Army Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division develops sensor models (FLIR 92, NV Therm, NV Therm IP) that predict the comparative performance of electro-optical sensors. The NVESD modeling branch developed a 12-vehicle, 12-aspect target signature set in 1998 with known cycle criteria; it will be referred to as the 12-target set. This 12-target set has been, and will continue to be, the modeling "gold standard" for laboratory human perception experiments supporting sensor performance modeling, and has been employed in dozens of published experiments. The 12-target set is, however, too costly for most acquisition field tests and evaluations. The authors developed an 8-vehicle, 3-aspect target set, referred to as the 8-target set, and measured its discrimination task difficulty (N50 and V50). Target identification (ID) range performance predictions for several sensors were made based on those V50/N50 values. A field collection of the 8-target set using those sensors provided imagery for a human perception study. The human perception study found excellent agreement between predicted and measured range performance. The goal of this development is to create a "silver standard" target set that is as dependable in measuring sensor performance as the "gold standard", and is affordable for Milestone A and other field trials.
Poster Session
Performances of multichannel commander's sighting system for military land vehicle application
Young Soo Choi, Hyun Sook Kim, Chang Woo Kim, et al.
The multi-channel stabilized commander's sighting system under development, which can be operated by day and night, consists of a 2nd generation LWIR thermal imager, a daylight TV camera, an eyesafe 1.54 μm Raman-shifted Nd:YAG laser rangefinder and a direct view telescope for outstanding observation and fire control capabilities. The high performance thermal imager, which uses a 480x6 HgCdTe array detector, has dual fields of view: 3x2.25° in NFOV and 10x7.5° in WFOV. The daylight TV camera, which employs a 768x494 color CCD, has 4.0 cycles/mrad resolution and the same dual FOV. For eyesafe operation, the 1.54 μm Raman-shifted Nd:YAG laser rangefinder with an InGaAs APD detector is incorporated into the direct view optics to provide range data to the commander and fire control computer with an accuracy of 10 meters. The multi-channel EO/IR sensors for day and night views are integrated into the stabilized head mirror. In this paper, the performance of the multi-channel EO/IR sensors for the commander's sight is analyzed for a tactical ground application.
Advances in thermal imaging technology in the first responder arena
Francine Amon, Nelson Bryner
Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently there are no standardized test methods or performance metrics available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory at the National Institute of Standards and Technology has been conducting research that will provide a quantifiable physical and scientific basis upon which industry standards for imaging performance, testing protocols and reporting practices can be developed. To date, the components of this project have included full-scale fire testing, development of bench-scale testing facilities, computational fluid dynamics and radiation modeling, and performance metric evaluations. A workshop was held in which participants representing thermal imager users, manufacturers, researchers, government agencies, and standards developing organizations discussed the future of thermal imaging technology as it applies to first responders. Performance metrics for thermal and spatial resolution have been explored, with emphasis on the contrast transfer function, minimum resolvable temperature difference, and noise equivalent temperature difference. These performance metrics and associated test methods are currently being applied to conditions that simulate first responder scenarios. A system of thermal classifications was adopted to define the boundaries of testing conditions. A method of relating the performance of the imager's display screen to an image captured from the imager's composite video output is proposed. New testing instrumentation, in which optically simulated thermal scenes are projected onto the imager's sensor, is also under development.
An image enhancement methodology and FPGA-based implementation combining fuzzy logic and image convolution for an infrared imaging system
Ajay Kumar, S. Sarkar, R. P. Agarwal
Modern infrared imaging systems are designed around highly sensitive infrared focal plane arrays (IRFPA), in which most of the preprocessing is done on the focal plane itself. In spite of many advances in IRFPA design, these arrays have inherent non-uniformities and instabilities, which limit their sensitivity, dynamic range and other advantages. Whenever there is little or no thermal variation in the scene, the thermal imager suffers from its inability to separate the target of interest from its background. Thus, most infrared imagery suffers from poor contrast and high noise. A methodology for contrast enhancement that combines fuzzy-based processing and spatial processing is proposed. Fuzzy-based processing improves the image where there are inaccuracies and uncertainties in the image, whereas spatial processing is used for improving the contrast and enhancing the details. This results in an overall improvement in image quality under all conditions. The algorithm has been tested on field-recorded data, and it is observed that this technique offers excellent results for thermal imagers operating in both the 3-5 μm and 8-12 μm wavelength regions.
A new nonuniformity correction algorithm for infrared line scanners
Nonuniformity correction (NUC) is a critical task for achieving higher performance in modern infrared imaging systems. The striping fixed-pattern noise produced by scanning-type infrared imaging systems is difficult to remove completely with many scene-based nonuniformity correction methods that work effectively for staring focal plane arrays (FPAs). We propose an improved nonuniformity correction algorithm that corrects the aggregate nonuniformity in two steps for infrared line scanners (IRLS). The novel contribution in our approach is the integration of a local constant statistics (LCS) constraint and neural networks. First, the nonuniformity due to the readout electronics is corrected by treating every row of pixels as one channel and normalizing the channel outputs so that each channel produces pixels with the same mean and standard deviation as the median value of the local channel statistics. Second, since in an IRLS every row is generated by push-brooming a single detector of the line sensor, we assign each detector one neuron with a weight and an offset as correction parameters, which are updated column by column recursively in the least-mean-square sense. A one-dimensional median filter is used to produce the ideal output of the linear neural network, and some optimization strategies are added to increase the robustness of the learning process. Applications to both simulated and real infrared images demonstrate that this algorithm is self-adaptive and able to complete NUC with only one frame. If the nonuniformity is not severe, the first step alone can obtain a good correction result; combining the two steps achieves a higher correction level and removes the stripe pattern noise cleanly.
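A strongly simplified sketch of the two-step idea is given below: first normalize each row (channel) to the median of local channel statistics, then run a per-detector LMS update of gain and offset against a median-filtered reference. The window sizes, learning rate, and reference choice are assumptions rather than the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

# Strongly simplified sketch of a two-step line-scanner NUC. Window sizes, learning
# rate, and the median-filtered reference are assumptions, not the paper's values.

def step1_channel_normalize(img, window=9):
    """Match each row's mean/std to the median of its neighboring rows' statistics."""
    means, stds = img.mean(axis=1), img.std(axis=1) + 1e-6
    ref_mean = median_filter(means, size=window)
    ref_std = median_filter(stds, size=window)
    return (img - means[:, None]) / stds[:, None] * ref_std[:, None] + ref_mean[:, None]

def step2_lms(img, lr=1e-3):
    """Per-detector (per-row) gain/offset updated column by column, LMS-style."""
    gain, offset = np.ones(img.shape[0]), np.zeros(img.shape[0])
    out = np.empty_like(img, dtype=float)
    for j in range(img.shape[1]):
        col = img[:, j]
        corrected = gain * col + offset
        target = median_filter(corrected, size=5)   # desired (smooth) output
        err = target - corrected
        gain += lr * err * col
        offset += lr * err
        out[:, j] = corrected
    return out

img = np.random.rand(240, 320) * 100 + np.arange(240)[:, None]  # striped synthetic frame
cleaned = step2_lms(step1_channel_normalize(img))
```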