Proceedings Volume 5784

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVI


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 12 May 2005
Contents: 6 Sessions, 35 Papers, 0 Presentations
Conference: Defense and Security 2005
Volume Number: 5784

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

Sessions:
  • System Modeling I
  • System Modeling II
  • System Simulation
  • Search
  • Test Hardware
  • Systems and System Test
System Modeling I
Handheld threat object identification performance of 2D visible imagery versus 3D visible imagery
Keith Krapels, Ronald G. Driggers, Brian Teaney, et al.
The objective of this research was to determine whether human observer performance in identifying potential weapons or threat objects improves when imagery is presented in three dimensions rather than two, and to quantify any such improvement as a change in the N50 cycle criterion for this task and target set. The advent of affordable, practical, real-time 3-D displays has led to a desire to evaluate and quantify the performance trade space for this potential application of the technology. The imagery was collected using a dual-camera stereo imaging system. Imagery at eight different resolutions was presented to observers in both two- and three-dimensional formats. The target set consisted of twelve handheld objects, a mix of potential threats or weapons and possible confusers; two such objects, for example, are a cellular telephone and a hand grenade. This was the same target set used in previously reported research that determined the N50 requirements for handheld objects for both visible and infrared imagers.
LWIR and MWIR fusion algorithm comparison using image metrics
This study determines the effectiveness of a number of image fusion algorithms through the use of the following image metrics: mutual information, fusion quality index, weighted fusion quality index, edge-dependent fusion quality index, and the Mannos-Sakrison filter. The results provide objective comparisons between the algorithms. It is postulated that multi-spectral sensors enhance the probability of target discrimination through the additional information available from the multiple bands. The results indicate that more information is present in the fused image than in either single-band image, and the image quality metrics quantify the benefits of fusing MWIR and LWIR imagery.
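As a rough illustration of the first metric above, here is a minimal histogram-based mutual information sketch between two co-registered band images; the function name, bin count, and NumPy implementation are illustrative assumptions, not the study's code.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
    """Estimate I(A;B) in bits from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint probability table
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
    nz = pxy > 0                             # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

In a fusion comparison of this kind, such a measure would typically be evaluated between the fused image and each input band (MWIR, LWIR) and the two values summed, so that a higher total indicates more source information retained in the fused image.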
Dual-band sensor fusion for urban target acquisition
Different sensor systems are optimized for, and capable of addressing issues in, different spectral regions, and each sensor has its own advantages and disadvantages. The research presented in this paper focuses on the fusion of the MWIR (3-5 μm) and LWIR (8-12 μm) bands on one IR focal plane array (FPA). The information is processed and then displayed in a single image in an effort to analyze possible benefits of combining the two bands. The analysis addresses how the two bands differ by revealing the dominant band, in terms of temperature value, for different objects in a given scene, specifically the urban environment.
Identification in static luminance and color noise
Piet Bijl, Marcel P. Lucassen, Jolanda Roelofsen
If images from multiple sources (e.g. from the different bands of a multi-band sensor) are displayed in color, signal and noise may appear as luminance and color differences in the image. As a consequence, the perception of color differences may be important for target acquisition performance with fused imagery. Luminance and color can be represented in a 3-D space; in the CIE 1994 color difference model, the three perceptual directions are lightness (L*), chroma (C*) and hue (h*). In this 3-D color space, we performed two perception experiments. In Experiment 1, we measured human observer detection thresholds (JNDs) for uniformly distributed static noise (fixed-pattern noise) in L*, C* or h* on a uniform background. The results show that the JND for noise in L* is significantly lower than for noise in C* or h*. In Experiment 2, we measured the threshold contrast for identification (orientation discrimination) of a U-shaped test target on a noisy background. With test symbol and background noise both in L*, the ratio between signal threshold and noise level is constant; with the symbol modulated in a different color direction than the noise, we found little dependence on noise level. The results may be used to optimize the use of color for human detection and identification performance with multi-band systems.
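For reference, the CIE 1994 color difference underlying the L*, C*, h* decomposition above has the standard form (shown with its usual chroma-dependent weights; the parametric factors k_L, k_C, k_H are application-dependent choices):

```latex
\Delta E^{*}_{94} =
\sqrt{\left(\frac{\Delta L^{*}}{k_L S_L}\right)^{2}
    + \left(\frac{\Delta C^{*}}{k_C S_C}\right)^{2}
    + \left(\frac{\Delta H^{*}}{k_H S_H}\right)^{2}},
\qquad S_L = 1,\quad S_C = 1 + 0.045\,C^{*},\quad S_H = 1 + 0.015\,C^{*}
```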
Probability of identification comparison for targets in the visible, illuminated shortwave infrared, and longwave infrared spectra
John D. O'Connor, Ted Corbin, David Tomkinson
This research describes a comparison of target identification performance between targets in the longwave infrared, illuminated shortwave infrared and visible spectral bands. Increasing levels of Gaussian blur were applied to eight varying aspects of twelve targets in the longwave infrared, illuminated shortwave infrared and visible spectra. A double-blind experiment was conducted with the first group of observers trained to identify all the targets using longwave infrared imagery and the second group trained to identify all the targets using visible imagery. Results of the first group's visible identification scores and the second group's longwave identification scores were compared to their results for illuminated shortwave infrared identification scores. In both cases, the illuminated shortwave infrared identification scores fell below the untrained visible or longwave infrared counterpart.
Urban vehicle cycle criteria for identification
Nicole Devitt, Jonathan G. Hixson, Steve Moyer, et al.
In the urban operations (UO) environment, it may be necessary to identify various vehicles that can be referred to as non-traditional vehicles: a police vehicle might require a different response than a civilian vehicle or a tactical vehicle. This research reports the 50% probability of identification cycle criteria (N50s and V50s) measured for a different vehicle set than previously researched at NVESD. Longwave infrared (LWIR) and midwave infrared (MWIR) imagery of twelve vehicles at twelve different aspects was collected. The confusion set includes an ambulance, a police sedan, an HMMWV, and a pickup truck, representing vehicles commonly found in urban environments. The images were blurred to reduce the number of resolvable cycles, and human perception experiments were used to measure the cycle criteria. These results will allow the modeling of sensor performance in the urban terrain for infrared imagers.
Resolvable cycle criteria for identifying personnel based on clothing and armament variations
In the urban environment, it may be necessary to identify personnel based on their type of dress. Observing a police officer or soldier might require a different response than observing an armed civilian. This paper reports the number of resolvable cycles required to identify different personnel based upon variations in their clothing and armament. Longwave infrared (LWIR) and midwave infrared (MWIR) images of twelve people at twelve aspects were collected. These images were blurred, and 11 human observers performed a 12-alternative forced-choice visual identification experiment. The results of the human perception experiments were used to measure the required number of resolvable cycles for identifying these personnel. These results are used in modeling sensor performance tasks and improving war-game simulations oriented to the urban environment.
New methodology for predicting minimum resolvable temperature
Richard Vollmerhausen, Van Hodgkin
The most common form of system performance check for thermal imagers is Minimum Resolvable Temperature (MRT). Viewing 4-bar patterns of various sizes, one at a time, generates an MRT plot. For each size of bar pattern, the MRT is the minimum temperature between bar and space that makes the pattern visible. Small MRT when viewing a large bar pattern indicates good system sensitivity, and small MRT when viewing a small bar pattern indicates good system resolution. Two problems make laboratory MRT difficult to predict. First, because MRT is supposed to represent the best achievable sensor performance, the operator is encouraged to change sensor gain and level for each bar pattern size. This means that the imager is not in a single gain state throughout the MRT measurement. Second, aliasing makes the MRT for sampled imagers difficult to predict. This paper describes a new model for predicting laboratory MRT. The model accounts for variation of the sensor gain during measurement. Also, the model includes the visual bandpass properties of human vision, permitting sampled imager MRT to be accurately predicted. These model changes result in MRT predictions significantly different from previous models. Model results are compared to laboratory measurements.
Range performance modeling for staring focal plane array infrared detectors
The generally accepted models for imaging and range performance modeling of thermal imagers have not been able to properly model under-sampled systems, i.e. staring focal plane arrays (FPAs). The governing STANAGs 4349 and 4350 on measurement and modeling of Minimum Resolvable Temperature Difference (MRTD) by definition deal only with properly sampled systems and thus cannot address performance beyond the Nyquist frequency; this includes the FLIR92 model, which is based on the models defined in STANAG 4350. Range performance modeling, defined through STANAG 4347, is based on MRTD and thus likewise limits performance to below Nyquist frequencies. Practical experience has long shown that this limitation is not valid, and new modeling techniques have been developed to address the problem, e.g. in Germany with the TRM3 model and in the US with the NVTherm model. TRM3 addresses under-sampled systems by introducing the concept of Minimum Temperature Difference Perceived (MTDP), which replaces MRTD for frequencies beyond Nyquist. NVTherm instead introduces a modified MRTD function through the concept of MTF squeeze. Typically, range performance predictions from NVTherm exceed Nyquist-resolution-based predictions by some 15%, and TRM3-based predictions exceed Nyquist ranges by up to 30%. A study is presented comparing modeling results from these two models with laboratory MRTD measurements on QWIP long-wave staring-FPA thermal imagers, and finally relating these to empirical data from range performance field trials against actual targets.
System Modeling II
A mechanism for the management and optimization of imaging systems with non-uniform imaging quality
Thomas A. Sanderson, Paul Sprague, Steven L. Smith, et al.
When imaging data is collected using airborne remote sensing systems, it is common that the image quality (IQ) of the collected data is not uniform over the entire region of collection. This non-uniformity of IQ is often a limiting factor in the utility of the collected data, so it would be useful to have a mechanism to predict, assess, and manage it both before and after data collection. A mechanism is proposed to model spatially and temporally varying IQ aspects of an imaging collection as a matrix across the region of collection. Within this framework an image quality metric, such as a NIIRS-based IQE or other IQ predictor, is applied to the matrix of parameters, sampling IQ such that a 'map' or 'picture' of image quality is created. This provides specific knowledge of IQ performance at particular locations in an image, allowing better resource management when multiple targets with separate collection requirements are collected in the same imaging event. Applications to mission planning and optimization of system resources under contingency operations, such as when a system must operate in a degraded state, are also discussed.
The meaning of super-resolution
Ronald Driggers, Keith Krapels, Susan Young
Regarding the terminology "super-resolution", there is frequent confusion about the meaning of the word. In fact, some say that the term has been "hijacked", or stolen from its original meaning, and is being applied improperly to a newer area of work. The earlier work involved the estimation of spatial information beyond the MTF band-limit of an imaging system (typically the diffraction limit). The newer work involves the use of successive, multiple frames from an undersampled imager to collectively construct a higher-resolution image. The former area has to do with diffraction blur and the latter with sampling. In this short paper, we describe the nomenclature confusion and the two research areas, present a nomenclature solution proposed by IEEE, and then provide some comments and conclusions.
Superresolution reconstruction and its impact on sensor performance
Jae H. Cha, Eddie Jacobs
Superresolution reconstruction algorithms are increasingly being proposed as enhancements for low-resolution electro-optical and thermal sensors. These algorithms exploit either random or programmed motion of the sensor, along with some form of estimation, to provide a higher-density sampling of the scene. In this paper, we investigate the impact of superresolution processing on observer performance. We perform a detailed analysis of the quality of reconstructed images under a variety of scene conditions and algorithm parameters with respect to human performance of a well-defined task: target identification of military vehicles. Imagery having synthetic motion is used with the algorithm to produce a series of static images, which were then used in a human perception study of target identification performance. Model predictions were compared with task performance, and the implications of these results for improving models that predict sensor performance with superresolution are discussed.
Super-resolution image reconstruction from a sequence of aliased imagery
S. Susan Young, Ronald G. Driggers
This paper presents a super-resolution image reconstruction method for a sequence of aliased imagery. The sub-pixel shifts (displacements) among the images are unknown due to uncontrolled natural jitter of the imager. A correlation method is utilized to estimate the sub-pixel shift of each low-resolution aliased image with respect to a reference image. An error-energy reduction algorithm is derived to reconstruct the high-resolution, alias-free output image. The main feature of this error-energy reduction algorithm is that the spatial samples from the low-resolution images, which possess unknown and irregular (uncontrolled) sub-pixel shifts, are treated as a set of constraints to populate an over-sampled processing array (sampled above the desired output bandwidth). The estimated sub-pixel locations of these samples and their values constitute a spatial domain constraint, while the bandwidth of the alias-free image (or the sensor-imposed bandwidth) serves as a spatial frequency domain constraint on the over-sampled processing array. Results of testing the algorithm on simulated low-resolution aliased images derived from real-world non-aliased FLIR (Forward-Looking Infrared) images, on real-world aliased FLIR images, and on aliased visible images are provided.
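A loose sketch of the two stages just described, under simplifying assumptions (pure global translation between frames, nearest-grid-point sample placement, and an output bandwidth taken to be half the oversampled Nyquist); all function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate (dy, dx) of img relative to ref by phase correlation,
    refined with 1-D parabolic interpolation around the correlation peak."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    c = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    h, w = c.shape
    py, px = np.unravel_index(np.argmax(c), c.shape)
    def refine(cm, c0, cp):                      # sub-pixel peak offset in 1-D
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d
    dy = py + refine(c[(py - 1) % h, px], c[py, px], c[(py + 1) % h, px])
    dx = px + refine(c[py, (px - 1) % w], c[py, px], c[py, (px + 1) % w])
    return (dy - h if dy > h / 2 else dy), (dx - w if dx > w / 2 else dx)

def error_energy_reconstruct(frames, shifts, L=4, iters=50):
    """Place each frame's samples on an L-times oversampled grid at its
    estimated sub-pixel location (spatial constraint), then alternate with
    a band-limit in the frequency domain (frequency constraint)."""
    h, w = frames[0].shape
    H, W = h * L, w * L
    grid = np.zeros((H, W)); known = np.zeros((H, W), dtype=bool)
    for f, (dy, dx) in zip(frames, shifts):
        ys = np.round(np.arange(h) * L + dy * L).astype(int) % H
        xs = np.round(np.arange(w) * L + dx * L).astype(int) % W
        grid[np.ix_(ys, xs)] = f; known[np.ix_(ys, xs)] = True
    fy = np.fft.fftfreq(H)[:, None]; fx = np.fft.fftfreq(W)[None, :]
    mask = (np.abs(fy) <= 0.25) & (np.abs(fx) <= 0.25)   # assumed output band
    est = grid.copy()
    for _ in range(iters):
        est = np.real(np.fft.ifft2(np.fft.fft2(est) * mask))  # band-limit
        est[known] = grid[known]                 # re-impose known samples
    return est
```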
NVThermIP modeling of super-resolution algorithms
Eddie Jacobs, Ronald G. Driggers, Susan Young, et al.
Undersampled imager performance enhancement has been demonstrated using super-resolution reconstruction techniques. In these techniques, the optical flow of the scene or the relative sub-pixel shift between frames is calculated and a high-resolution grid is populated with spatial data based on scene motion. Increases in performance have been demonstrated for observers viewing static images obtained from super-resolving a sequence of frames in a dynamic scene and for dynamic framing sensors. In this paper, we provide explicit guidance on how to model super-resolution reconstruction algorithms within existing thermal analysis models such as NVThermIP. The guidance in this paper will be restricted to static target/background scenarios. Background is given on the interaction of sensitivity and resolution in the context of a super-resolution process and how to relate these characteristics to parameters within the model. We then show results from representative algorithms modeled with NVThermIP. General guidelines for analyzing the effects of super-resolution in models are then presented.
Multispectral imager modeling
This paper describes the modeling of multispectral infrared sensors. The current NVESD infrared sensor model, NVTherm, models single-spectral-band sensors; it is being updated to model third-generation multispectral infrared sensors. A simple model for the target and its background radiance is presented here, and typical results are reported for common materials. The proposed target radiance model supports band selection studies. Spectral atmospheric propagation modeling is accomplished using MODTRAN. Example radiance calculations are presented and compared to data collected for validation; the data support rejecting the null hypothesis that the model is invalid.
Increasing the depth of field in an LWIR system for improved object identification
Kenneth S. Kubala, Hans B. Wach, Vladislav V. Chumachenko, et al.
In a long wave infrared (LWIR) system there is a need to capture the maximum amount of information about objects over a broad volume for identification and classification by a human or machine observer. In a traditional imaging system the optics limit the capture of this information to a narrow object volume. This limitation can hinder the observer's ability to navigate and/or identify friend or foe in combat or civilian operations; giving the observer a larger volume of clear imagery can drastically improve their ability to perform. The system presented allows the efficient capture of object information over a broad volume and is enabled by a technology called Wavefront Coding. A Wavefront Coded system employs the joint optimization of the optics, detection, and signal processing. Through a specialized design of the system's optical phase, the system becomes invariant to the aberrations that traditionally limit the effective volume of clear imagery. In the process of becoming invariant, the specialized phase creates a uniform blur across the detected image; signal processing is applied to remove the blur, resulting in a high-quality image. A device-specific noise model is presented that was developed for the optimization and accurate simulation of the system. Additionally, still images taken from a video feed from the as-built system are shown, allowing side-by-side comparison of a Wavefront Coded and a traditional imaging system.
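Illustrative only: once a Wavefront Coded system's blur is field-invariant and known, a single linear filter can restore the image. A plain Wiener filter is used here as a stand-in for the paper's (unspecified) signal processing; the PSF and noise-to-signal ratio are assumptions.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Deconvolve a spatially uniform, known PSF; nsr is an assumed
    noise-to-signal power ratio that regularizes the inverse filter.
    psf is assumed centered and the same shape as the blurred image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```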
System Simulation
NV-THERM based sensor effects for imaging simulations
The Night Vision and Electronic Sensors Directorate Electro-Optics Simulation Toolkit (NVEOST), follow-on to Paint-the-Night, produces real-time simulation of IR scenes and sequences using modeled backgrounds and targets with physics- and empirically-based IR signatures. Range-dependent atmospheric effects are incorporated, realistically degrading the infrared scene impinging on an infrared imaging device. The current sensor-effects implementation for Paint the Night (PTN) and the Night Vision Image Generator (NVIG) is a three-step process. First, the scene energy is attenuated by the sensor optics. Second, a prefilter kernel, developed off-line, is applied to scenes or frames to effect the sensor modulation transfer function (MTF) "blurring" of scene elements. Third, sensor noise is overlaid on scenes, or more often on frames of scenes. NVESD is improving the PTN functionality, now entitled NVEOST, in several ways. In the near future, a sensor-effects tool will directly read an NVTHERM input data file, extract the data it can utilize, and then automatically generate the sensor "world view" of an NVEOST scenario. These will include the elements currently employed: optical transmission, parameters used to calculate the prefilter MTF (telescope, detector geometry), and temporal-spatial random noise (σ_TVH). Important improvements will include treatment of sampling effects (under-sampling and super-resolution), certain significant postfilters (signal processing including boost and frame integration), and spatial noise. The sensor-effects implementation will require minimal interaction; only a well-developed NVTHERM input parameter set will be required. These developments will enhance NVEOST's utility not only as a virtual simulator but also as a formidable sensor design tool.
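A minimal sketch of that three-step pipeline, with assumed stand-ins for each stage (a scalar optical transmission, a Gaussian kernel for the prefilter MTF, and additive Gaussian noise for σ_TVH); this is not NVEOST code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_sensor_effects(scene, transmission=0.85, blur_sigma=1.2,
                         sigma_tvh=0.5, rng=None):
    rng = rng or np.random.default_rng()
    frame = transmission * scene                   # 1) optical attenuation
    frame = gaussian_filter(frame, blur_sigma)     # 2) prefilter MTF "blur"
    return frame + rng.normal(0.0, sigma_tvh, frame.shape)  # 3) noise overlay
```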
Status of NVESD real time imaging sensor simulation capability
For more than a decade, US Army CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been developing real-time imaging infrared sensor simulations to aid the Army in: sensor design, prototyping, and analysis; search and target acquisition (STA) model development; and evaluation of tactics, techniques, and procedures. In recent years, the sensor simulation program has undergone significant programmatic and technical changes while still seeking to deliver simulation tools to the Army. This paper provides an update on the current state of NVESD simulation capabilities, the technical vision, and the simulation validation efforts. Topics discussed include the transition of software to the PC platform, an explanation of the principal software products, and a look at future validation experiments and software enhancements.
Data modeling enabled real time image processing for target discrimination
Holger M. Jaenisch, James W. Handley, Marvin P. Carroll, et al.
UMV sensors currently under development for Future Combat Systems (FCS) require imaging capabilities, but system firmware limitations constrain onboard image processing. Data Modeling mitigates these limitations through robust image segmentation and image enhancement using simple equations. To illustrate, we present a novel real-time seeker imaging simulation composed of empirically derived Data Models for all aspects of the simulation, including FPA uniformity, shot noise, target geometry and dynamics, as well as fast real-time image segmentation and image enhancement. We demonstrate image enhancement by converting non-linear image processing routines such as Van Cittert deconvolution and Sobel edge detection into a single-pass equation with no intermediate storage requirements.
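For reference, here is the standard iterative form of Van Cittert deconvolution that the abstract's single-pass equation replaces; the PSF and relaxation factor are assumptions, shown only to make the named technique concrete.

```python
import numpy as np
from scipy.ndimage import convolve

def van_cittert(observed, psf, beta=0.5, iters=20):
    """Classic relaxation scheme: f_{k+1} = f_k + beta * (g - h * f_k),
    where g is the observed image and h* denotes convolution with the PSF."""
    estimate = observed.astype(float).copy()
    for _ in range(iters):
        residual = observed - convolve(estimate, psf, mode="nearest")
        estimate += beta * residual
    return estimate
```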
IRISIM: infrared imaging simulator
Rami Guissin, Eitan Lavi, Alex Palatnik, et al.
IRISIM is an imaging and video simulation program that models and simulates the entire imaging process of broadband and multispectral infrared imaging systems. IRISIM receives mono- and multi-spectral, high-resolution flux images of infrared scenes, processes the imagery according to the desired scenario, and generates the resultant imagery (still image or sequence) as it would appear on the operator's display or as input to an image processing module. The physical models used in IRISIM are based on analytical and empirical models of the imaging process and are implemented in several main modules, including imager characteristics (e.g. optics, scanning, detector, dewar, electronics, display), imager-to-scene geometry, line-of-sight vibrations, and environmental conditions. This paper provides an overview of IRISIM and presents preliminary results of a validation procedure which compares MRTD observer tests using IRISIM simulations to respective lab measurements of actual imagers and to MRTD predictions calculated by TRM3. The results of the validation process indicate a close fit between the compared data sets. Furthermore, integration of IRISIM and TRM3 is currently being considered as a future platform for IR system performance evaluation.
Search
Search and detection modeling of military imaging systems
Tana Maurer, Ronald G. Driggers, David L. Wilson
This paper provides an overview of research in search and detection modeling of military imaging systems. For more than forty-five years the US Army Night Vision and Electronic Sensors Directorate (NVESD) and others have been working to model the performance of infrared imagers in an effort to link imaging system design parameters to observer-sensor performance in the field. The widely used ACQUIRE model accomplished this by linking the minimum resolvable contrast of the sensor to field performance. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, search has a wide and varied history. This paper will first describe the search-modeling task and then give a description of various topics in search and detection over the years, including classic search, clutter, computational vision models, and the ACQUIRE model with its variants. It is hoped that this overview will provide novice and experienced search modelers alike with a useful summary and a glance at current issues and future challenges.
Time limited field of regard search
Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.
Search times and probability of detection in time-limited search
When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. Developing the relationship between detection probability and time of search as a differential equation is explored. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search; a simple mathematical relationship between these two mean times is derived.
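As one hedged illustration of such a relationship (assuming, purely for this sketch, exponentially distributed detection times with time-unlimited mean τ), the mean over detections that occur within a search limit T is the truncated-exponential mean:

```latex
E\!\left[\,t \mid t \le T\,\right]
  = \frac{\int_{0}^{T} t \,\tfrac{1}{\tau}\, e^{-t/\tau}\, dt}{1 - e^{-T/\tau}}
  = \frac{\tau - (\tau + T)\, e^{-T/\tau}}{1 - e^{-T/\tau}} \;<\; \tau
```

This is always shorter than τ and recovers τ as T → ∞, consistent with the qualitative statement in the abstract; the paper's actual derivation may of course differ.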
The effect of targets in defilade on the search task
Perception experiments were conducted at Night Vision and Electronic Sensors Directorate (NVESD) to investigate the effect of targets in defilade on the search task. Vehicles were placed in a simulated terrain and were either fully exposed, partially exposed, or placed in hull defilade. These images, along with a number of no-target images, were presented in a time-limited search perception experiment using military observers. The results were analyzed and compared with ACQUIRE predictions to determine if there are factors, other than size, affecting the search task when targets are in defilade.
On the relationship between human search strategies, conspicuity, and search performance
We determined the relationship between search performance with a limited field of view (FOV) and several scanning and scene parameters in human observer experiments. The observers (38 trained army scouts) searched through a large search sector for a target (a camouflaged person) on a heath. From trial to trial the target appeared at a different location. With a joystick the observers scanned through a panoramic image (displayed on a PC monitor) while the scan path was registered. Four conditions were run, differing in sensor type (visual or thermal infrared) and window size (large or small); in conditions with a small window size the zoom option could be used. Detection performance was highly dependent on zoom factor and deteriorated when scan speed increased beyond a threshold value. Moreover, the distribution of scan speeds scales with the threshold speed. This indicates that the observers are aware of their limitations and choose a (near) optimal search strategy. We found no correlation between the fraction of detected targets and overall search time for the individual observers, indicating that both are independent measures of individual search performance. Search performance (fraction detected, total search time, time in view for detection) was found to be strongly related to target conspicuity. Moreover, we found the same relationship between search performance and conspicuity for visual and thermal targets. This indicates that search performance can be predicted directly by conspicuity regardless of the sensor type.
Test Hardware
Advanced target projector technologies for characterization of staring-array based EO sensors
Alan Irwin, Steve McHugh, Jack Grigor, et al.
This paper describes recent developments in the area of target projection technologies for measurement of staring IR sensor image quality. In addition to the latest reflective target techniques, we describe a novel Variable Slit Target (VSTa) device, which allows extremely precise slit, edge, and rectangular features to be generated at the focus of a reflective target projection system, and stepped across a UUT’s FOV with a high degree of sub-pixel resolution. We also present the Collimator Line of Sight Alignment Techniques (CLOSAT) as a means of both precisely aligning the target projector and adding dynamic capability to static targets. The discussion includes a review of the applicability of VSTa and CLOSAT to current and emerging UUTs incorporating advanced staring focal plane technologies.
Advanced man-portable test systems for characterization of UUTs with laser range finder/designator capabilities
Paul Bryant, Brian Rich, Jack Grigor, et al.
This paper presents the latest developments in instrumentation for military laser range-finder/designator (LRF/D) test and evaluation. SBIR has completed development of two new laser test modules designed to support a wide range of laser measurements including range accuracy and receiver sensitivity, pulse energy and temporal characteristics, beam spatial/angular characteristics, and VIS/IR to laser co-boresighting. The new Laser Energy Module (LEM) provides automated, variable attenuation of UUT laser energy, and performs measurement of beam amplitude and temporal characteristics. The new Laser/Boresight Module (LBM) supports range simulation and receiver sensitivity measurement, performs UUT laser beam analysis (divergence, satellite beams, etc.), and supports high-accuracy co-boresighting of VIS, IR, and laser UUT subsystems. The LBM includes a three-color, fiber-coupled laser source (1064, 1540, and 1570 nm), a sophisticated fiber-optic module (FOM) for output energy amplitude modulation, a 1-2 μm SWIR camera, and a variety of advanced triggering and range simulation functions.
RAD9000: a high-performance spectral radiometer for EO calibration applications
Greg Matis, Paul Bryant, Jack Grigor, et al.
This paper provides an update on the RAD9000 MWIR/LWIR spectral radiometer: a high-performance instrument supporting extremely accurate absolute and relative radiometric calibration of EO test systems. The system features an all-reflective optical system, internal and external thermal reference sources, a visible camera-based sighting/alignment capability, modular MWIR and LWIR detector/filter subassemblies, flexible control/display software, and a sophisticated graphical user interface (GUI). We present prototype performance data describing the instrument's thermal sensitivity, radiometric accuracy, spectral resolution, calibration, and other key parameters.
Systems and System Test
Advanced test systems for production testing of cameras with day/night and visible/NIR capabilities
This paper presents the latest developments in instrumentation for military fixed and head-mounted camera test and evaluation. SBIR has completed development of a new variable contrast test system for evaluating camera day/night mode performance. The system utilizes an integrating sphere with variable, full field-of-regard background illumination combined with a collimator, controlled ambient background, a set of variably illuminated chrome-on-glass targets, and visible/NIR filters. The system employs precision azimuth and elevation motion stages to facilitate FOV size and uniformity evaluation. SBIR's IRWindows™ software provides a series of automated tests such as boresight, MTF, MRTD, FPN, pixel defects, spectral response and dynamic range/contrast. The system uses a second integrating sphere with a variable luminance control to measure FOV uniformity, individual pixel response, and automatic brightness control efficiency.
Test Hardware
Practical issues with 3D noise measurements and application to modern infrared sensors
The two most important characteristics of every infrared imaging system are its resolution and its sensitivity. Resolution is limited by the system's Modulation Transfer Function (MTF), which is typically measurable. System sensitivity is limited by noise, which for infrared systems is usually expressed as a Noise Equivalent Temperature Difference (NETD). However, complete characterization of system noise in modern systems requires the 3D-Noise methodology (developed at NVESD), which separates the system noise into seven orthogonal components, including both temporally varying and fixed-pattern noises. This separation of noise components is particularly relevant and important in characterizing focal plane arrays (FPAs), where fixed-pattern noise can dominate. Since fixed-pattern noise cannot be integrated out by post-processing or by the eye, it is more damaging to range performance than temporally varying noise. While the 3D-Noise methodology is straightforward, there are several important practical considerations that must be accounted for to measure 3D Noise accurately in the laboratory. This paper describes these practical considerations, the measurement procedures used in the Advanced Sensor Evaluation Facility (ASEF) at NVESD, and their application to characterizing modern and future infrared imaging systems.
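A compact sketch of that decomposition on a registered frame cube cube[t, v, h] (frames × rows × columns); the directional-averaging construction follows the usual NVESD convention, but the code itself and the RMS-of-residual evaluation are illustrative assumptions.

```python
import numpy as np

def noise_3d(cube: np.ndarray) -> dict:
    """Split a zero-mean data cube into the seven 3D-Noise components and
    return the RMS (sigma) of each; components have zero mean by construction."""
    d = cube - cube.mean()                  # remove the global mean S
    m = lambda axes: d.mean(axis=axes, keepdims=True)
    n_t  = m((1, 2))                        # frame-to-frame (temporal) term
    n_v  = m((0, 2))                        # fixed row (horizontal line) term
    n_h  = m((0, 1))                        # fixed column (vertical line) term
    n_tv = m(2) - n_t - n_v                 # temporally varying row noise
    n_th = m(1) - n_t - n_h                 # temporally varying column noise
    n_vh = m(0) - n_v - n_h                 # fixed-pattern (pixel) noise
    n_tvh = d - (n_t + n_v + n_h + n_tv + n_th + n_vh)  # random spatio-temporal
    comps = dict(t=n_t, v=n_v, h=n_h, tv=n_tv, th=n_th, vh=n_vh, tvh=n_tvh)
    return {k: float(np.sqrt((c ** 2).mean())) for k, c in comps.items()}
```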
Systems and System Test
Measurement of uncooled thermal imager noise
Uncooled staring thermal imagers have noise characteristics different from those of cooled thermal imagers (photon-detector sensors). For uncooled sensors, typical measurements of some noise components can vary by as much as 3 to 5 times the original noise value. Additionally, the detector response often drifts to the point that a non-uniformity correction is only good for a short time period. Because the noise can vary so dramatically with time, it can prove difficult to measure the noise associated with uncooled systems; however, it is critical that laboratory measurements provide repeatable and reliable characterization of uncooled thermal imagers as built. In light of these difficulties, a primary objective of this research has been to develop a satisfactory noise measurement for uncooled staring thermal imagers. In this effort, three-dimensional noise (3D Noise) data versus time was collected for several uncooled sensors after non-uniformity correction. Digital and analog noise data versus time were collected nearly simultaneously, and multiple 3D Noise versus time runs were made to allow examination of variability. Measurement techniques are being developed to provide meaningful and repeatable test procedures for characterizing uncooled systems.
Test Hardware
Uncertainty analysis of the AEDC 7V chamber
Dustin Crider, Heard Lowry, Randy Nicholson, et al.
For over 30 years, the Space Systems Test Facility and space chambers at the Arnold Engineering Development Center (AEDC) have been used to perform space sensor characterization, calibration, and mission simulation testing of space-based, interceptor, and airborne sensors. In partnership with the Missile Defense Agency (MDA), capability upgrades are continuously pursued to keep pace with evolving sensor technologies. Upgrades to sensor test facilities require rigorous facility characterization and calibration activities that are part of AEDC's annual activities to comply with Major Range Test Facility Base processes to ensure quality metrology and test data. This paper discusses the ongoing effort to characterize and quantify Aerospace Chamber 7V measurement uncertainties. The 7V Chamber is a state-of-the-art cryogenic/vacuum facility providing calibration and high-fidelity mission simulation for infrared seekers and sensors against a low-infrared background. One of its key features is the high fidelity of the radiometric calibration process. Calibration of the radiometric sources used is traceable to the National Institute of Standards and Technology and provides relative uncertainties on the order of two to three percent, based on measurement data acquired during many test periods. Three types of sources of measurement error and top-level uncertainties have been analyzed; these include radiometric calibration, target position, and spectral output. The approach used and presented is to quantify uncertainties of each component in the optical system and then build uncertainty diagrams and easily updated databases to detail the uncertainty for each optical system. The formalism, equations, and corresponding analyses are provided to help describe how the specific quantities are derived and currently used. This paper presents the uncertainty methodology used and current results.
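As a generic illustration of the component-wise approach (an assumption here, not a statement of the 7V Chamber's exact formalism), independent component uncertainties u_i are typically combined root-sum-square into a combined uncertainty, then expanded by a coverage factor:

```latex
u_c = \sqrt{\sum_i u_i^{2}}, \qquad U = k\,u_c \quad (k = 2 \text{ for roughly 95\% coverage})
```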
Systems and System Test
IR depth from stereo for autonomous navigation
John S. Zelek, Marc Holbein, Kiana Hajebi, et al.
Visual computations such as depth-from-stereo are highly dependent on edges and textures for the process of image correspondence. IR images typically lack the detail necessary for producing dense depth maps; however, sparse maps may be adequate for autonomous obstacle avoidance. We have constructed an IR stereo head for eventual UGV and UAV night-time navigation. In order to calibrate the unit, we constructed a thermal calibration checkerboard. We show that standard stereo camera calibration based on a checkerboard, developed for calibrating visible-spectrum cameras, can also be used for calibrating an IR stereo pair, with hot/cold squares taking the place of black/white squares. Once calibrated, the intrinsic and extrinsic parameters for each camera provide the absolute depth value if a left-right correspondence can be established. Given the generally texture-less character of IR imagery, selecting key salient features that are left-right stable and tractable is key to producing a sparse depth map. IR imagery, like visible imagery and range maps, is highly spatially correlated, and a dense map can be obtained from a sparse map via propagation. Preliminary results from salient IR feature detection are also presented.
Scene-based non-uniformity correction for focal plane arrays using a facet model
Matthias Voigt, Martin Zarzycki, Dennis H. LeMieux, et al.
This paper discusses scene-based estimation of non-uniformity correction (NUC) coefficients for focal-plane array sensors using spatial image neighborhood information (a facet model). Several scene-based methods for estimating NUC parameters have been proposed in the literature, but artifacts can remain in specific situations. The objective of this work is to estimate NUC coefficients using random scene images without making assumptions about motion or constant statistics. We show analytically and experimentally how to reduce fixed-pattern noise using a facet model. The method works best if out-of-focus images are available for calibration. We estimate NUC coefficients experimentally from a set of twelve scene images. The facet model approach can be an alternative for applications where artifacts would otherwise remain.
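A loose sketch of an offset-only, facet-style scene-based NUC in the spirit of the abstract; the 3x3 facet, offset-only correction, and frame-averaging scheme are assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def facet_offset_estimate(frames):
    """Estimate per-pixel fixed-pattern offsets from uncorrelated scene frames."""
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        # For a symmetric 3x3 facet, the least-squares plane evaluated at the
        # center pixel equals the neighborhood mean, so the facet-model
        # prediction error is simply f minus the local mean.
        acc += f - uniform_filter(f.astype(float), size=3)
        # Residuals due to true scene structure tend toward zero when averaged
        # over many uncorrelated frames; the per-pixel offset term does not.
    return acc / len(frames)

def correct(frame, offsets):
    """Apply the offset-only non-uniformity correction."""
    return frame - offsets
```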
Cooled IR detectors calibration analysis and optimization
Among high-performance IR detectors, cooled MWIR 2D arrays are increasingly used in high-quality systems, which require uniformity and quality of response in order to achieve good non-uniformity correction (NUC) and stable IR detector calibration. The aim of this paper is to demonstrate the state of the art of Sofradir's detectors regarding NUC. We start by presenting current performance figures (residual fixed-pattern noise; two- and three-point corrections; linearity; stability of the correction over time and with ageing; stability with focal plane temperature; fast cool-down applications; etc.). We then make a comparison with other 2D arrays' performance, using the same calculation methods as presented in the datasheets. Finally, we present a perspective on the advanced studies carried out at Sofradir to increase performance in this critical area, in terms of both technological progress and calculation methods.