Proceedings Volume 9071

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXV


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 10 June 2014
Contents: 14 Sessions, 50 Papers, 0 Presentations
Conference: SPIE Defense + Security 2014
Volume Number: 9071

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9071
  • Modeling I
  • Modeling II
  • Modeling III: Algorithms
  • Modeling IV: Night Vision Integrated Performance Model
  • Modeling V
  • Testing I
  • Testing II
  • Testing III
  • Systems
  • Targets, Backgrounds, and Atmospherics I
  • Targets, Backgrounds, and Atmospherics II
  • Technologies for Synthetic Environments: Hardware-in-the-Loop
  • Poster Session
Front Matter: Volume 9071
This PDF file contains the front matter associated with SPIE Proceedings Volume 9071, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Modeling I
Comparison of IRST systems by SNR
Charles C. Kim, Ron Meyer
Infrared (IR) cameras are widely used in systems that search and track. IR search and track (IRST) systems are most often available in one of two distinct spectral bands: mid-wave IR (MWIR) or long-wave IR (LWIR). The two bands have been compared many times, using field data and analysis under different scenarios, yet choosing one band over the other for a new scenario remains a challenge; in some respects, the attempt is like choosing between an apple and an orange. The signal-to-noise ratio (SNR) of a system for a point-like target is one criterion that helps one make an informed decision. The common formula for SNR uses noise equivalent irradiance (NEI), which depends on the front optics; such a formalism cannot compare the two bands before a camera is built complete with front optics. We derive a formula for SNR that instead uses noise equivalent differential temperature (NEDT), which does not require the front optics. The formula is further simplified under some assumptions, which identifies the critical parameters and provides insight into comparing the two bands. We show an example for a simple case.
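The NEDT-based comparison can be illustrated with a simplified point-target SNR relation. This is a hedged sketch, not the paper's derived formula: it assumes the apparent contrast of a sub-pixel target is diluted by the fraction of the pixel footprint the target fills and attenuated by band-averaged atmospheric transmission, with all names below chosen for illustration.

```python
def point_target_snr(delta_t, netd, target_area, rng_m, ifov, tau_atm):
    """Illustrative point-target SNR built on NETD rather than NEI.

    delta_t:     target-to-background temperature difference (K)
    netd:        noise equivalent differential temperature (K)
    target_area: projected target area (m^2)
    rng_m:       range to target (m)
    ifov:        instantaneous field of view (rad)
    tau_atm:     band-averaged atmospheric transmission (0..1)
    """
    footprint = (rng_m * ifov) ** 2            # pixel footprint at range (m^2)
    fill = min(1.0, target_area / footprint)   # sub-pixel fill fraction
    return tau_atm * fill * delta_t / netd
```

Because NETD is defined without front optics, a relation of this shape lets the two bands be compared from detector-level data alone.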
Field of view selection for optimal airborne imaging sensor performance
Tristan M. Goss, P. Werner Barnard, Halidun Fildis, et al.
The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever-narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever-increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence the FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter, and, collectively, the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor whose aperture diameter is limited by mechanical constraints (such as the space/volume available and the window size) by balancing the influences FOV has on sensitivity and resolution, thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long-wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.
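The competing drivers in such a trade can be sketched to first order: narrowing the FOV lengthens the focal length, which shrinks the detector-limited IFOV but raises the F-number (hurting sensitivity roughly as F²), while the fixed aperture sets a diffraction floor. A minimal sketch (the function names and the 2.44λ/D blur criterion are illustrative assumptions, not the paper's methodology):

```python
import math

def fov_tradeoff(fov_deg, n_pixels, pitch, aperture, wavelength):
    """First-order resolution/sensitivity drivers for a staring-array sensor
    with a fixed aperture diameter.

    Returns (detector-limited IFOV, diffraction blur angle, F-number), all
    angles in radians; diffraction blur is taken as 2.44*lambda/D.
    """
    fov = math.radians(fov_deg)
    focal = n_pixels * pitch / (2 * math.tan(fov / 2))  # focal length from FOV
    ifov = pitch / focal                                # detector-limited angle
    blur = 2.44 * wavelength / aperture                 # diffraction-limited angle
    f_no = focal / aperture                             # drives NETD ~ F^2
    return ifov, blur, f_no
```

Scanning `fov_deg` over candidate values shows where the sensor crosses from detector-limited to diffraction-limited, which is the crux of the trade described above.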
Thermal imager sources of non-uniformities: modeling of static and dynamic contributions during operations
B. Sozzi, M. Olivieri, P. Mariani, et al.
Due to the rapid growth of cooled-detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can theoretically be discerned in the image, provided the calibration algorithm (non-uniformity correction, NUC) accounts for and compensates every spatial noise source. To predict how robust the NUC algorithm is under all working conditions, modeling the flux impinging on the detector becomes essential for controlling and improving the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. The available literature deals only with non-uniformity (NU) caused by pixel-to-pixel differences in detector parameters and by the difference between the reflection of the cold parts of the detector and the housing at the operating temperature. These models do not explain the effects on NUC results of vignetting, dynamic sources inside and outside the FOV, or reflected contributions from hot spots inside the housing (for example, a thermal reference off the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are treated separately and represented by two independent transfer functions; 2) for every pixel of the array, the photonic signal from the different spurious sources is considered in order to evaluate the effect on residual spatial noise under dynamic operating conditions. The article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
Comparing and contrasting different broadband MTF definitions and the relationship to range performance predictions
Jonathan G. Hixson, David P. Haefner, Brian Teaney
The broadband modulation transfer function (MTF) is a spectral average of the monochromatic MTF across wavelength, weighted by the sensor's spectral sensitivity and scaled by the spectral behavior of the source. For reflective-band sensors, where there are significant variations in the spectral shape of the reflected light, this spectral averaging can result in very different MTFs and, therefore, different predicted performance. In this paper, we explore the influence of this spectral averaging on performance using NV-IPM v1.1 (Night Vision Integrated Performance Model). We report the errors in range performance when a system is characterized with one illumination and the performance is quoted for another. Our results summarize the accuracy of different assumptions about how the broadband MTF can be approximated, and show how the measurement conditions under which a system was characterized should be considered when modeling performance.
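The spectral averaging described above can be written as MTF_bb(f) = ∫S(λ)L(λ)·MTF(f,λ)dλ / ∫S(λ)L(λ)dλ, with S the sensor response and L the source spectrum. A minimal discrete sketch (uniform wavelength sampling assumed; this is not the NV-IPM implementation):

```python
import numpy as np

def broadband_mtf(mtf_mono, sensitivity, source):
    """Broadband MTF as a spectrally weighted average of monochromatic MTFs.

    mtf_mono:    (n_wavelengths, n_freqs) monochromatic MTF samples
    sensitivity: (n_wavelengths,) sensor spectral response
    source:      (n_wavelengths,) source spectral distribution
    Uniform wavelength sampling is assumed for simplicity.
    """
    w = sensitivity * source                     # spectral weighting S(l)*L(l)
    return (w[:, None] * mtf_mono).sum(axis=0) / w.sum()
```

Evaluating the same `mtf_mono` under two different `source` spectra makes the paper's point concrete: the broadband MTF, and any performance derived from it, shifts with the illumination used during characterization.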
Method for quantifying image quality in push-broom hyperspectral cameras
Gudrun Hoeye, Trond Løke, Andrei Fridman
We propose a method for measuring and quantifying image quality in push-broom hyperspectral cameras in terms of spatial misregistration—such as keystone and variations in the point-spread-function across spectral channels—and image sharpness. The method is suitable for both traditional push-broom hyperspectral cameras where keystone is corrected in hardware and cameras where keystone is corrected in post-processing, such as resampling and mixel cameras. We show how the measured camera performance can be presented graphically in an intuitive and easy-to-understand way, comprising both image sharpness and spatial misregistration in the same figure. For the misregistration we suggest that both the mean standard deviation and the maximum value for each pixel are shown. We also suggest a possible additional parameter for quantifying camera performance: probability of misregistration being larger than a given threshold. Finally, we have quantified the performance of a HySpex SWIR 384 camera prototype using the suggested method. The method appears well suited for assessing camera quality and for comparing the performance of different hyperspectral imagers, and could become the future standard for how to measure and quantify the image quality of push-broom hyperspectral cameras.
Modeling II
An analysis of the performance trade space for multifunction EO-IR systems
As pixels have gotten smaller and focal plane arrays larger, it may be practical to build EO-IR systems that are inherently multifunctional. A system intended to perform threat warning, pilotage imaging, and target acquisition imaging would be such a multifunctional system. This notional system could be panoramic or hemispheric, with cameras covering all of space simultaneously, and could save cost and weight over federated systems. However, can all of these disparate tasks be performed successfully by a single system, or will the trade-offs compromise the potential savings? Targeting sensors have typically been designed to create long-range, high-resolution imagery for detection and identification, optimized to suppress the scene/clutter and maximize the target signature. Pilotage sensors have typically been wide-field-of-view, unity-magnification systems that maximize scene contrast to enable safe flight. Threat warning sensors are intended to detect non- or under-resolved (spatially or temporally) targets/events using algorithms, and to discriminate them from clutter or solar glint. The first two applications produce imagery for human operator consumption, while the third feeds algorithms. With these disparate performance goals, there is a wide variety of competing metrics used to optimize these sensors -- F/no, FOV/IFOV, frame rate, NETD, NEI, FAR, probability of identification, etc. This study examines how these performance parameters and system descriptors trade against one another, and their relative impacts.
Observer analysis and its impact on task performance modeling
Fire fighters use relatively low-cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighters' fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance, and the top-performing group was used for the remainder of the modeling. Models were developed relating image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using both logistic and probit regression. Probit regression was found to yield superior fits, and showed that models with not only 2nd-order but also 3rd-order parameter interactions performed best.
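Per-observer sensitivity and bias of the kind described above are commonly computed with signal detection theory; a minimal sketch (the paper's exact analysis may differ):

```python
from statistics import NormalDist

def observer_d_prime(hit_rate, false_alarm_rate):
    """Per-observer sensitivity (d') and response bias (c) from hit and
    false-alarm rates, following standard signal detection theory."""
    z = NormalDist().inv_cdf            # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    bias = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, bias
```

Ranking observers by `d_prime` and grouping them is one straightforward way to produce performance tiers like the three groups used in the study.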
Lab and field measurements to evaluate a differential polarimetric IR (DPIR) search sensor
Roger Thompson Jr., Keith Krapels, Van Hodgkin, et al.
Differential polarimetry has shown the ability to enhance target signatures by reducing background signatures, thus effectively increasing the signal-to-noise ratio on target. This technique has mainly been applied to resolved, high-contrast targets. For ground-to-air search and tracking of small, slow, airborne targets, the target at range can be sub-pixel and hard to detect against the background sky. Given the unpolarized nature of the thermal emission of the background sky, it should be possible to use differential polarimetry to "filter out" the background and thus enhance the ability to detect sub-pixel targets. The first step in testing this hypothesis is to devise a set of surrogate sample targets and measure their polarimetric properties in the thermal IR in both the lab and the field. The goal of this paper is to determine whether or not it is feasible to use differential polarimetry to search for, detect, and track small airborne objects.
Modeling segmentation performance in NV-IPM
Imaging sensors produce images whose primary use is to convey information to human operators. However, their proliferation has resulted in an overload of information. As a result, computational algorithms are increasingly being implemented to simplify an operator's task or to eliminate the human operator altogether. Predicting the effect of algorithms on task performance is currently cumbersome, requiring estimates of an algorithm's effects on blurring and noise and "shoe-horning" these effects into existing models. With the increasing use of automated algorithms with imaging sensors, a fully integrated approach is desired. While specific implementations differ, general tasks can be identified that form the building blocks of a wide range of possible algorithms: segmentation of objects from the spatio-temporal background, object tracking over time, feature extraction, and transformation of features into human-usable information. In this paper, research is conducted with the purpose of developing a general performance model for segmentation algorithms based on image quality. A database of pristine imagery has been developed containing a wide variety of clearly defined regions with respect to shape, size, and inherent contrast; both synthetic and "natural" images make up the database. Each image is subjected to various amounts of blur and noise. Metrics for the accuracy of segmentation have been developed and measured for each image and segmentation algorithm. Using the computed metric values and the known values of blur and noise, a model of segmentation performance is being developed. Preliminary results are reported.
Predicted NETD performance of a polarized infrared imaging sensor
Bradley Preece, Van A. Hodgkin, Roger Thompson, et al.
Polarization filters are commonly used as a means of increasing the contrast of a scene, thereby increasing sensor range performance. The change in the signal-to-noise ratio (SNR) is a function of the polarization of the target and background, the type and orientation of the polarization filter(s), and the overall transparency of the filter. However, in the mid-wave and long-wave infrared bands (MWIR and LWIR), the noise equivalent temperature difference (NETD), which directly affects the SNR, is a function of the filter's re-emission and its reflected temperature radiance. This paper presents a model, by means of a Stokes vector input, that can be incorporated into the Night Vision Integrated Performance Model (NV-IPM) in order to predict the change in SNR, NETD, and noise equivalent irradiance (NEI) for infrared polarimetric imaging systems. The model is then used to conduct an SNR trade study, using a modeled Stokes vector input, for a notional system looking at a reference target. Future laboratory and field measurements conducted at the Night Vision and Electronic Sensors Directorate (NVESD) will be used to update, validate, and mature the model of conventional infrared systems equipped with polarization filters.
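The Stokes-vector input described above can be illustrated with a Mueller-matrix calculation for an ideal linear polarizer. This is an idealized sketch, not the NV-IPM component itself; the example Stokes vectors are made up for illustration.

```python
import numpy as np

def linear_polarizer_mueller(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1.0,   c,    s,  0.0],
        [c,   c*c,  c*s,  0.0],
        [s,   c*s,  s*s,  0.0],
        [0.0, 0.0,  0.0,  0.0],
    ])

background = np.array([1.0, 0.0, 0.0, 0.0])  # unpolarized sky, Stokes (S0,S1,S2,S3)
target = np.array([1.0, 0.3, 0.0, 0.0])      # 30% linearly polarized target

i0 = (linear_polarizer_mueller(0.0) @ target)[0]          # 0-degree channel
i90 = (linear_polarizer_mueller(np.pi / 2) @ target)[0]   # 90-degree channel
s1_recovered = i0 - i90   # differencing cancels the unpolarized component
```

Differencing the two channels recovers S1, while an unpolarized background contributes equally to both and cancels, which is the mechanism by which differential polarimetry raises target contrast.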
Modeling III: Algorithms
Technologies for low-bandwidth high-latency unmanned ground vehicle control
Teresa Pace, Ken Cogan, Lee Hunt, et al.
Automation technology has evolved at a rapid pace in recent years; however, many real-world problems require contextual understanding, problem solving, and other forms of higher-order thinking that extend beyond the capabilities of robots for the foreseeable future. This limits the complexity of automation that can be supplied to modern unmanned ground vehicles (UGVs) and necessitates human-in-the-loop monitoring and control for some portions of missions. In order for the human operator to make decisions and provide tasking during key portions of the mission, existing solutions first derive significant information from a potentially dense reconstruction of the scene utilizing LIDAR, video, and other onboard sensors. A dense reconstruction contains too much data for real-time transmission over a modern wireless data link, so the robot electronics must first condense the scene representation prior to transmission. The control station receives this condensed scene representation and provides visual information to the human operator; the human operator then provides tele-operation commands in real time to the robot. This paper discusses approaches to reducing the dense scene data that must be transmitted to a human in the loop, as well as the challenges associated with them. In addition, the complex and unstructured nature of real-world environments increases the need for tele-operation, and many environments reduce the bandwidth and increase the latency of the link. Ultimately, worsening conditions will cause the tele-operation control process to break down, rendering the robot ineffective. In a worst-case scenario, extreme conditions causing a complete loss of communications could result in mission failure and loss of the vehicle.
High-dynamic range imaging using FAST-IR imagery
One of the biggest and most challenging limitations of infrared cameras in surveillance applications is their limited dynamic range. Image blooming and other artifacts may hide important details in the scene when saturation occurs. Many different techniques, such as using multiple exposure times, have been developed in the past to help overcome these issues; however, all of these techniques have non-negligible limitations. This paper presents a new high-dynamic range algorithm called Optimized Enhanced High Dynamic Range Imaging (OEHDRI). It is based on a pixel-wise, exposure-time-independent calibration as well as pixel-based frame summing with properly interleaved integration times. The technique benefits from the use of a high frame rate camera (up to 20,000 fps). A description of the hardware is also included.
A virtual environment for modeling and testing sensemaking with multisensor information
Denise Nicholson, Kathleen Bartlett, Robert Hoppenfeld, et al.
Given today’s challenging Irregular Warfare, members of small infantry units must be able to function as highly sensitized perceivers throughout large operational areas. Improved Situation Awareness (SA) in rapidly changing fields of operation may also save the lives of law enforcement personnel and first responders. Critical competencies for these individuals include sociocultural sensemaking, the ability to assess a situation through the perception of essential salient environmental and behavioral cues, and intuitive sensemaking, which allows experts to act with the utmost agility. Intuitive sensemaking and intuitive decision making (IDM), which involve processing information at a subconscious level, have been cited as playing a critical role in saving lives and enabling mission success. This paper discusses the development of a virtual environment for modeling, analysis, and human-in-the-loop testing of perception, sensemaking, intuitive sensemaking, decision making (DM), and IDM performance, using state-of-the-art scene simulation and modeled imagery from multi-source systems, under the “Intuition and Implicit Learning” Basic Research Challenge (I2BRC) sponsored by the Office of Naval Research (ONR). We present results from our human systems engineering approach, including 1) the development of requirements and test metrics for individual and integrated system components, 2) the system architecture design, 3) images of the prototype virtual environment testing system, and 4) a discussion of the system’s current and future testing capabilities. In particular, we examine an Enhanced Interaction Suite testbed to model, test, and analyze the impact of advances in sensor spatial and temporal resolution on a user’s intuitive sensemaking and decision making capabilities.
Performance assessment of compressive sensing imaging
Compressive sensing (CS) can potentially form an image of quality equivalent to that of a large-format, megapixel array using a smaller number of individual measurements. This has the potential to provide smaller, cheaper, and lower-bandwidth imaging systems. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts, sensitivity to noise, and CS limitations. Full-resolution imagery of a target set of eight tracked vehicles at range was used as input for simulated single-pixel CS camera measurements. The CS algorithm then reconstructs images from the simulated single-pixel CS camera for various levels of compression and noise. For comparison, a traditional camera was also simulated, setting the number of pixels equal to the number of CS measurements in each case. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of compressive sensing modeling are discussed.
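The single-pixel measurement process described above can be sketched as y = Φx, where each row of Φ is a binary projection pattern and each entry of y is one bucket-detector reading. The sketch below is illustrative only: the scene, sizes, and patterns are made up, and the naive minimum-norm reconstruction merely stands in for the sparsity-promoting solver a real CS camera would use.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64          # scene pixels (flattened image)
m = 32          # single-pixel measurements (2x compression)

x = np.zeros(n)
x[[5, 20, 41]] = [1.0, 0.7, 0.4]     # sparse synthetic scene

Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # binary DMD-style patterns
y = Phi @ x                          # simulated bucket-detector measurements

# Naive minimum-norm reconstruction as a stand-in; real CS recovery would use
# a sparsity-promoting (l1 / total-variation) solver instead.
x_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

Adding noise to `y` and varying `m` reproduces, in miniature, the compression-versus-noise trade space the paper explores with perception experiments.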
Low-cost computational imaging infrared sensor
Kyle Bryant, W. Derrick Edwards, Ryan K. Rogers
Computational imaging techniques can be used to extend the depth of field of imaging sensors such that the sensors become less expensive to build and athermalize, with no loss of performance. Optical phase can be manipulated to create an image that is optimized for a detection and tracking algorithm, and that can also be reconstructed digitally to form an image suitable for viewing. A typical low-cost sensor used for target detection and tracking may run an algorithm that requires different features and resolution from its imagery than would a system optimized for a human. This offers a unique opportunity to optimize both the optics and the image processing for a system that maximizes mission performance while minimizing production cost. Simple computational techniques have not yet been successful in passive, low-signal environments due to noise issues. This study examines the use of a simple computational technique in an algorithmic application in which optimal reconstruction may occur with lower noise. This paper describes the model, simulation, and prototype which resulted from a detailed and novel system design and modeling process. The goal of this effort is to accurately model the anticipated performance and to prove actual cost savings of a tracking sensor that employs computational imaging techniques.
Modeling IV: Night Vision Integrated Performance Model
Color camera measurement and modeling for use in the Night Vision Integrated Performance Model (NV-IPM)
The necessity of color balancing in day color cameras complicates both laboratory measurements as well as modeling for task performance prediction. In this proceeding, we discuss how the raw camera performance can be measured and characterized. We further demonstrate how these measurements can be modeled in the Night Vision Integrated Performance Model (NV-IPM) and how the modeled results can be applied to additional experimental conditions beyond those used during characterization. We also present the theoretical framework behind the color camera component in NV-IPM, where an effective monochromatic imaging system is created from applying a color correction to the raw color camera and generating the color corrected grayscale image. The modeled performance shows excellent agreement with measurements for both monochromatic and colored scenes. The NV-IPM components developed for this work are available in NV-IPM v1.2.
Imaging system sensitivity analysis with NV-IPM
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well-designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design; NV-IPM adds the ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system, while individual components can be optimized through sensitivity analysis of outputs such as NETD or SNR. This capability is demonstrated by analyzing example imaging systems.
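The automated sensitivity analysis described above amounts to estimating normalized derivatives of a model output with respect to each input parameter. A minimal finite-difference sketch with a toy figure of merit (this is illustrative only, not the NV-IPM implementation):

```python
def sensitivity(model, params, name, rel_step=0.01):
    """Normalized (log-log) sensitivity of a scalar model output to one
    parameter, estimated with a central finite difference."""
    h = params[name] * rel_step
    hi, lo = dict(params), dict(params)
    hi[name] += h
    lo[name] -= h
    dy_dx = (model(hi) - model(lo)) / (2.0 * h)
    return dy_dx * params[name] / model(params)

# Toy stand-in for a system figure of merit: SNR ~ signal / (NETD * F^2)
snr = lambda p: p["signal"] / (p["netd"] * p["f_no"] ** 2)
params = {"signal": 10.0, "netd": 0.05, "f_no": 2.0}
print(sensitivity(snr, params, "f_no"))   # ~ -2: SNR falls as F^2 grows
```

A normalized sensitivity near ±1 or ±2 immediately flags the parameters that limit an output, which is the use case described for probability of identification, NETD, and SNR.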
Modeling laser radar systems in the Night Vision Integrated Performance Model (NV-IPM)
Active imaging systems are currently being developed to increase the target acquisition and identification range performance of electro-optical systems. This paper reports on current efforts to extend the Night Vision Integrated Performance Model (NV-IPM) to include laser radar (LADAR) systems for unresolved targets. Combining this new LADAR modeling capability with existing sensor and environment capabilities already present in NV-IPM will enable modeling and trade studies for military relevant systems.
Modeling V
Modeling static and dynamic detection of humans in rural terrain
Historically, the focus of detection experiments and modeling at the Night Vision and Electronic Sensors Directorate (NVESD) has been on detecting military vehicles in rural terrain. A gap remains in understanding the detection of human targets in rural terrain and how it might differ from the detection of vehicles. There are also improvements to be made in how the effect of human movement on detectability is quantified. Two experiments were developed to examine the probability of detection and the time to detect fully exposed human targets in a low-to-moderate-clutter environment in the infrared waveband. The first test uses static images of standing humans, while the second uses videos of humans walking across the scene at various ranges and speeds. Various definitions of target and background areas are explored to calculate contrast and target size. Ultimately, task difficulty parameters (V50s) are calculated to calibrate NVESD sensor performance models, specifically NVThermIP and NV-IPM, for the human detection task. The analysis in this paper focuses primarily on the static detection task, since the analysis of the dynamic detection experiment is still in its early stages. Results are presented along with a plan for future work in this area.
Uncertainty analysis of sensor performance parameters in the shortwave infrared spectral range based on nightglow as the main lightsource
Images collected in the shortwave infrared (SWIR) spectral range, 1-2.5 μm, are similar to visual (VIS) images and are easier for a human operator to interpret than images collected in the thermal infrared range, >3 μm. The ability of SWIR radiation to penetrate ordinary glass also means that conventional lens materials can be used. The night vision capability of a SWIR camera is, however, dependent on external light sources. Under moonless conditions the dominant natural light source is nightglow, but its intensity varies both locally and temporally. These fluctuations add to variations in other parameters, and therefore the real performance of a SWIR camera under moonless conditions can be quite different from the expected performance. Measured data collected from the literature on the temporal and local variations of nightglow are presented, and the variations of the nightglow intensity and other measured parameters are quantified by computing standard and combined standard uncertainties. The analysis shows that the uncertainty contributions from the nightglow variations are significant. However, nightglow is also found to be a potentially adequate light source for SWIR applications.
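Combining the standard uncertainties mentioned above follows the usual GUM root-sum-square rule for independent components; the example component values below are made up for illustration:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties,
    per the GUM uncertainty framework."""
    return math.sqrt(sum(u * u for u in components))

# e.g. relative uncertainties from nightglow variation, detector noise,
# and calibration (illustrative values, not the paper's data)
u_c = combined_standard_uncertainty([0.12, 0.05, 0.03])
```

Because the components add in quadrature, a single dominant term such as the nightglow variation can drive the combined uncertainty almost by itself, which matches the paper's finding that the nightglow contribution is significant.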
A uniform method for analytically modeling multi-target acquisition with independent networked imaging sensors
The problem solved in this paper is easily stated: for a scenario with 𝑛 networked and moving imaging sensors, 𝑚 moving targets and 𝑘 independent observers searching imagery produced by the 𝑛 moving sensors, analytically model system target acquisition probability for each target as a function of time. Information input into the model is the time dependence of 𝘗 and 𝜏, two parameters that describe observer-sensor-atmosphere-range-target properties of the target acquisition system for the case where neither the sensor nor target is moving. The parameter 𝘗 can be calculated by the NV-IPM model and 𝜏 is estimated empirically from 𝘗. In this model 𝑛, 𝑚 and 𝑘 are integers and 𝑘 can be less than, equal to or greater than 𝑛. Increasing 𝑛 and 𝑘 results in a substantial increase in target acquisition probabilities. Because the sensors are networked, a target is said to be detected the moment the first of the 𝑘 observers declares the target. The model applies to time-limited or time-unlimited search, and applies to any imaging sensors operating in any wavelength band provided each sensor can be described by 𝘗 and 𝜏 parameters.
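A common form for the single-observer acquisition probability consistent with the P and τ parameterization above is P(t) = P∞(1 − e^(−P∞·t/τ)); assuming that form and observer independence, the networked system probability is one minus the product of the individual miss probabilities. This is a hedged sketch of the idea, not necessarily the paper's exact model:

```python
import math

def p_acquire(t, p_inf, tau):
    """Single-observer acquisition probability vs. time for a static
    observer-sensor-target condition described by P-infinity and tau."""
    return p_inf * (1.0 - math.exp(-p_inf * t / tau))

def p_system(t, observers):
    """Probability that at least one of k independent networked observers
    has acquired the target by time t; observers is a list of (p_inf, tau)."""
    miss = 1.0
    for p_inf, tau in observers:
        miss *= 1.0 - p_acquire(t, p_inf, tau)
    return 1.0 - miss
```

The product form makes the paper's observation concrete: adding observers (increasing k) can only raise the system acquisition probability, since each factor is at most one.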
SWIR range performance prediction for long-range applications
E. Guadagnoli, P. Ventura, Gianni Barani, et al.
Long-range imaging systems have applications in vessel traffic monitoring, border and coastal observation, and generic surveillance. Often, sign-reading and identification capabilities are required, and medium- or long-wave infrared systems are simply not the best solution for these tasks because of the low scene contrast. Among reflected-light imagers, the short-wave infrared (SWIR) has a competitive advantage over the visible and near-infrared spectrum, being less affected by path attenuation, scattering, and turbulence. However, predicting the long-range performance of a SWIR system still represents a challenge because of the need for accurate atmospheric modelling. In this paper, we present the key factors limiting performance in long-range applications, and show how we used popular atmospheric models to extract the synthetic simulation parameters needed for range performance prediction. We then present a case study for a long-range application where the main requirement is to read a vessel name at distances greater than 10 km. The results show a significant advantage of SWIR over visible and near-infrared solutions for long-range identification tasks.
Testing I
NIST traceable measurements of radiance and luminance levels of night-vision-goggle test-instruments
G. P. Eppeldauer, V. B. Podobedov
In order to perform radiance and luminance level measurements of night-vision-goggle (NVG) test instruments, NIST developed new-generation transfer-standard radiometers (TRs). The new TRs can perform low-level radiance and luminance measurements with SI traceability and low uncertainty. The TRs were calibrated against NIST detector/radiometer standards holding the NIST photometric and radiometric scales. An 815 nm diode laser was used at NIST for the radiance responsivity calibrations. A spectrally flat (constant) filter correction was made for the TRs to correct the spectral responsivity change of the built-in Si photodiode for LEDs peaking at different wavelengths in the different test sets. The radiance responsivity transfer to the test instruments (test sets) is discussed. The radiance values of the test instruments were measured with the TRs, which propagate traceability to the NIST detector-based reference scales. The radiance uncertainty obtained from three TR measurements was 4.6 % (𝑘=2) at a luminance of 3.43 × 10⁻⁴ cd/m². The output radiance of the previously used IR sphere source, and the radiance responsivity of a previously used secondary-standard detector unit originally calibrated against an IR sphere source, were also measured with the TRs. The performance of the NVG test instruments was evaluated, and the manufacturer-produced radiance and luminance levels were calibrated with SI/NIST traceability.
Methodology for lens transmission measurement in the 8-13 micron waveband: integrating sphere versus camera-based
Transmission is a key parameter describing an IR lens, but it is also often the subject of controversy. One reason is the misinterpretation of “transmission” in infrared camera practice. If the camera lens is replaced by an alternative one, the signal is affected by two parameters: it is inversely proportional to the square of the effective-aperture-based F-number and directly proportional to the transmission. The ability to collect energy is defined as the energy throughput (ETP), and the signal level of the IR camera is proportional to the ETP. Most published lens transmission values are based on spectrophotometric measurements of plane-parallel witness pieces obtained from coating processes. Published aperture-based F-numbers very often derive from ray-tracing values in the on-axis bundle. The following contribution is about transmission measurement. It highlights the bulk absorption and coating issues of infrared lenses. Two different setups are built and tested: an integrating-sphere (IS) based setup and a camera-based (CB) setup. The comparison of the two principles also clarifies the impact of the F-number. One difficulty in accurately estimating lens transmission lies in measuring the ratio between the signal of ray bundles deviated by the lens under test and the signal of non-deviated ray bundles without the lens (100% transmission). There are many sources of error and deviation in the LWIR region, including background radiation, reflection from “rough” surfaces, and unexpected transmission bands. Care is taken in the setup that measured signals with and without the lens are consistent and reproducible. Reference elements such as uncoated lenses are used for calibration of both setups. When solid-angle-based radiometric relationships are included, both setups yield consistent transmission values. The setups and their calibration will be described, and test results on commercially available lenses will be published.
Modulation transfer function measurement of microbolometer focal plane array by Lloyd's mirror method
Today, both military and civilian applications require miniaturized and inexpensive optical systems. One way to follow this trend is to decrease the pixel pitch of focal plane arrays (FPAs). In order to evaluate the performance of the overall optical system, it is necessary to measure the modulation transfer function (MTF) of these pixels. However, small pixels lead to higher cut-off frequencies, and therefore original MTF measurement methods able to reach these high cut-off frequencies are needed. In this paper, we present a way to extract the 1D MTF at high frequencies by projecting fringes onto the FPA. The device uses a Lloyd's mirror placed near and perpendicular to the focal plane array, so that an interference fringe pattern can be projected onto the detector. By varying the angle of incidence of the light beam, we can tune the period of the interference fringes and thus explore a wide range of spatial frequencies, particularly around the pixel cut-off frequency, which is one of the most interesting regions. The method is illustrated on a 640×480 microbolometer focal plane array with a pixel pitch of 17 µm in the LWIR spectral region.
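In a Lloyd's-mirror interferometer, the fringe period is λ/(2 sin θ), where θ is the angle of incidence, so the projected spatial frequency is 2 sin θ / λ. The short sketch below illustrates this relationship; the 10.6 µm wavelength is an assumed example value, not taken from the paper, while the 17 µm pitch matches the array described above.

```python
import math

def fringe_frequency(wavelength_m, theta_rad):
    """Spatial frequency (cycles/m) of Lloyd's-mirror fringes:
    the fringe period is wavelength / (2*sin(theta))."""
    return 2.0 * math.sin(theta_rad) / wavelength_m

def angle_for_frequency(wavelength_m, freq_cyc_per_m):
    """Incidence angle needed to project fringes at a target frequency."""
    return math.asin(freq_cyc_per_m * wavelength_m / 2.0)

# Example (assumed illumination wavelength): 10.6 um source, 17 um pitch.
# The pixel cut-off frequency is 1/pitch.
wavelength = 10.6e-6
pitch = 17e-6
cutoff = 1.0 / pitch                      # ~58.8 cycles/mm
theta = angle_for_frequency(wavelength, cutoff)
print(f"cut-off frequency: {cutoff / 1000:.1f} cycles/mm")
print(f"required angle:    {math.degrees(theta):.1f} deg")
```

Sweeping θ from near normal incidence up to this angle covers the full frequency range from DC to the pixel cut-off.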
Reporting NETD: why measurement techniques matter
Ryan K. Rogers, W. Derrik Edwards, Caleb E. Waddle, et al.
For over 30 years, the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) has specialized in characterizing the performance of infrared (IR) imaging systems in the laboratory and field. In the late 1990s, AMRDEC developed the Automated IR Sensor Test Facility (AISTF), which allowed efficient deployment testing of aviation and missile IR sensor systems. More recently, AMRDEC has tested many uncooled infrared (UCIR) sensor systems that have size, weight, power, and cost (SWAP-C) benefits for certain fielded U.S. Army imaging systems. To compensate for relatively poor detector sensitivities, most UCIR systems operate with very fast (low focal ratio, or F-number, f/#) optics. AMRDEC has recently found that measuring the noise equivalent temperature difference (NETD) with the traditional techniques used for cooled infrared systems produces biased results when applied to systems with faster f/# values or obscurations. Additionally, in order to compare these camera cores or sensor systems to one another, it is imperative to scale the NETD values accurately for differences in f/#, focus distance, and waveband. This paper will outline proper measurement techniques to report UCIR camera-core and system-level NETD, as well as demonstrate methods to scale the metric for these differences.
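One source of the bias mentioned above is the slow-optics approximation: the irradiance reaching the detector scales as 1/(4F² + 1), but the traditional shortcut uses 1/(4F²), which is increasingly wrong for fast optics. The sketch below illustrates scaling a measured NETD between two F-numbers using the exact factor; it is an illustrative radiometric argument, not the AMRDEC procedure, and the numbers are hypothetical.

```python
def scale_netd_for_fnumber(netd_meas, f_meas, f_target):
    """Scale a measured NETD to a different F-number using the exact
    solid-angle factor (4F^2 + 1) rather than the slow-optics
    approximation 4F^2.  Illustrative sketch only; ignores waveband
    and focus-distance corrections also discussed in the paper."""
    return netd_meas * (4.0 * f_target**2 + 1.0) / (4.0 * f_meas**2 + 1.0)

# A hypothetical 50 mK NETD measured at f/1.0, scaled to f/1.4:
print(round(scale_netd_for_fnumber(0.050, 1.0, 1.4), 4))  # -> 0.0884
```

For an f/1 system the approximate factor 4F² = 4 differs from the exact 4F² + 1 = 5 by 25%, which is why the traditional technique biases results for fast UCIR optics.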
Test equipment and methods to characterize fully assembled SWIR digital imaging systems
John Green, Tim Robinson
This paper describes test equipment and methods used to characterize short-wave infrared (SWIR) digital imaging systems. The test equipment was developed under the Air Force Research Laboratory (AFRL) contract Advanced Night Vision Imaging System – Cockpit Integration. It measures relative spectral responsivity, noise equivalent irradiance, dynamic range, linearity, dark noise, and image uniformity, and captures image artifacts.
Calibration of uncooled LWIR microbolometer imagers to enable long-term field deployment
Radiometric calibration methods are described that enable long-term deployment of uncooled microbolometer infrared imagers without on-board calibration sources. These methods involve tracking the focal-plane-array and/or camera-body temperatures and compensating for the changing camera response. The compensation is derived from laboratory measurements with the camera viewing a blackbody source while the camera temperature is varied in a thermal chamber. Results are shown that demonstrate absolute temperature uncertainty of 0.35 °C or better over a 24-hour period, with more than half of this uncertainty inherent in the blackbody source to which the measurements are compared. This work was driven by environmental remote sensing applications, but the calibration methods are also relevant to a wide range of infrared imaging applications.
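The compensation described above can be sketched as a simple drift model: characterize the raw response against camera temperature in the lab, then subtract the temperature-dependent offset in the field. The example below assumes a linear drift and uses hypothetical numbers; the paper's actual compensation may use a richer model.

```python
import numpy as np

# Lab characterization (hypothetical data): raw counts recorded while the
# camera views a constant-temperature blackbody and the FPA temperature
# is swept in a thermal chamber.
t_fpa_lab  = np.array([10.0, 20.0, 30.0, 40.0])      # FPA temperature, deg C
raw_counts = np.array([8050., 8130., 8210., 8290.])  # response drifts with T_fpa

# Fit a linear drift model: counts = a*T_fpa + b
a, b = np.polyfit(t_fpa_lab, raw_counts, 1)

def compensate(raw, t_fpa, t_ref=20.0):
    """Remove the FPA-temperature-dependent offset, referencing every
    frame back to a chosen reference camera temperature."""
    return raw - a * (t_fpa - t_ref)

# A field frame taken at T_fpa = 35 C maps back to the 20 C reference:
print(round(compensate(8250.0, 35.0), 1))  # -> 8130.0
```

In practice the fit would be done per pixel (or combined with a camera-body temperature term), but the structure — laboratory fit, field subtraction — is the same.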
Testing II
Spectral and angular responses of microbolometer IR FPA: a characterization method using a FTIR
Aurélie Touvignon, Alain Durand, Fabien Romanens, et al.
In order to evaluate the impact of technological evolutions on the spectral responsivity of microbolometer FPAs (focal plane arrays), as well as to find a way to estimate the mechanical stability of microbolometer pixel membranes, ULIS proposes a new method for measuring the spectral response of the detector array over a large region (area of pixels) simultaneously. This is done by adapting the standard protocol of a commercial FTIR (Fourier transform infrared) spectrometer, in which the internal IR detector is replaced by the array to be measured. All the calculations (i.e., interferogram processing) are handled externally. We use this new setup to measure the angular spectral response of the detector array and to analyse the relationship between the spectral response and the mechanical behaviour of the pixel. First, the measurement setup is presented and some preliminary technical issues are outlined. We then focus on the results obtained from measurements on 17 µm-pitch pixels over a wide range of angles of incidence (from normal to 45°). Finally, we share some theoretical insights on both these results and the inherent limitations of the protocol, using a simple optical cavity model.
NVLabCAP: an NVESD-developed software tool to determine EO system performance
Engineers at the US Army Night Vision and Electronic Sensors Directorate have recently developed a software package called NVLabCap. This software not only captures sequential frames from thermal and visible sensors, but can also perform measurements of signal intensity transfer function, 3-dimensional noise, field of view, super-resolved modulation transfer function, and image boresight. Additionally, this software package, along with a set of commonly known inputs for a given thermal imaging sensor, can be used to automatically create an NV-IPM element for that measured system. This model data can be used to determine whether a sensor under test is within certain tolerances, and the model can be used to objectively quantify measured versus specified system performance.
An alternate method for performing MRTD measurements
Alan Irwin, Jack Grigor
The minimum resolvable temperature difference (MRTD) test is one of the tests typically required to characterize the performance of thermal imaging systems. The traditional test methodology is very time-intensive, requiring data collection at multiple temperatures and target frequencies. This paper will present an alternate methodology using a controlled blackbody temperature ramp, which allows the temperature at which a target is determined “resolved” to be identified without stopping the ramp. Test results using the traditional method will be compared to test results using this alternate method.
Testing III
Characterization of SWIR cameras by MRC measurements
Cameras for the SWIR wavelength range are becoming more and more important because of their better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the minimum resolvable contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera, the achievable observation range can be calculated for every combination of target size, illumination level, and weather condition. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines for MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g., the USAF 1951 target) manufactured with contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g., an incandescent lamp) is necessary, and the irradiance has to be measured in W/m² instead of lux (lm/m²). Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band, in-house-designed Vis-SWIR camera system are discussed.
Wafer level test solutions for IR sensors
Sebastian Giessmann, Frank-Michael Werner
Wafer probers provide an established platform for performing electrical measurements at wafer level for CMOS and similar process technologies. For testing IR sensors, the requirements go beyond standard prober capabilities. This presentation gives an overview of state-of-the-art IR sensor probing systems, ranging from flexible engineering solutions to automated production needs. Cooled sensors typically need to be tested at a target temperature below 80 K. Not only is the device temperature important, but the surrounding environment must also prevent background radiation from reaching the device under test. To achieve that, a cryogenic shield protects the movable chuck. By operating this shield to attract residual gases inside the chamber, a completely contamination-free test environment can be guaranteed. Special black coatings furthermore support the suppression of stray light. Typically, probe-card needles operate at ambient (room) temperature when contacting the wafer. To avoid heat ingress, which can result in distorted measurements, the probe card is fully embedded into the cryogenic shield. A shutter system located above the probe field is designed to switch between the microscope view, used to align the sensor under the needles, and the test-relevant setup. This includes a completely closed position for dark-current measurements. Another position holds an optional filter glass with the required aperture opening. The necessary infrared sources to stimulate the device are located above.
Systems
Performance analysis of panoramic infrared systems
Orges Furxhi, Ronald G. Driggers, Gerald Holst, et al.
Panoramic imagers are becoming more commonplace in the visible part of the spectrum. These imagers are often used in the real estate market, extreme sports, teleconferencing, and security applications. Infrared panoramic imagers, on the other hand, are not as common, and only a few have been demonstrated. A panoramic image can be formed in several ways: pan and stitch, distributed aperture, or omnidirectional optics. When omnidirectional optics are used, the detected image is a warped view of the world, mapped onto the focal plane array in a donut shape. The final image on the display is the mapping of this omnidirectional donut-shaped image back to a panoramic world view. In this paper we analyze the performance of uncooled thermal panoramic imagers that use omnidirectional optics, focusing on range performance.
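The donut-to-panorama mapping described above is a polar-to-Cartesian resampling: each output column corresponds to an azimuth angle and each output row to a radius within the annulus. A minimal nearest-neighbour sketch (all dimensions and the synthetic test frame are hypothetical):

```python
import numpy as np

def unwarp_donut(img, cx, cy, r_in, r_out, out_w=720, out_h=120):
    """Map an omnidirectional 'donut' image (an annulus centred at
    (cx, cy)) to a panoramic strip via nearest-neighbour polar
    resampling.  Columns sweep azimuth; rows sweep radius."""
    pano = np.zeros((out_h, out_w), dtype=img.dtype)
    for row in range(out_h):
        r = r_in + (r_out - r_in) * row / (out_h - 1)
        for col in range(out_w):
            theta = 2.0 * np.pi * col / out_w
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
                pano[row, col] = img[y, x]
    return pano

# Tiny synthetic example: a 64x64 frame containing a bright annulus.
frame = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.mgrid[0:64, 0:64]
rr = np.hypot(xx - 32, yy - 32)
frame[(rr >= 10) & (rr <= 25)] = 255
pano = unwarp_donut(frame, 32, 32, 10, 25, out_w=90, out_h=16)
print(pano.shape)  # -> (16, 90)
```

A real implementation would use bilinear interpolation and vectorized index maps; the nested loops here are only for clarity. Note that this resampling stretches the inner radius and compresses the outer one, which is one reason the range performance of such systems varies across the field.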
Targets, Backgrounds, and Atmospherics I
Evaluation of turbulence mitigation methods
Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. Many different methods exist to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium-turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation, and deconvolution. The second method consists of local motion compensation, fore-/background segmentation, and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is performed on representative sequences captured during a NATO SET-165 trial in Dayton. The amount of blurring and tilt in the imagery appear to be relevant measures for such an evaluation. It is shown that both methods improve the imagery by reducing blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented, showing that turbulence mitigation using the first method improves the recognition range by up to 25% for an operational optical system.
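The frame-selection step in the first method can be illustrated with a "lucky imaging" sketch: score each frame with a sharpness metric, keep the sharpest fraction, and average them. The gradient-energy metric and the keep-fraction below are hypothetical choices for illustration, not the paper's actual pipeline (which also registers frames locally and deconvolves).

```python
import numpy as np

def sharpness(frame):
    """Simple gradient-energy sharpness metric (one of many possible
    choices); turbulence-blurred frames score lower."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def select_and_average(frames, keep_fraction=0.5):
    """Keep the sharpest fraction of frames ('lucky imaging') and
    average them to reduce turbulence-induced blur and noise."""
    scored = sorted(frames, key=sharpness, reverse=True)
    n_keep = max(1, int(len(scored) * keep_fraction))
    return np.mean(scored[:n_keep], axis=0)

# Toy demo: one high-contrast frame among two featureless ones.
sharp = np.zeros((8, 8)); sharp[::2] = 1.0
flat = np.full((8, 8), 0.5)
best = select_and_average([flat, sharp, flat], keep_fraction=0.34)
print(np.allclose(best, sharp))  # -> True
```

Averaging only the selected frames preserves high-frequency content that a plain temporal mean over all frames would wash out.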
Developing a broad spectrum atmospheric aerosol characterization for remote sensing platforms over desert regions
Remotely sensed imagery of targets embedded in Earth’s atmosphere requires characterization of the aerosols between the space-borne sensor and the ground to accurately analyze observed target signatures. The impact of aerosol microphysical properties on retrieved atmospheric radiances has been shown to negatively affect the accuracy of remotely sensed data collections. Temporally and regionally specific meteorological conditions require exact site atmospheric characterization, involving extensive and timely observations. We present a novel methodology that fuses regional aerosol micro-pulse lidar (MPL) observations from White Sands, New Mexico, with sun photometer direct and diffuse products for broad-wavelength (visible to longwave infrared) input into the radiative transfer model MODTRAN5. The resulting radiances are compared with those retrieved from the NASA Aqua MODIS instrument.
FTIR characterization of atmospheric fluctuations along slant paths
Sensor system noise needs to be characterized to determine the limits of detecting a feature from an observed source. For passive infrared spectral sensors, the noise is characterized in terms of the noise equivalent spectral radiance (NESR). The total NESR (NESR_total) has two components: the internal NESR of the instrument (NESR_instr) and the external NESR of the path being viewed by the sensor (NESR_path). In the case of an FTIR instrument, NESR_instr is measured by viewing a stable blackbody at close range, thereby removing the effects of the path on the spectrally dependent noise. The standard deviation of the sine transform of the interferogram is then computed to estimate NESR_instr. In our application, however, NESR_path is our signal, and it is measured by viewing an atmospheric scene and removing the effect due to the instrument. A histogram of the spectrally dependent noise spectrum is then computed. The full width of this histogram is taken at the 1/e² points and is driven by temperature and species-concentration fluctuations along the path. Both of these effects can dominate over the instrument noise. In the following, we compare preliminary values of path spectral fluctuations determined from a ground-based FTIR for a selected slant path to measured values of the refractive index structure constant (Cn²) along the same path.
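Two pieces of the analysis above can be sketched directly: separating the path component from the total (assuming, as is conventional, that instrument and path noise add in quadrature) and taking the full width of a noise histogram at the 1/e² points of its peak. The Gaussian demo data below are synthetic, used only to show that the 1/e² full width of a Gaussian is 4σ.

```python
import numpy as np

def nesr_path(nesr_total, nesr_instr):
    """Path component of the noise-equivalent spectral radiance,
    assuming instrument and path contributions add in quadrature."""
    return np.sqrt(np.maximum(nesr_total**2 - nesr_instr**2, 0.0))

def full_width_1_over_e2(samples, bins=256):
    """Full width of a sample histogram at the 1/e^2 points of its peak."""
    hist, edges = np.histogram(samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[hist >= hist.max() / np.e**2]
    return above.max() - above.min()

print(nesr_path(5.0, 3.0))  # -> 4.0

# For Gaussian fluctuations the 1/e^2 full width is 4*sigma:
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 200_000)
print(round(full_width_1_over_e2(noise), 1))
```

When the path fluctuations dominate, NESR_total ≈ NESR_path and the quadrature subtraction barely changes the estimate; it matters most when the two components are comparable.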
Targets, Backgrounds, and Atmospherics II
Validation of atmospheric turbulence simulations of extended scenes
Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems. Realistic simulated imagery is needed to evaluate the effects of turbulence and provide input for the evaluation of mitigation and image processing algorithms, since turbulence-based data collections can be cost prohibitive and time consuming. In this work we validate an existing turbulence image simulator against a well-characterized dataset, including resolution targets. The robust dataset was collected through a diurnal cycle for a variety of ranges.
Aerosol MTF revisited
Different views of the significance of the aerosol MTF have been reported. For example, one recent paper [OE, 52(4)/2013, pp. 046201] claims that for the aerosol MTF, "contrast reduction is approximately independent of spatial frequency, and image blur is practically negligible". On the other hand, another recent paper [JOSA A, 11/2013, pp. 2244-2252] claims that aerosols "can have a non-negligible effect on the atmospheric point spread function". We present clear experimental evidence of common, significant aerosol blur, and evidence that aerosol contrast reduction can be extremely significant. In the IR, it is more appropriate to refer to such phenomena as the aerosol-absorption MTF. The role of imaging-system instrumentation in such MTF is also addressed.
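The debate above can be framed with the widely cited practical (Kopeika-style) aerosol MTF form, in which the MTF rolls off quadratically up to a cut-off frequency and is flat (pure contrast loss, no further blur) beyond it. The sketch below is illustrative only; the coefficient values and cut-off are assumed, and the cited papers dispute precisely how large each regime's effect is in practice.

```python
import math

def aerosol_mtf(nu, z_km, a_abs, s_scat, nu_c):
    """Practical aerosol MTF sketch (assumed Kopeika-style form):
        MTF(nu) = exp(-a_abs*z - s_scat*z*(nu/nu_c)^2)  for nu <= nu_c
        MTF(nu) = exp(-(a_abs + s_scat)*z)              for nu >  nu_c
    a_abs: absorption coefficient (1/km), s_scat: scattering
    coefficient (1/km), nu_c: aerosol cut-off spatial frequency."""
    if nu <= nu_c:
        return math.exp(-a_abs * z_km - s_scat * z_km * (nu / nu_c) ** 2)
    return math.exp(-(a_abs + s_scat) * z_km)

# Below the cut-off the roll-off acts as blur; above it the MTF is flat,
# i.e. frequency-independent contrast reduction (assumed example values):
print(aerosol_mtf(0.0, 1.0, 0.1, 0.3, 50.0))    # exp(-0.1): contrast term only
print(aerosol_mtf(100.0, 1.0, 0.1, 0.3, 50.0))  # exp(-0.4): flat plateau
```

Whether the quadratic (blur) region or the flat plateau dominates for a given scenario is exactly the point of contention between the two cited papers.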
DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis
Richard L. Espinola, Kevin R. Leonard, Roger Thompson, et al.
Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long-range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful for developing new models and simulations of turbulence, as well as for providing a standard baseline for comparing sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field-collected video sequences.
Three dimensional temperature estimation of a fire plume using multiple longwave infrared camera views
Michele B. Lohr, Michael E. Thomas, Todd M. Neighoff, et al.
In order to determine true radiometric quantities in intense fires, a three-dimensional (3D) understanding of the fire's radiometric properties is desirable, e.g., for estimating peak fire temperatures. Imaging pyrometry with a single infrared camera view can provide only two-dimensional, path-averaged radiometric information. Multiple camera views, however, can form the basis for determining 3D radiometric information such as radiance, emissivity, and temperature. Analytically, the fire can be divided into sub-volumes in which radiometric properties are assumed roughly constant. Using geometric and thermal equilibrium relationships between the fire sub-volumes, together with LWIR camera imagery acquired at multiple, carefully defined camera views, the radiometric properties of each sub-volume can be estimated. In this work, initial proof-of-principle results were obtained by applying this analysis to sets of LWIR camera imagery acquired during intense (2500–3000 K) fires. We present 3D radiance and temperature maps of the fires obtained using this novel approach.
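The basic radiance-to-temperature step underlying any such pyrometry is inverting Planck's law at a measurement wavelength. The sketch below assumes unit emissivity, a single wavelength, and no path effects — none of which hold for a real fire, where the paper's sub-volume analysis is precisely what recovers emissivity and removes path averaging.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law for temperature at a single wavelength
    (assumes emissivity = 1 and no intervening path)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * K * math.log(a / radiance + 1.0))

# Round trip at 10 um for a fire-like temperature:
L = planck_radiance(10e-6, 2800.0)
print(round(brightness_temperature(10e-6, L), 1))  # -> 2800.0
```

For a real sub-volume, the measured radiance is an emissivity-weighted sum along each line of sight, which is why multiple views are needed to disentangle temperature from emissivity.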
Technologies for Synthetic Environments: Hardware-in-the-Loop
Development and evaluation of technologies for testing visible and infrared imaging sensors
H. S. Lowry, S. L. Steely, W. J. Phillips, et al.
Ground testing of space and airborne imaging sensor systems is supported by visible-to-long-wave infrared (LWIR) imaging sensor calibration and characterization, as well as hardware-in-the-loop (HWIL) simulation with high-fidelity complex scene projection to validate sensor mission performance. To accomplish this successfully, the technologies used in space simulation chambers for such testing must be developed and evaluated, including emitter-array cryotesting, silicon-carbide mirror cryotesting, and flood-source development. This paper provides an overview of the efforts being investigated and implemented at Arnold Engineering Development Complex (AEDC).
Real-time scene and signature generation for ladar and imaging sensors
Leszek Swierkowski, Chad L. Christie, Leonid Antanovskii, et al.
This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
Real-time simulation of combined short-wave and long-wave infrared vision on a head-up display
Landing under adverse weather conditions can be challenging, even if the airfields are well known to the pilots. This is true for civil as well as military aviation. In this paper we concentrate especially on fog conditions. The work has been conducted within the project ALICIA, a research and development project co-funded by the European Commission under the Seventh Framework Programme. ALICIA aims at developing new and scalable cockpit applications which can extend the operation of aircraft in degraded conditions: All Conditions Operations. One of the systems developed is a head-up display that can show generated symbology together with a raster-mode infrared image. We detail how we implemented a real-time simulation of a combined short-wave and long-wave infrared image for landing. A major challenge was to integrate several existing simulation solutions, e.g., for visual simulation and sensors, with the required databases. For the simulations, DLR's in-house sensor simulation framework F3S was used, together with a commercially available airport model that had to be heavily modified in order to provide realistic infrared data. Special effort was invested in a realistic impression of runway lighting under foggy conditions. We present results and sketch further improvements for future simulations.
Ultrahigh-temperature emitter pixel development for scene projectors
Kevin Sparkman, Joe LaVeigne, Steve McHugh, et al.
To meet the needs of high-fidelity infrared sensors, Santa Barbara Infrared Inc. (SBIR) has developed, under the Ultra High Temperature (UHT) development program, new infrared emitter materials capable of achieving extremely high temperatures. The current state-of-the-art arrays, based on the MIRAGE-XL generation of scene projectors, are capable of producing imagery with mid-wave infrared (MWIR) apparent temperatures up to 700 K with response times of 5 ms. The Test Resource Management Center (TRMC) Test and Evaluation/Science and Technology (T&E/S&T) Program, through the U.S. Army Program Executive Office for Simulation, Training and Instrumentation (PEO STRI), has contracted with SBIR and its partners to develop a new resistive array based on these new materials, using a high-current read-in integrated circuit (RIIC) capable of achieving higher temperatures as well as faster frame rates. The status of that development is detailed in this paper, including performance data from prototype pixels.
Scalable emitter array development for infrared scene projector systems
Kevin Sparkman, Joe LaVeigne, Steve McHugh, et al.
Several new technologies developed over recent years enable a fundamental change in scene projection for infrared hardware-in-the-loop test. Many of the innovations are in read-in integrated circuit (RIIC) architecture, which can lead to an operational and cost-effective solution for producing large emitter arrays based on the assembly of smaller sub-arrays. Array sizes of 2048×2048 and larger are required to meet the high-fidelity test needs of today’s modern infrared sensors. The Test Resource Management Center (TRMC) Test and Evaluation/Science and Technology (T&E/S&T) Program, through the U.S. Army Program Executive Office for Simulation, Training and Instrumentation (PEO STRI), has contracted with SBIR and its partners to investigate integrating new technologies in order to achieve array sizes much larger than are available today. SBIR and its partners have undertaken several proof-of-concept experiments that provide the groundwork for producing a tiled emitter array. Herein we report on the results of these experiments, including the demonstration of edge connections formed between different ICs with a gap of less than 10 µm.
Poster Session
Background character research for synthetical performance of thermal imaging systems
Song-lin Chen, Ji-hui Wang, Xiao-wei Wang, et al.
The background is usually assumed to be uniform when evaluating the performance of thermal imaging systems; however, the impact of the background cannot be ignored for target acquisition in reality, and background character is therefore an important research topic for thermal imaging technology. A background noise parameter σ is proposed in the MRTD model and used to describe background character. Background experiments were designed, and the character of some typical backgrounds (namely lawn, concrete pavement, trees, and snow) was analyzed through σ. The MRTD including σ was introduced into the MRTD-channel width (CW) model, and the impact of the above typical backgrounds on the target information quantity was analyzed with the MRTD-CW model including background character. The target information quantity for the different backgrounds was calculated by MRTD-CW and compared with that of the TTP model. A target acquisition performance model based on MRTD-CW with background character will be researched in the future.
Application of responsivity and noise evaluation method to infrared thermal imaging sensors
Dong-Ik Kim, Ghiseok Kim, Geon-Hee Kim, et al.
In this study, an evaluation method for the responsivity and noise characteristics of a commercial infrared thermal imaging camera and a custom-made sensor module is presented. Signal transfer functions (SiTFs) and noise equivalent temperature differences (NETDs) of the two sensor modules were obtained by using a differential-mode blackbody that is able to control the temperature difference ΔT between an infrared target and its background. We verified the suitability of our evaluation method through a comparison between the measured NETD and the specification of the camera. In addition, the 0.01 K difference between the two noise equivalent temperature differences calculated with and without nonuniformity correction suggests that nonuniformity correction is an essential process in the evaluation of infrared thermal imaging cameras. Finally, in the case of the custom-made sensor module, only the temporal NETD was determined because of its higher nonuniformity.
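The SiTF/NETD relationship described above is straightforward: the SiTF is the slope of output signal versus blackbody ΔT, and the NETD is the temporal noise divided by that slope. The sketch below uses hypothetical counts and ΔT values, not the paper's measurements.

```python
import numpy as np

def sitf(signal_counts, delta_t):
    """Signal transfer function: slope of output counts versus
    target-to-background temperature difference (counts per kelvin),
    estimated by least squares."""
    slope, _ = np.polyfit(delta_t, signal_counts, 1)
    return slope

def netd(temporal_noise_counts, sitf_counts_per_k):
    """NETD = temporal noise divided by responsivity (SiTF)."""
    return temporal_noise_counts / sitf_counts_per_k

# Hypothetical measurement: mean counts at several blackbody dT settings.
delta_t = np.array([0.0, 1.0, 2.0, 3.0])          # K
counts  = np.array([1000., 1200., 1400., 1600.])  # counts
s = sitf(counts, delta_t)                          # 200 counts/K
print(round(netd(10.0, s), 3))                     # -> 0.05 (K)
```

In a full evaluation the temporal noise would come from the standard deviation of a pixel's signal over a frame sequence, and — as the abstract notes — nonuniformity correction should be applied first, since residual fixed-pattern noise otherwise inflates the estimate.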
WAHRSIS: A low-cost high-resolution whole sky imager with near-infrared capabilities
Soumyabrata Dev, Florian M. Savoy, Yee Hui Lee, et al.
Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high and their flexibility is limited. Therefore, we built a new daytime whole sky imager (WSI) called the Wide Angle High-Resolution Sky Imaging System (WAHRSIS). The strengths of our new design are its simplicity, low manufacturing cost, and high image resolution. Our imager captures the entire hemisphere in a single picture using a digital camera with a fish-eye lens. The camera was modified to capture light across the visible and near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.