- Front Matter: Volume 11001
- Modeling I
- Modeling II
- Modeling III
- Modeling IV
- Modeling V
- Modeling VI
- Test I
- Test II
- Atmospheric Effects I
- Atmospheric Effects II
- Poster Session
Front Matter: Volume 11001
This PDF file contains the front matter associated with SPIE Proceedings Volume 11001, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Modeling I
30 years of value engineering to the IR community
The SPIE Infrared Imaging Systems: Design, Analysis, Modeling and Testing subconference is now holding its 30th annual conference, one of the longest-running conferences in the SPIE technical conference series. Over the years, this working group and its government, industry, and academic participants have shared technical innovations on all aspects of IR and EO design approaches, analysis methods, and modeling and simulation tools, along with extensive discussions of a wide range of testing methods. The wealth of information, training, and collaboration resulting from this conference has helped form, shape, and grow the careers of sensor systems engineers around the world, fostering new technology growth and proliferation to benefit us all. This presentation looks back at some of the significant highlights of the conference, notable technical concepts and in some cases their evolution, and key papers, and shows how our industry has significantly benefited from this SPIE gem.
Modeling II
Evaluating the performance of reflective band imaging systems: a tutorial
At NVESD, the targeting task performance (TTP) metric applies a weighting of different system specifications, determined from the scene geometry, to calculate a probability of task performance. In this correspondence we detail how to use an imaging system specification document to obtain a baseline performance estimate with the Night Vision Integrated Performance Model (NV-IPM), along with the corresponding input requirements and potential assumptions. We then discuss how measurements can be performed to update the model to provide a more accurate prediction of performance, detailing the procedures taken at the NVESD Advanced Sensor Evaluation Facility (ASEF) using the Night Vision Laboratory Capture (NVLabCap) software. Finally, we show how the outputs of the measurement can be compared with those of the initial specification-sheet-based model and evaluated against a requirements document. The modeling components and data set produced for this work are available upon request and will serve as a benchmark for both modeling and measurement methods.
Nonlinear pixel non-uniformity: emulation and correction
All infrared focal plane array (FPA) sensors suffer from spatial non-uniformity, or fixed-pattern noise (FPN). The severity of the FPN depends on the underlying manufacturing materials, methods, and tolerances, and can greatly affect overall imager performance. A key part of sensor characterization is the ability to map a known input radiance to an observed output digital count value. The presence of FPN requires a per-pixel response to be measured and specified. With this forward model defined, its inverse can be used to correct the spatial variation and ensure FPN does not corrupt other measurement estimates. In general, both the forward and inverse models are nonlinear and require special care to ensure correct implementation. In this correspondence we outline a least squares emulation and correction estimation method for linear and nonlinear correction terms. We discuss the tradeoffs between the computational complexity of different nonlinear functions and the potential gains in reduction of fixed-pattern noise. The algorithms use centering and scaling to improve numerical stability and are computationally efficient. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks File Exchange [1].
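As a concrete illustration of the linear case described above, the sketch below fits a per-pixel gain and offset by least squares from uniform-scene frames at known radiance levels, using the centering and scaling mentioned in the abstract. This is an assumption-laden Python sketch, not the paper's Matlab implementation; function names and the two-parameter (purely linear) model are illustrative only.

```python
import numpy as np

def fit_nuc(flat_frames, radiances):
    """Fit a per-pixel gain/offset (linear non-uniformity model) by least squares.

    flat_frames: (K, H, W) frames of uniform scenes at K known radiance levels.
    radiances:   (K,) known input radiance levels.
    The radiance axis is centered and scaled before fitting, which improves
    numerical stability (as the abstract notes for the general method).
    """
    x = np.asarray(radiances, dtype=float)
    mu, sd = x.mean(), x.std()
    xs = (x - mu) / sd                                 # centered/scaled abscissa
    A = np.stack([xs, np.ones_like(xs)], axis=1)       # (K, 2) design matrix
    y = flat_frames.reshape(len(x), -1)                # (K, H*W) per-pixel responses
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # per-pixel [slope, intercept]
    slope, intercept = coef
    # Undo the centering/scaling to express gain/offset in radiance units.
    gain = slope / sd
    offset = intercept - gain * mu
    h, w = flat_frames.shape[1:]
    return gain.reshape(h, w), offset.reshape(h, w)

def correct(frame, gain, offset):
    """Invert the forward model: estimated radiance = (counts - offset) / gain."""
    return (frame - offset) / gain
```

Applying `correct` to a raw frame removes the fixed-pattern component captured by the fitted forward model; the nonlinear terms discussed in the paper would add higher-order columns to the design matrix.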
SWIR sensor "see-spot" modelling and analysis
The reduced Rayleigh scattering of SWIR radiation, compared to the visible and NIR bands, can be exploited to obtain higher-contrast images even under challenging atmospheric conditions. Additionally, the SWIR band neatly covers the most popular wavelengths used for laser designation and ranging, and hence SWIR imagers can be used to target and detect these sources. A SWIR sensor can also be used in night vision applications by taking advantage of an atmospheric phenomenon called night sky radiance (or night glow), which emits five to seven times more illumination than starlight, nearly all of it at SWIR wavelengths. This paper presents a radiometry model intended for the design and analysis of a SWIR imaging sensor, expanded to include night-time scenarios and laser “see-spot” range performance. The model’s input variables are also made compatible with the industry-standard NV-IPM range performance model, enabling cross-correlation between the two models’ range performance predictions. Some SWIR sensor design examples that trade off imaging range performance against “see-spot” range performance are presented, and the results are discussed.
Sunburn study of VOx microbolometers
Sunburn is a phenomenon which occurs when pixels are damaged, either temporarily or permanently, as a result of direct exposure to sun radiation. Consequently, the image exhibits artifacts seen as white or black spots. In this study we investigate the effect of the F/# on sunburn in VOx bolometers. The experimental setup was designed to simulate sun radiation at flight conditions. We study the sunburn intensity, its recovery time, and its dependence on F/#. As part of the data analysis, we used an SNR model to characterize the sunburn intensities. A good correlation was found between the SNR model and naked-eye observation of the sunburn spots.
Modeling III
Experiments in detecting obscured objects using longwave infrared polarimetric passive imaging
Polarization has been shown to improve object-clutter discrimination in longwave infrared imaging, particularly if the object and clutter have the same apparent surface temperature and the viewing angle relative to an object's surface is off normal. This work describes experimentation to investigate the feasibility of using polarimetric infrared imagery to enhance object-clutter discrimination when the object is hidden by foliage. Many obscurations have small gaps where optical signatures from background objects can be partially seen. In long range imaging, large pixel size typically creates heterogeneous pixel mixtures consisting of multiple material surfaces. This mixture degrades an object's signature; however, due to the significant polarization contrast from the materials, object-clutter discrimination is still possible. Methodology and results from controlled experiments are presented herein which demonstrate the potential capability of object detection using polarization sensitive imagery.
Range performance across the field of view of a camera
Wide Field of View (WFOV) cameras have the potential to increase situational awareness. To achieve larger fields of view, engineers and scientists can introduce a distorted lens system with a non-rectilinear optical projection into the camera. These large angular spans and distorted optical projections change the instantaneous Field of View (iFOV) of each pixel across the Focal Plane Array (FPA) enough to affect the overall performance of the imaging system. In this correspondence, we provide an example of how to evaluate the per-pixel performance of a WFOV camera system. We begin by using the Targeting Task Performance (TTP) metric and the Night Vision Integrated Performance Model (NV-IPM) to predict the performance of a WFOV camera based on its expected intrinsic camera parameters, such as focal length, ideal optical projection model, resolution, and noise properties. We then measure the Modulation Transfer Function (MTF), Noise Equivalent Temperature Difference (NETD), and iFOV across the FPA using two different direct measurement techniques to update the model predictions and evaluate range performance across the field of view of the camera.
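The iFOV variation mentioned above can be illustrated by differentiating the projection mapping: for a rectilinear lens, x = f·tan(θ), the angular subtense per unit focal-plane distance shrinks toward the edge of the field, while for an f-theta (equidistant) projection, x = f·θ, it is constant. The sketch below assumes these two ideal projection models only; it is not the paper's measurement method.

```python
import numpy as np

def ifov_rectilinear(x, f):
    """iFOV (radians per unit focal-plane distance) for x = f*tan(theta).

    d(theta)/dx for theta = arctan(x/f) is f / (f^2 + x^2).
    """
    return f / (f**2 + x**2)

def ifov_ftheta(x, f):
    """iFOV for an ideal f-theta (equidistant) projection, x = f*theta."""
    return np.full_like(np.asarray(x, dtype=float), 1.0 / f)

# Pixel positions across a half-FPA, in the same units as f (hypothetical values).
f = 10.0
x = np.linspace(0.0, 10.0, 5)          # out to a 45-degree field angle (rectilinear)
print(ifov_rectilinear(x, f))          # shrinks toward the edge of the FOV
print(ifov_ftheta(x, f))               # constant across the array
```

Multiplying either result by the pixel pitch gives the per-pixel iFOV; the model updates in the paper replace these ideal projections with measured ones.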
An intensified camera module for the range performance model TRM4
Range performance is the key figure for assessing and comparing electro-optical imagers for target acquisition. TRM4 is a widely used model to calculate the range performance based on parameters of the imaging system and the environmental conditions. It enables the user to assess imaging devices working in the spectral range from the visible to the thermal infrared using the same modeling principles and taking into account aliasing effects of sampled imagers. This paper presents the intensified camera module that will be included in the next TRM4 release, TRM4.v3. The module extends the modeling capabilities of TRM4 to imagers that employ an image intensifier tube coupled by means of fiber-optic tapers or lenses to a CCD or CMOS detector array. These systems combine the advantages of large detector arrays with low-light-level imaging capabilities in the visible and part of the near-infrared spectral bands, which makes them interesting for reconnaissance at night. We describe how the image intensifier tube and the coupling element are modeled and integrated into the TRM4 modeling chain. We also address the different user options for specifying relevant input data as well as the main extensions of the TRM4 modeling equations. Based on the intensified camera module, we simulate and compare the performance of an intensified camera and an electron-multiplying CCD for various night illumination conditions. Finally, we give an outlook on validating the intensified camera module against lab measurements.
Simulating human vehicle identification performance with infrared imagery and augmented reality assistance
The U.S. Army CCDC C5ISR Night Vision and Electronic Sensors Directorate is researching the use of augmented reality (AR) technologies to improve the situational awareness and decision-making capabilities of EO/IR sensor operators. A major research requirement for such technologies involves defining the accuracy of AR information required to improve human performance for tasks related to EO/IR sensors, as AR systems may unintentionally provide inaccurate information to operators, which may be worse than providing no information at all. We designed a simulation to assess human performance during a vehicle identification task (using infrared imagery) when the operator receives aid via an AR system. U.S. Soldiers, trained to identify vehicles using infrared sensors, viewed images of the vehicles at different ranges in blocks while a simulated AR system attempted to identify each vehicle. Each block of images possessed an inherent level of AR accuracy (either 100%, 75%, or 50%). Performance with AR was compared to baseline performance (i.e., completing the task with no AR assistance). We further explored human performance by examining time-constrained decisions with AR. While perfect AR information was generally used effectively by participants, AR mistakes progressively increased human errors and slowed response times. Human performance varied as the range to the target increased, indicating greater dependency on the AR system as the task became more difficult. Time constraints reduced identification accuracy and usually affected unaided and aided performance similarly. Our work demonstrates the importance of simulation as a tool for understanding the effects of AR on military task performance.
Implementation of a non-linear CMOS and CCD focal plane array model in ASSET
Electro-optical and infrared (EO/IR) sensor and scene generation models are useful tools that can facilitate understanding the behavior of an imaging system and its data processing chain under myriad scenarios without expensive and time-consuming testing of an actual system. EO/IR models are especially important to researchers in remote sensing, where truth data is required but often costly and impractical to obtain. The Air Force Institute of Technology (AFIT) Sensor and Scene Emulation Tool (ASSET) is an educational, engineering-level tool developed to rapidly generate large numbers of physically realistic EO/IR data sets. This work describes the implementation of a focal plane array (FPA) model of charge-coupled device (CCD) and complementary metal-oxide semiconductor (CMOS) photodetectors as a component in ASSET. The FPA model covers conversion of photo-generated electrons to voltage and then to digital numbers. It incorporates sense node, source follower, and analog-to-digital converter (ADC) components contributing to gain non-linearities, and includes noise sources associated with the detector and electronics such as shot, thermal, 1/f, and quantization noise. This paper describes the higher fidelity FPA and electronics model recently incorporated into ASSET, and it also details model validation using an EO/IR imager in laboratory measurements. The result is an improved model capable of rapidly generating realistic synthetic data representative of a wide range of EO/IR systems for use in algorithm development and assessment, particularly when large numbers of truth data sets are required (e.g., machine learning).
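A toy version of the electrons-to-digital-numbers chain described above can be sketched as below. All parameter values (full well, sense-node gain, ADC reference) are illustrative assumptions, not ASSET's, and only shot, read, and quantization noise are modeled; ASSET additionally includes thermal and 1/f noise and gain non-linearities.

```python
import numpy as np

def fpa_chain(photo_e, full_well=30000, sn_gain=5e-6, adc_bits=12,
              v_ref=0.2, read_noise_e=10.0, rng=None):
    """Toy electrons -> voltage -> digital-number chain.

    photo_e:      mean photo-generated electrons per pixel (array).
    sn_gain:      sense-node conversion gain, volts per electron (illustrative).
    v_ref:        ADC full-scale voltage (illustrative).
    """
    rng = np.random.default_rng(rng)
    e = rng.poisson(photo_e).astype(float)          # shot noise on photoelectrons
    e += rng.normal(0.0, read_noise_e, e.shape)     # detector/electronics read noise
    e = np.clip(e, 0, full_well)                    # well saturation
    v = e * sn_gain                                 # sense node: electrons to volts
    dn = np.floor(v / v_ref * (2**adc_bits - 1))    # ADC (adds quantization noise)
    return np.clip(dn, 0, 2**adc_bits - 1)
```

With a uniform 10,000-electron scene the mean output sits near DN 1024 for these parameters, with spread dominated by shot noise; swapping `sn_gain` for a signal-dependent function would emulate the sense-node non-linearity the paper models.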
Image quality for an IRFPA: a system integrator point of view
This paper presents the image quality issues encountered at the system level when working with infrared imagers. This work highlights some characteristics that are usually not specified for off-the-shelf components. The first part of this work discusses the need for short-term and long-term stability. We present a comparative study of two sensors in the MW band based on residual fixed-pattern noise and defective pixels. In the second part we present the issue of pattern defects. We conduct an experimental study with twelve observers to estimate the perception threshold of such defects within an infrared sequence.
Validation of an infrared sensor model with field collected imagery of unresolved unmanned aerial vehicle (UAV) targets
An IR sensor model is validated using experimentally derived peak pixel SNR versus range for detection of either an unresolved or a resolved UAV target. The model provided estimated time-averaged peak SNR values for the ranges used in the field collection. A MWIR camera and a LWIR camera provided the measured data. Commercially available UAVs were flown along a line from the cameras to a clear sky region for background. A laser range finder measured the range at seven stopping points along the path. The data resulted in five ranges of unresolved target information for the MWIR camera and four ranges for the LWIR camera. This paper provides details for using the data collected from the model to match the cameras used in the field collection. Also, the processing used to extract peak SNR versus range from imagery is presented.
Modeling IV
EO system design and performance optimization by image-based end-to-end modeling
Image-based Electro-Optical system simulation including an end-to-end performance test is a powerful tool to characterize a camera system before it has been built. In particular, it can be used in the design phase to make an optimal trade-off between performance on the one hand and SWaPC (Size, Weight, Power and Cost) criteria on the other. During the design process, all components can be simulated in detail, including optics, sensor array properties, chromatic and geometrical lens corrections, signal processing, and compression. Finally, the overall effect on the outcome can be visualized, evaluated, and optimized. In this study, we developed a detailed model of the CMOS camera system imaging chain (including scene, image processing, and display). In parallel, a laboratory sensor test, analytical model predictions, and an image-based simulation were applied to two operational high-end CMOS camera lens assemblies (CLA) with different FPA sizes (2.5K and 4K) and optics. The model simulation was evaluated by comparing simulated (display) stills and videos with recorded imagery using both physical (SNR) and psychophysical measures (acuity and contrast thresholds using the TOD methodology) at different shutter times, zoom settings, target sizes and contrasts, target positions in the visual field, and target speeds. The first results show that the model simulations are largely in line with the recorded sensor images, with some minor deviations. The final goal of the study is a detailed, validated, and powerful sensor performance prediction model.
Generalization of active radar imaging and passive imaging models applied to wide band terahertz array imaging systems
System solutions for commercial applications such as autonomous driving, augmented reality, medical imaging, and security imaging exploit active illumination. In these applications, the active source is used not only to provide photons but also to encode and decode relevant information such as range or spectral response. The wavelengths of choice range from visible to millimeter waves depending on the application and associated requirements. Across these wavelengths, the targets range from Lambertian to specular. For single-element and scanned systems, ranging is commonly modeled using conventions borrowed from the radar and antenna community. Staring and scanning systems that provide cross-range resolution are modeled using conventions borrowed from the synthetic aperture radar community or the passive imaging community. All the borrowed conventions, however, make assumptions about the size and nature of the target in relationship to the illumination and wavelength: unresolved versus resolved, and Lambertian versus specular. These assumptions are relevant for the calculation of system signal-to-noise ratio and resolution; therefore, they should be carefully considered when adopting the conventions. Examples of systems where modeling falls between active radar and passive imaging include wide-band terahertz array imaging systems and solid-state lidar systems. This paper generalizes and bridges the models used by the active radar community and the passive imaging community. We apply the model to a wide-band terahertz array imaging system enabled by terahertz array technology recently developed at imec. The model is validated using simulated measurements from a two-dimensional terahertz array.
A data-constrained algorithm for the emulation of long-range turbulence-degraded video
Atmospheric turbulence can cause significant image quality degradation in long-range, ground-to-ground imagery. There is recent interest in characterizing the performance of machine learning algorithms for long-range imaging applications. However, such a task requires a database of realistic turbulence-degraded imagery. Modeling and simulation provide a reliable, repeatable means of generating long-range data at substantial cost savings compared to live field collections. We present updates to the Night Vision Electronic Sensors Directorate (NVESD) turbulence simulation algorithm, which simulates the effect of turbulence by imposing realistic blur and distortion on pristine input imagery for a given range, turbulence condition, and set of optical parameters. Key improvements to the model are: (1) the incorporation of the exact short-exposure atmospheric modulation transfer function into the blurring routine; and (2) a random walk algorithm that generates blur and distortion statistics on the fly at the characteristic frequency of the turbulence degradations. The algorithm is fast and lightweight, computationally speaking, so as to be scalable to high-performance computing. We perform a qualitative assessment of the results against real field imagery, as well as a quantitative comparison using the structural similarity metric (SSIM).
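One way to realize a random walk with a characteristic frequency, as item (2) above describes, is a mean-reverting AR(1) process whose correlation time is set by that frequency. The sketch below is an assumption: the NVESD algorithm's actual statistics, parameterization, and use of the generated tilts are not specified in the abstract.

```python
import numpy as np

def tilt_sequence(n_frames, sigma_tilt, f_char, frame_rate, rng=None):
    """Mean-reverting random walk for frame-to-frame turbulence tilt.

    An AR(1) process whose per-frame correlation decays at the characteristic
    frequency f_char (Hz) for a given frame rate (Hz), with stationary
    standard deviation sigma_tilt. Illustrative stand-in for generating
    distortion statistics on the fly.
    """
    rng = np.random.default_rng(rng)
    rho = np.exp(-2.0 * np.pi * f_char / frame_rate)   # per-frame correlation
    tilts = np.empty(n_frames)
    tilts[0] = rng.normal(0.0, sigma_tilt)
    for k in range(1, n_frames):
        # Innovation variance chosen so the process stays stationary at sigma_tilt.
        tilts[k] = rho * tilts[k - 1] + rng.normal(0.0, sigma_tilt * np.sqrt(1.0 - rho**2))
    return tilts
```

Each frame's tilt would then drive a per-frame warp of the pristine image, giving temporally correlated distortion rather than independent jitter.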
Image visualization for infrared cameras using radiometry
Infrared (IR) cameras typically output single-channel frames with 13-16 bit dynamic range. On the other hand, typical Commercial Off-The-Shelf (COTS) monitors can display three-channel frames with only 8 bit dynamic range. Therefore, visualization of IR images while preserving details is a challenging task for utilizing the full potential of thermal cameras. In this paper, we propose a radiometry-based method, which we call the temperature quantizer, for converting 13-16 bit IR images to 8 bit single-channel images for visualization purposes. The proposed method uses the histogram of the calibrated temperature values. In this context, we collected data using an uncooled Long Wave IR (LWIR) camera, a cooled Middle Wave IR (MWIR) camera, and a cooled LWIR camera. The proposed method is compared with a basic min-max scaler and a Digital Number (DN) quantizer that uses the raw image histogram. For quantitative comparison, we used standard deviation and entropy values. Experiments show that the proposed method outperforms the reference methods both quantitatively and qualitatively.
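A quantizer driven by the histogram of calibrated temperatures, as described above, can be sketched as a histogram-equalization-style mapping through the temperature CDF. This is only an interpretation of the idea; the paper's exact binning and mapping may differ.

```python
import numpy as np

def temperature_quantize(temps, out_levels=256, bins=1024):
    """Map calibrated per-pixel temperatures to 8-bit display values.

    Builds the histogram of temperature values, forms its cumulative
    distribution, and maps each pixel through the CDF so that display
    levels are spent where temperatures actually occur.
    """
    t = np.asarray(temps, dtype=float)
    hist, edges = np.histogram(t, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    out = np.interp(t, centers, cdf) * (out_levels - 1)   # CDF value -> 0..255
    return np.clip(np.round(out), 0, out_levels - 1).astype(np.uint8)
```

Compared with a min-max scaler, this mapping keeps contrast in a mostly-ambient scene even when a few hot pixels would otherwise compress the whole dynamic range.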
Modeling V
Using synthetic environments to assess multi-sensor system performance
T. Haynes
Modern surveillance systems frequently include imaging equipment operating in more than one waveband. Setting requirements for new systems requires an understanding of the performance of individual sensors, the differences between them, and how they can be used to complement each other to improve overall system performance. Whilst it remains useful to assess sensor performance with quantified metrics using reference targets, it is also important to understand the subjective aspects of recognition and identification by human observers. This requires imagery from the sensors to assess; when sensors or relevant target sets are not available, synthetic scene generation can be used to fill in the gaps.
An update on PWP enhancement for LWIR target acquisition sensors
Robert Short,
Duke Littlejohn,
Ronald Driggers
Advanced LWIR sensors have recently emerged that have all or some of the following features: large array formats, small detectors, Fλ/d over 2 (well-sampled and low in-band aliasing), and ROICs capable of using all available photons (through deep wells, faster frame rates, or digital readout). We have spent the past year studying MTF restoration techniques for these systems. Initial applications of our restoration approach (called PWP, for the combination of small pitch, deep electron wells, and image processing) have encountered issues with real imagery. Problems for implementing restoration may include fixed-pattern noise, aliasing, and interpolation methods. We provide an update on our findings and a path forward for successful optimization of future advanced LWIR imagers.
Meteorological property and temporal variable effect on spatial semivariance of infrared thermography of soil surfaces for detection of foreign objects
The environmental phenomenological properties responsible for the thermal variability evident in the use of thermal infrared (IR) sensor systems are not well understood. The objective of this work is to understand the environmental and climatological properties contributing to the temporal and spatial thermal variance of soils. We recorded thermal images of the surface temperature of loamy soil, along with several meteorological properties such as weather condition and solar irradiance, at the Cold Regions Research and Engineering Laboratory (CRREL) facility. We assessed sensor performance by analyzing how the recorded meteorological properties affected the spatial structure, observing statistical differences in spatial autocorrelation and dependence parameter estimates.
Design of a target source and test vehicle for verifying missile tracking capability
To verify the ability to simulate infrared target energy and track targets in a laboratory environment, an infrared target energy simulation and tracking test system was studied. An infrared target source for simulating the energy distribution of a flying target, and a test vehicle capable of meeting the corresponding attitude conditions, were designed. Test software was also developed to acquire the test data. An off-axis parabolic mirror with a focal length of 700 mm is combined with folding mirrors to form a reflective collimator that simulates target distance, and two attenuators (1% and 10%) together with aperture adjustment provide three levels of target energy. A fine-adjustment lifting mechanism provides pitch-angle adjustment of 0° to 10°, a rolling mechanism provides roll-angle adjustment of 0° to 10°, and a hydraulic lifting mechanism provides 0 to 1000 mm of lift. Static and dynamic characteristics analyses were carried out for the key components of the test vehicle to ensure that it meets the requirements of strength, stiffness, and stability. The system has high precision, wide coverage, and strong versatility, and provides a good test and simulation platform for verifying infrared target tracking capability.
Modeling VI
Canonical images
Most forms of optical image formation involve the use of an optical system to form a real image on an array of sensing elements. The output from the sensing elements is a sampled image. Mathematically, this process is described by convolution of the point spread function of the system (including the sensing elements) and the projection of objects in the image plane. In general, this process cannot be done mathematically in closed form for arbitrary images. Sampling and image processing algorithms are often assessed with respect to their performance on sampled images that are accepted standards. By applying known degradations such as noise and blur to a standard image, operating on the degraded image with a prospective algorithm, and comparing the result with the original uncorrupted image, an image processing algorithm or sampling scheme can be assessed. A weakness of this approach is the fact that the accepted standard is just that: an accepted standard. The image contains within itself uncertainties associated with the original image acquisition process. These uncertainties place bounds on the utility of the image. In this research we introduce the concept of canonical images. Canonical images are closed form, mathematically computable images that retain the essentials of the linear shift invariant image formation process. We derive one form of a canonical image, show its properties, and show how complex images can be generated using superposition. We also demonstrate how arbitrary images can be decomposed into canonical images that approximate them. We discuss applications for canonical images that include modeling and simulation, sensor testing, perception testing, and algorithm development.
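The closed-form property described above can be illustrated with a simple example that is an assumption, not the paper's derivation: for a Gaussian object blurred by a Gaussian point spread function, the LSI convolution has a closed form (another Gaussian with summed variances), so the sampled image is exactly computable, and complex scenes follow by superposition.

```python
import numpy as np

def gaussian_image(x, y, x0, y0, sigma):
    """Closed-form 2-D Gaussian component centred at (x0, y0)."""
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

# Sampled image of a Gaussian object convolved with a Gaussian PSF:
# the result is exact (no numerical convolution), with variance
# sigma_obj^2 + sigma_psf^2.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
obj_sigma, psf_sigma = 2.0, 1.5
img = gaussian_image(xx, yy, 32, 32, np.hypot(obj_sigma, psf_sigma))

# Superposition: a more complex scene built from two closed-form components.
scene = img + 0.5 * gaussian_image(xx, yy, 10, 20, np.hypot(1.0, psf_sigma))
```

Because every sample is evaluated analytically, such images carry none of the acquisition uncertainty that standard test images do, which is the point the paper makes.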
Optimizing microscan for radiometry with cooled IR cameras
Fabian Göttfert,
Johannes Bohm,
Konrad Heisig,
et al.
Optical microscanning is a popular method in infrared imaging, providing a relatively cost-efficient means to increase the spatial resolution of a camera system. Here, we discuss the impact of microscan on parameters relevant for thermography applications. Unlike pure imaging applications, thermography is extremely sensitive to changes in the absolute irradiation power caused by the additional microscan optics; additional reflectance and stray radiation must be avoided or corrected for. Current and future developments in detector technology, such as reduced pixel pitch, will also pose new challenges to the feasibility of microscan. As a practical example, we present a microscan implementation adapted to the specific needs of thermography applications with infrared cameras based on a cooled detector. A fast rotating filter wheel with precisely adjusted deflection windows is used to produce the desired image shift. This allows us to utilize microscan while retaining the high imaging speeds usually required in applications employing cooled infrared detectors. Calibrated thermographic cameras are often used in applications requiring a wide span of calibrated temperature measurement ranges, reaching from -40 °C up to 2000 °C. Optimizing the design of the microscan device for compactness opens the possibility of combining the feature with additional optical filters, allowing wide temperature measurement ranges as well as imaging within different spectral windows.
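The reconstruction step common to 2×2 microscan systems, interleaving four half-pixel-shifted frames into one double-resolution image, can be sketched as below. This is the standard interleave shown schematically; the paper's filter-wheel optics produce the shifts, and the shift-to-grid-position convention here is an assumption.

```python
import numpy as np

def microscan_2x(f00, f10, f01, f11):
    """Interleave four half-pixel-shifted frames into a 2x-resolution image.

    f00: unshifted frame; f10: half-pixel shift in x; f01: half-pixel shift
    in y; f11: shift in both. Each low-resolution sample is placed on the
    fine grid position corresponding to its optical shift.
    """
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00
    out[0::2, 1::2] = f10
    out[1::2, 0::2] = f01
    out[1::2, 1::2] = f11
    return out
```

Note that interleaving increases sampling, not the optical resolution limit; for radiometry, the key point from the abstract is that each sub-frame must see the same absolute irradiation through the deflection windows.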
A method for solving 2D nonlinear partial differential equations exemplified by the heat-diffusion equation
Will Waldron
This paper explores a technique for solving nonlinear partial differential equations (PDEs) using finite differences. The method is intended for higher-fidelity analysis than first-order equations and quicker analysis than finite element analysis (FEA). The set of finite difference equations is linearized using Newton's method to find an optimal solution. Throughout the paper, the heat-diffusion equation is used as an example of method implementation. The results from using this method were checked against a simple program written in a graduate computational physics class and a NASTRAN case. Overall, the methodology in this paper produced results that matched both NASTRAN and the simple case well.
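The spatial stencil at the heart of such finite-difference schemes can be illustrated with an explicit step for the linear 2-D heat-diffusion equation. This sketch is an assumption for illustration only: the paper's method is implicit and handles the nonlinear case via Newton linearization, which this simple explicit step does not attempt.

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of u_t = alpha * (u_xx + u_yy).

    Uses the standard 5-point Laplacian stencil on the interior and holds
    the boundary values fixed (Dirichlet). Stable for dt <= dx**2 / (4*alpha).
    """
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    v = u.copy()
    v[1:-1, 1:-1] += alpha * dt * lap
    return v
```

An implicit scheme replaces this explicit update with a system of equations at the new time level; when the conductivity depends on temperature, that system becomes nonlinear, which is where the paper's Newton linearization enters.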
Test I
Parameter exploration for spectral estimation of speckle imagery
Efforts to extend speckle-based focal plane array (FPA) modulation transfer function (MTF) measurements beyond the detector Nyquist frequency have unearthed challenging spectral estimation issues. To better understand the task of speckle image spectral estimation, this paper explores the nuances of various estimation techniques, making comparisons using both real speckle imagery and simulated data. Parameters and features investigated include the number of image realizations, the size of image realizations, and the application of windows in speckle imagery spectral estimation. Real-world testing considerations, such as laser stability and the challenge of collecting significant numbers of independent image realizations, are addressed in the analysis. Results show the advantage of increasing the number of realizations on estimation variance, the robustness of smaller realization segments against spatial non-uniformities in the speckle field imagery, and the benefits of windowing image segments with regard to power spectral density (PSD) estimation accuracy.
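The two estimator features highlighted above, averaging over realizations and windowing each segment, can be sketched as a minimal 2-D averaged windowed periodogram. This is a generic Welch-style estimator offered as an illustration, not the paper's specific implementation.

```python
import numpy as np

def averaged_psd(frames, window=np.hanning):
    """Averaged, windowed 2-D periodogram over independent realizations.

    Averaging over realizations reduces the variance of the PSD estimate;
    windowing each realization (separable Hann here) reduces spectral
    leakage. The window power is normalized out.
    """
    h, w = frames[0].shape
    win = np.outer(window(h), window(w))
    norm = (win**2).sum()
    psds = [np.abs(np.fft.fft2((f - f.mean()) * win))**2 / norm for f in frames]
    return np.mean(psds, axis=0)
```

For a single periodogram of a random field, the per-bin relative standard deviation is of order one; averaging N independent realizations reduces it roughly by 1/sqrt(N), which is the variance advantage the paper quantifies.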
Through display measurement of signal intensity transfer function and noise for thermal systems
Typical thermal system performance measurements are made at a sensor's digital or analog output, while the performance of the display is characterized separately. This can be an improper assumption because additional signal processing may occur between the sensor test port and the display. Recent research has focused on the characterization of thermal system displays for better model fidelity. The next evolution in this research is to introduce a means of characterizing the signal intensity transfer function (SITF) and three-dimensional noise (3DN) performance for systems that have a display as well as a known digital output. This correspondence presents a means to characterize the SITF and 3DN performance of a thermal system when using only a display as the output.
Advancing the performance of extended area blackbody sources in order to stay ahead of the IR camera improvements
This paper discusses the improved capabilities developed for, and results from, a new generation of infrared blackbody sources. Over time, infrared sensors have evolved from single detectors to imagers, and IR imagers have recently branched into low-cost uncooled sensors and high-resolution, high-performance arrays. As IR cameras get better, the performance of the test equipment must improve to maintain a 10:1 (goal) or 4:1 (minimum) performance advantage. Two tests that rely on the stability of an extended-area blackbody source are noise equivalent temperature difference (NETD) and minimum resolvable temperature difference (MRTD). The current state of the art in NETD for a cooled camera is <18 mK, while commercially available blackbodies are stable to 2 mK, which is less than the desired 10:1 ratio. This paper presents a three-pronged approach to improving the performance of the blackbody, resulting in a greater than 4× improvement in the blackbody's stability. Data are presented over the typical testing temperature range of 5°C to 150°C. Additional improved performance parameters, including faster temperature transitions, are also discussed.
Test II
Vantablack properties in commercial thermal infrared imaging systems
We describe the use of Vantablack® in commercial infrared imaging systems, including thermal infrared cameras and test equipment. Vantablack® is an ultra-black coating developed by Surrey NanoSystems Ltd. in the United Kingdom and supported in the USA by Santa Barbara Infrared, Inc. Vantablack® was originally developed for satellite-borne blackbody calibration systems and is now available in two versions: either directly applied to surfaces using vacuum-deposition technology (Vantablack®), or sprayed and then post-processed (Vantablack®-S). In this paper, we present results for Vantablack®-S coated cold shields and blackbody calibration sources and compare their performance to other industry-standard black coatings. Also included are results of environmental testing on coated surfaces demonstrating that, while they are not meant to be touched, they withstand extremes of heat, vacuum, and thermal cycling as well as or better than other black coatings.
Atmospheric Effects I
Multispectral short-range imaging through artificial fog
This paper attempts to quantify thermal infrared (both longwave and midwave), shortwave infrared, and visible-light sensor performance under different test-chamber fogs. We find that the performance of LWIR imaging is impacted significantly less by light-to-moderate fog than the other two IR sensors and the visible imager. The paper recommends additional fog chamber tests that will be useful for the development of imaging simulation capability that accurately models fog across these wavebands.
Measuring optical turbulence using a laser DIMM in support of characterization of imaging system performance
The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University Applied Physics Laboratory) and C5ISR Center NVESD (Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance, and Reconnaissance Center Night Vision and Electronic Sensors Directorate). The intent is to develop an efficient methodology to characterize imaging system performance objectively, in a field setting, using a target resolution board while simultaneously measuring the turbulence along the camera line of sight. The initial design, development, and testing of MITA was previously reported (Hixson et al.); an additional set of field measurements and subsequent modeling results are reported here. The initial MITA design uses a transmitter at the imaging system location and a DIMM receiver in the field with the resolution target to measure the path turbulence, so a strong understanding of path-averaged turbulence reciprocity is needed for proper implementation of the MITA system. To this end, a test series was conducted to explore the reciprocity of path-averaged optical turbulence measurements using two scintillometers and the laser DIMM receiver in both bi-static and mono-static configurations. Finally, the path-averaged measurements are compared with turbulence modeled along the path based on the available meteorological data.
Profiling atmospheric turbulence using time-lapse imagery from two cameras
For effective turbulence compensation, especially in highly anisoplanatic scenarios, it is useful to know the turbulence distribution along a path. Irradiance-based techniques suffer from saturation when profiling turbulence over long ranges, so alternate techniques are currently being explored. In earlier work, a method was demonstrated to estimate turbulence parameters, such as path-weighted Cn2 and Fried's coherence length r0, from the turbulence-induced random differential motion of extended features in time-lapse imagery of a distant target. This work presents a technique to measure the distribution of turbulence along an experimental path using time-lapse imagery of a target from multiple cameras. The approach uses an LED array as the target, with two cameras separated by a few feet at the other end of the path imaging the LED board. By measuring the variances of the difference in wavefront tilts sensed by a single camera, and between the two cameras, due to pairs of LEDs with varying separations, turbulence information along the path can be extracted. The mathematical framework is discussed, and the technique has been applied to experimental data collected over a 600 m approximately horizontal path over grass. A potentially significant advantage of the method is that it is phase based, and hence can be applied over longer paths. The ultimate goal of this work is to profile turbulence remotely from a single site using targets of opportunity. Imaging elevated targets over slant paths will help in better understanding how turbulence varies with altitude in the surface layer.
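The core quantity in such tilt-based profiling, the variance of differential image motion between two tracked features, can be sketched as follows (a minimal, hypothetical illustration; the paper's multi-camera, multi-separation framework is considerably more elaborate):

```python
import numpy as np

def differential_tilt_variance(track_a, track_b, pixel_pitch, focal_length):
    """Variance of the differential image motion between two tracked
    features, in squared radians of tilt. Differencing removes motion
    common to both tracks (e.g., camera platform vibration)."""
    scale = pixel_pitch / focal_length            # radians per pixel
    diff = (np.asarray(track_a, float) - np.asarray(track_b, float)) * scale
    return np.var(diff)

# Synthetic centroid tracks (pixels): a large shared platform jitter plus
# independent unit-variance turbulence motion on each feature. The shared
# component cancels; the differential variance is the sum of the two
# independent variances (about 2 pixels^2 here).
rng = np.random.default_rng(1)
common = 5.0 * rng.standard_normal(20000)
track_a = common + rng.standard_normal(20000)
track_b = common + rng.standard_normal(20000)
var_tilt = differential_tilt_variance(track_a, track_b,
                                      pixel_pitch=5e-6, focal_length=0.5)
```

The pixel pitch and focal length above are made-up values used only to convert pixel motion to angle.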
Atmospheric Effects II
Measurement and analysis of infrared atmospheric aerosol blur
Infrared image quality can be degraded by atmospheric aerosol scattering. Aerosol interactions depend upon the atmospheric conditions and wavelength. Measurements of an edge target at range in the LWIR under hot, humid weather yielded a blur on the image plane, which is characterized by an MTF. In this experiment, the edge spread function measured at range was differentiated to obtain the line spread function and then transformed into an MTF. By dividing the total measured MTF by the imager and turbulence MTFs, the aerosol MTF was obtained. Numerical analysis performed using MODTRAN and known scattering theory was compared with the experimental results. The measured and numerical results demonstrated a significant aerosol MTF, suggesting that the aerosol MTF should be included in sensor performance analysis.
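The ESF-to-MTF chain and the MTF division described above can be sketched as follows (a minimal illustration on a synthetic edge; function names are hypothetical and no noise handling is shown):

```python
import numpy as np

def mtf_from_esf(esf):
    """Edge spread function -> line spread function (derivative) ->
    MTF (magnitude of the Fourier transform, normalized to 1 at DC)."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def aerosol_mtf(total_mtf, imager_mtf, turbulence_mtf, eps=1e-12):
    """Isolate the aerosol MTF by dividing the total measured MTF by the
    known imager and turbulence MTFs (meaningful only where the
    denominator is well above the noise floor)."""
    return total_mtf / np.maximum(imager_mtf * turbulence_mtf, eps)

# A smooth synthetic edge (tanh) gives a monotonically decreasing MTF
x = np.linspace(-10, 10, 256)
mtf = mtf_from_esf(0.5 * (1 + np.tanh(x)))

# Dividing out known imager and turbulence MTFs recovers the aerosol MTF
f = np.linspace(0, 1, 10)
imager, turb, aero = 1 - 0.5 * f, 1 - 0.2 * f, 1 - 0.1 * f
recovered = aerosol_mtf(imager * turb * aero, imager, turb)
```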
Probabilistic metrics to quantify the accuracy of sparse and redundant representation models of atmospheric turbulence
Long-range imaging requires effective compensation for the wavefront distortions caused by atmospheric turbulence. These distortions can be characterized by their effect on the point spread function (PSF). Consequently, synthesizing PSFs with the appropriate turbulence properties, for a given set of optics, is critical for modeling and mitigating turbulence. Recent work on sparse and redundant dictionary methods demonstrated a three-orders-of-magnitude reduction in the computing time needed to create synthetic PSFs, compared to traditional methods based on a wave propagation approach. The central challenge in harnessing the computational benefit of a dictionary-based approach is careful choice of the dictionary, or set of dictionaries. The choice must adequately capture the range of turbulence conditions and optical parameters present in the desired application or the computational benefits will not be realized. Thus, it is critical to understand the extent to which a dictionary, trained on data with one set of parameters, can be used to synthesize PSFs that represent a different set of experimental conditions. In this work, we examine statistical tests that provide metrics for quantifying the similarity between two sets of PSFs, then we use these results to measure dictionary performance. We show that our measure of dictionary performance is a function of the turbulence conditions and the experimental optics underlying the training data used to create a dictionary. Knowledge of the functional form of the dictionary performance metric allows us to choose the ideal dictionary, or set of dictionaries, to efficiently model a given range of turbulence and optical conditions. We find that choosing dictionary training data with slightly less turbulence than the desired turbulence condition improves the similarity between synthetic PSFs and experimentally measured PSFs.
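One simple way to quantify similarity between two sets of PSFs, standing in for the statistical tests the paper examines, is a two-sample Kolmogorov-Smirnov statistic computed on a per-PSF scalar feature (a hypothetical sketch, not the paper's specific metric):

```python
import numpy as np

def psf_width(psf):
    """Second-moment radius of a 2-D PSF: one simple scalar feature on
    which two sets of PSFs can be compared."""
    p = np.asarray(psf, dtype=float)
    p = p / p.sum()
    y, x = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    cx, cy = (p * x).sum(), (p * y).sum()
    return np.sqrt((p * ((x - cx) ** 2 + (y - cy) ** 2)).sum())

def ks_statistic(feature_a, feature_b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance between
    the empirical CDFs of a scalar feature computed per PSF
    (0 = identical distributions, 1 = fully separated)."""
    a, b = np.sort(feature_a), np.sort(feature_b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

# Identical feature samples give 0; fully separated samples give 1
widths_a = np.arange(10.0)
d_same = ks_statistic(widths_a, widths_a)
d_far = ks_statistic(widths_a, widths_a + 100)
delta_psf = np.zeros((5, 5)); delta_psf[2, 2] = 1.0
```

In practice the feature would be computed over a synthetic set and a measured set, and a low KS statistic would indicate the dictionary reproduces the measured PSF statistics well.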
Identifying low-profile objects from low-light UAS imagery using cascading deep learning
Unmanned aircraft systems (UAS) have gained utility in the Navy for many purposes, including facility needs, security, and intelligence, surveillance, and reconnaissance (ISR). UAS surveys can be employed in place of personnel to reduce safety risks, but they generate significant quantities of data that often require manual review. Research and development of automated methods to identify targets of interest in this type of imagery can provide multiple benefits, including increasing efficiency, decreasing cost, and potentially saving lives through identification of hazards or threats. This paper presents a methodology to efficiently and effectively identify cryptic target objects in UAS imagery. The approach involves flying and processing airborne imagery in low-light conditions to find low-profile objects (i.e., birds) in beach and desert-like environments. The object classification algorithms combat the low-light conditions and low-profile nature of the objects of interest using cascading models and a tailored deep convolutional neural network (CNN) architecture. The models were able to identify and count endangered birds (California least terns) and nesting sites on beaches from UAS survey data, achieving negative/positive classification accuracies on candidate images upwards of 97% and an F1 score for detection of 0.837.
Poster Session
Inverse analysis of NIR and SWIR reflectance spectra for dye mixtures in fabrics using analytical basis functions
R. Viger, S. Ramsey, T. Mayo, et al.
This study describes inverse spectral analysis of near-infrared (NIR, 0.7-0.9 μm) and shortwave infrared (SWIR, 0.9-1.7 μm) reflectance spectra for tailoring the NIR-SWIR reflectance of dyed fabrics. The parametric models used for inverse analysis are linear combinations of Gaussian functions that model dyes in fabric whose absorption spectra span the NIR/SWIR spectral range. In general, these linear combinations are not unique, which motivates investigation of optimal linear combinations of analytical basis functions in terms of analytical formulation, number of functions, and consistency with the underlying physical processes. Prototype modeling is applied to NIR/SWIR-absorbing dyes, and their mixtures, in cotton-blend fabric samples. The results of this study demonstrate parametric modeling using linear combinations of Gaussian functions to simulate NIR/SWIR spectral responses corresponding to variable dye and dye-blend concentrations in fabrics.
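Once the band centers and widths are fixed, fitting a linear combination of Gaussian basis functions to a measured spectrum reduces to linear least squares (a minimal sketch; all band parameters and concentrations below are synthetic and hypothetical):

```python
import numpy as np

def gaussian_basis(wavelengths, centers, widths):
    """Gaussian basis functions evaluated on a wavelength grid (μm).
    Returns an (n_wavelengths, n_basis) design matrix."""
    wl = np.asarray(wavelengths, dtype=float)[:, None]
    return np.exp(-0.5 * ((wl - centers) / widths) ** 2)

def fit_spectrum(wavelengths, spectrum, centers, widths):
    """Least-squares coefficients of a Gaussian-mixture model of a
    measured spectrum, with centers and widths held fixed."""
    G = gaussian_basis(wavelengths, centers, widths)
    coeffs, *_ = np.linalg.lstsq(G, spectrum, rcond=None)
    return coeffs, G @ coeffs

# Synthetic NIR/SWIR example: three made-up dye absorption bands
wl = np.linspace(0.7, 1.7, 200)
centers = np.array([0.9, 1.2, 1.5])       # band centers (μm, hypothetical)
widths = np.array([0.05, 0.08, 0.06])     # band widths (μm, hypothetical)
true_conc = np.array([1.0, 0.5, 0.8])     # dye-concentration weights
spectrum = gaussian_basis(wl, centers, widths) @ true_conc
coeffs, fitted = fit_spectrum(wl, spectrum, centers, widths)
```

When the centers and widths are themselves free parameters, the fit becomes nonlinear and generally non-unique, which is one source of the ambiguity the abstract discusses.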
Simple correction model for blurred images of uncooled bolometer type infrared cameras
Image blurring often occurs when using uncooled bolometer-type infrared cameras. We report a simple way to correct the blurred images that differs from well-known correction methods such as point-spread-function reconstruction and deconvolution. Uncooled infrared sensors have a long thermal time constant, which causes the signal to spread into adjacent pixels. This is why the images are blurred by motion and vibration, and image correction is therefore necessary to restore the signal and the shape of objects under such conditions. Our correction model enables real-time image processing owing to its simple calculation method. We demonstrated the correction model using an infrared camera attached to a vehicle, with a gyroscopic sensor to detect the orientation and motion of the camera. When the camera is shaken at 15 degrees per second, our correction model recovers signal intensity to 65% of the still-image value, compared with 32% for the uncorrected blurred image, while keeping the noise level low.
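A generic version of such a correction, inverting a first-order low-pass with the detector's thermal time constant on each pixel's time series, can be sketched as follows (an assumed forward model for illustration only, not the authors' actual correction model):

```python
import numpy as np

def deblur_bolometer(signal, tau, dt):
    """Invert a first-order thermal low-pass with time constant tau on a
    per-pixel time series sampled every dt seconds. Assumes the forward
    model y[n] = a*y[n-1] + (1-a)*x[n] with a = exp(-dt/tau)."""
    a = np.exp(-dt / tau)
    y = np.asarray(signal, dtype=float)
    x = np.empty_like(y)
    x[0] = y[0]                               # no history before sample 0
    x[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)  # closed-form inversion
    return x

# Forward-simulate the assumed low-pass on a step input, then invert it.
# tau and dt are hypothetical values (thermal time constant / frame time).
tau, dt = 0.02, 0.01
a = np.exp(-dt / tau)
x_true = np.zeros(30); x_true[10:] = 1.0
y = np.empty_like(x_true); y[0] = x_true[0]
for n in range(1, x_true.size):
    y[n] = a * y[n - 1] + (1 - a) * x_true[n]
x_rec = deblur_bolometer(y, tau, dt)
```

Note that this exact inversion amplifies high-frequency noise; a practical real-time implementation would need some regularization, consistent with the paper's emphasis on keeping the noise level low.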
Mode-selective read-in integrated circuit with improved input range for infrared scene projectors
As infrared (IR) imaging systems are used more often in military fields, the importance of IR sensor evaluation systems has grown. Owing to their non-destructiveness and cost effectiveness, hardware-in-the-loop (HWIL) systems with IR scene projectors (IRSPs) are now widely used. IRSPs generate virtual IR scenes to evaluate IR imaging systems and have two key performance parameters: thermal range and thermal resolution. Specifically, IR scene quality is determined by the thermal resolution, and increasing the input digital depth can improve it. However, the input digital depth increment is limited by system noise, and achieving sufficient thermal resolution together with a wide thermal range is difficult. In this paper, a mode-selective read-in integrated circuit (RIIC) with a native transistor is proposed. The native transistor, having an almost zero threshold voltage, increases the input range, which helps improve the noise margin. A prototype of the RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process, and its performance was estimated from measured data. Thermal resolution was less than 30 mK below 325 K and 185 mK above 325 K in high-current mode; with 14-bit digital resolution, the thermal range varied from 270-325 K to 270-990 K.
Residual stress analysis of anodic aluminum oxide thin films for infrared emitter device application
An infrared scene projector (IRSP) is a tool for evaluating IR sensors by projecting virtual IR images from an IR emitter array. IR sensor assessment using an IRSP is time- and cost-effective, and it is safer than field testing conducted by observing the actual weapon system in operation. Two important performance parameters of the IRSP are the maximum apparent temperature and the operating speed. While the apparent temperature can be increased by increasing the input power, the operating speed of the IRSP is limited by the thermal rise time, which is determined by the structural dimensions and constituent material of the IR emitter device. To improve the operating speed of the device, materials with higher thermal conductivity and lower heat capacity must be used. The emitter device has a suspended structure; therefore, a suitable material must be chosen. In this paper, anodic aluminum oxide (AAO) is proposed as a material for the emitter device. AAO is suitable for high-temperature operation owing to its mechanical strength and high melting point (~2345 K). Its higher thermal conductivity and porous structure, which reduces heat capacity, can shorten the thermal rise time and enable high-speed operation. Before applying AAO to the IR emitter device, the residual stress of AAO thin films is analyzed by varying the fabrication conditions, in order to solve the membrane deformation problem in the device fabrication process, based on FEA simulation.
Introducing a general purpose S/W for IR image generation and analysis
Recent guided missiles equipped with IR sensor systems detect or identify target objects using the infrared signal contrast between the object and the background within the infrared image obtained from the IR sensor. Whether high detectability or low observability is required, it is important to obtain information about the infrared signatures of the object under various environmental conditions prior to application. Infrared signal analyses can be performed in two different ways: by direct measurement or by computer simulation. Direct measurement can be costly or unavailable when the object is located in a hardly reachable area under various environmental conditions, while computer simulation software can be a versatile and useful tool for analyzing IR signals from the object in those harsh situations. Demand for versatile infrared signature simulation tools has recently surged in many countries for military and commercial applications. In this study, we introduce a general-purpose simulation software package developed for IR image generation and analysis. The software performs unsteady-state thermal analyses of 3-D objects constructed from different materials coated with various paints under varying environmental conditions. The software also has the capability of analyzing the CRI, the detection probability, and the detection range from the generated images.