- Front Matter: Volume 8355
- Testing I
- Testing II
- Targets, Background, and Atmospherics I
- Targets, Background, and Atmospherics II
- Smart Processing I: Joint Session with Conference 8353
- Smart Processing II: Joint Session with Conference 8353
- Targets, Background, and Atmospherics III
- Modeling I
- Modeling II
- Modeling III
- Modeling IV
- Poster Session
Front Matter: Volume 8355
This PDF file contains the front matter associated with SPIE Proceedings Volume 8355, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Testing I
Spectral responsivity calibrations of two types of pyroelectric radiometers using three different methods
Spectral responsivity calibrations of two different types of pyroelectric radiometers have been made in the infrared
region up to 14 μm in power mode using three different calibration facilities at NIST. One pyroelectric radiometer is a
temperature-controlled low noise-equivalent-power (NEP) single-element pyroelectric radiometer with an active area of
5 mm in diameter. The other radiometer is a prototype using the same type of pyroelectric detector with dome-input
optics, which was designed to increase absorptance and to minimize spectral structures to obtain a constant spectral
responsivity. Three calibration facilities at NIST were used to conduct direct and indirect responsivity calibrations tied to
absolute scales in the infrared spectral regime. We report the calibration results for the single-element pyroelectric
radiometer using a new Infrared Spectral Comparator Facility (IRSCF) for direct calibration. A combined method
using the Fourier Transform Infrared Spectrophotometry (FTIS) facility and single-wavelength laser tie-points is also
described, which calibrates standard detectors with an indirect approach. For the dome-input pyroelectric radiometer, the
results obtained from another direct calibration method using a circular variable filter (CVF) spectrometer and the FTIS
are also presented. The inter-comparison of the different calibration methods enables us to improve the responsivity
uncertainties achieved by the different facilities. For both radiometers, consistent results for the spectral power
responsivity have been obtained applying different methods from 1.5 μm to 14 μm with responsivity uncertainties
between 1 % and 2 % (k = 2). Relevant characterization results, such as spatial uniformity, linearity, and angular
dependence of responsivity, are shown. Validation of the spectral responsivity calibrations, uncertainty sources, and
improvements for each method will also be discussed.
Using GStreamer to perform real-time MRTD measurements on thermal imaging systems
The GStreamer architecture allows for simple modularized processing. Individual GStreamer elements have been
developed that allow for control, measurement, and ramping of a blackbody, for capturing continuous imagery
from a sensor, for segmenting out an MRTD target, for applying a blur equivalent to that of a human eye and a
display, and for thresholding a processed target contrast for "calling" it. A discussion of each of the components
will be followed by an analysis of its performance relative to that of human observers.
PV-MCT working standard radiometer
Sensitive infrared working-standard detectors with large active area are needed to extend the signal dynamic range of the
National Institute of Standards and Technology (NIST) pyroelectric transfer-standards used for infrared spectral power
responsivity calibrations. Increased sensitivity is especially important for irradiance mode responsivity measurements.
The noise equivalent power (NEP) of the pyroelectric transfer-standards used at NIST is about 8 nW/Hz^(1/2), equal to a
D* = 5.5 x 10^7 cm Hz^(1/2)/W. A large-area photovoltaic HgCdTe (PV-MCT) detector was custom made for the 2.5 μm to
11 μm wavelength range using a 4-stage thermoelectric cooler. An NEP at least an order of magnitude lower than that of
the pyroelectric transfer-standards was expected, as required for irradiance measurements. The large detector area was produced with multiple
p-n junctions. The periodical, multiple-junction structure produced a spatial non-uniformity in the detector response. The
PV-MCT radiometer was characterized for spatial non-uniformity of response using different incident beam sizes to
evaluate the uncertainty component caused by the spatial non-uniformity. The output voltage noise and also the current
and voltage responsivities were evaluated at different signal gains and frequencies. The output voltage noise was
decreased and the voltage responsivity was increased to lower the NEP of the radiometer. The uncertainty of the spectral
power responsivity measurements was evaluated. It is recommended to use a bootstrap type trans-impedance amplifier
along with a cold field-of-view limiter to improve the NEP of the PV-MCT radiometer.
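The quoted NEP and D* values are linked by the standard relation D* = sqrt(A_d)/NEP, where A_d is the detector active area. A minimal check in Python, using the 5 mm active-area diameter quoted for the pyroelectric transfer standard above (the function name is illustrative):

```python
import math

def specific_detectivity(nep_w_per_rthz: float, diameter_cm: float) -> float:
    """D* = sqrt(active area) / NEP, in cm*Hz^(1/2)/W.
    Assumes a circular active area of the given diameter."""
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return math.sqrt(area_cm2) / nep_w_per_rthz

# NEP = 8 nW/Hz^(1/2) with a 5 mm (0.5 cm) diameter active area
print(f"D* = {specific_detectivity(8e-9, 0.5):.2e} cm Hz^(1/2)/W")  # ~5.5e7
```

The result reproduces the D* = 5.5 x 10^7 cm Hz^(1/2)/W stated in the abstract.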
Noise estimation of an MTF measurement
The modulation transfer function (MTF) measurement is critical for understanding the performance of an EOIR system.
Unfortunately, due to both spatially correlated and spatially un-correlated noise sources, the performance of the MTF
measurement (specifically near the cutoff) can be severely degraded. When using a 2D imaging system, the intrinsic
sampling of the 1D edge spread function (ESF) allows for redundant samples to be averaged suppressing the noise
contributions. The increase in signal-to-noise ratio depends on the angle of the edge with respect to the sampling grid,
along with the specified re-sampling rate. In this paper, we demonstrate how the information in the final ESF can be used to
identify the contribution of noise. With an estimate of the noise, the noise-limited portion of the MTF measurement can be
identified. Also, we demonstrate how the noise-limited portion of the MTF measurement can be used in combination
with a fitting routine to provide a smoothed measurement.
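The redundant-sample averaging described above can be sketched as follows. This is a generic slanted-edge estimator with a per-bin noise estimate, not the authors' specific routine; it assumes the edge passes through the image center at a known angle:

```python
import numpy as np

def slant_edge_mtf(img, angle_deg, oversample=4):
    """Sketch of a slanted-edge MTF estimate: project pixel samples onto
    the edge normal, bin them into an oversampled ESF, differentiate to
    the LSF, and take the FFT magnitude. The scatter of the redundant
    samples about their bin mean gives a noise estimate."""
    rows, cols = img.shape
    y, x = np.mgrid[0:rows, 0:cols]
    theta = np.deg2rad(angle_deg)
    # signed distance of each pixel from the edge line (assumed through center)
    d = (x - cols / 2) * np.cos(theta) + (y - rows / 2) * np.sin(theta)
    bins = np.round(d * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel()).astype(float)
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    esf = sums / np.maximum(counts, 1.0)   # oversampled edge spread function
    # noise estimate: RMS scatter of samples about their bin mean
    noise_rms = (img.ravel() - esf[bins.ravel()]).std()
    lsf = np.gradient(esf)                 # line spread function
    mtf = np.abs(np.fft.rfft(lsf * np.hanning(lsf.size)))
    return mtf / mtf[0], noise_rms
```

A true slant is needed so that every oversampled bin receives samples; for an edge exactly aligned with the pixel grid the intermediate bins would be empty.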
Laser speckle MTF processing and test development for VIS and IR sensors
Using band-limited laser speckle to measure the modulation transfer function (MTF) of an image
sensor offers a simpler procedure and a less expensive laboratory setup than the traditional method
of using a knife edge at the sensor imaging plane. This speckle technique was previously
demonstrated by Glen Boreman's group on devices in the visible range. We have extended the procedure
to a short-wave infrared (IR) sensor at 1.55 μm. Similar measurements were also made at 532 nm
on a commercial visible (VIS) sensor. The experiments show the laser speckle method to be
accurate when compared to knife-edge measurements for data below Nyquist. The measured MTF data
support optical system design and image quality modeling for both VIS and IR sensing applications.
Testing II
Advanced trend removal in 3D noise calculation
While it is now common practice to use trend removal to eliminate low-frequency fixed-pattern noise in
thermal imaging systems, there is still some disagreement as to whether one means of trend removal is better
than another and whether or not the strength of the trend removal should be limited. The different methods
for trend removal will be presented, along with an analysis of the calculated noise as a function of their strengths
for various thermal imaging systems. In addition, trend removals were originally put in place
in order to suppress the low-frequency component of the Sigma VH term. It is now prudent to perform a trend
removal at an intermediate noise calculation step in order to suppress the low-frequency component of both the
Sigma V and Sigma H components. A discussion of the ramifications of this change in measurement will be
included for thermal modeling considerations.
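As an illustration of the kind of trend removal discussed, here is a minimal sketch that fits and subtracts a low-order 2D polynomial from a frame, together with a simple Sigma VH estimate; the paper compares several trend-removal variants, and this particular choice is only one generic example:

```python
import numpy as np

def remove_trend(frame, order=2):
    """Fit and subtract a low-order 2D polynomial from a frame: one
    common form of trend removal for low-frequency fixed-pattern
    structure (a generic sketch, not the specific variants compared
    in the paper)."""
    rows, cols = frame.shape
    y, x = np.mgrid[0:rows, 0:cols]
    y = y / (rows - 1) - 0.5
    x = x / (cols - 1) - 0.5
    # polynomial terms x^i * y^j with total degree <= order
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
    return frame - (A @ coef).reshape(rows, cols)

def sigma_vh(cube):
    """Sigma VH from a (frames, rows, cols) cube: std of the temporal
    mean after removing row and column means (two-way residual)."""
    m = cube.mean(axis=0)
    resid = m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()
    return resid.std()
```

Applying `remove_trend` to the temporal-mean frame before forming the directional averages is one way to suppress the low-frequency components described above.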
On-axis and off-axis characterization of MWIR and LWIR imaging systems using quadri-wave interferometry
Quadri-Wave Lateral Shearing Interferometry (QWLSI) is an innovative wavefront sensing technique that is
commercially available for MWIR and LWIR applications. We present this technology and its application to the
on- and off-axis metrology of infrared imaging systems. The bench consists only of a collimated reference beam
that creates a source point at infinity, the objective under test, and the sensor placed a few millimeters behind the focal spot.
Thanks to this direct measurement configuration, the alignment process is very simple and fast. A complete
characterization (aberrations, MTF, field curvature) for several field points is possible within a few minutes.
Rapid electro-optical (EO) TPS development in a military environment
Santa Barbara Infrared, Inc. has deployed IRWindows as an electro-optical test development and execution
environment for military Test Program Sets (TPS). TPS development for EO systems in the IRWindows
environment shows clear advantages over TPS development in ATLAS. The advantages of the IRWindows
environment are:
1. Faster learning curve (a graphical user interface is easier than a test line interface)
2. Faster TPS development time (real-time changes and the asset control interface allow for faster development)
3. An asset control panel that lets the user control assets in real time and monitor all asset functions during development
4. A Unit Under Test (UUT) image viewer that lets the user set test parameters, such as the Region of Interest, more easily and more precisely
5. Continuous-mode tests (such as MTF) that allow the user to make adjustments in real time
6. An open architecture for test modifications
This paper will outline the details of how these advantages are utilized and how not only development time is
decreased but also how test execution time can be minimized making traditionally long TPS run times on EO systems
more efficient.
Targets, Background, and Atmospherics I
Atmospheric effects on target acquisition
Imaging systems have advanced significantly in recent decades in terms of lower noise and better resolution. While
imaging hardware resolution can be limited by collection aperture size or by the camera modulation transfer function
(MTF), it is the atmosphere that usually limits image quality for long range imaging. The main atmospheric distortions
are caused by optical turbulence, absorption, and scattering by particulates in the atmosphere. The effects of the turbulent
medium over long/short exposures are image blur and wavefront tilts that cause spatio-temporal image shifts. This blur
limits the frequency of line pairs that can be resolved in the target's image and thus affects the ability to acquire targets.
The observer appears to be able to ignore large-scale distortions while small-scale distortions blur the image and degrade
resolution. Resolution degradations due to turbulence are included in current performance models by the use of an
atmospheric MTF. Turbulence distortion effects are characterized by both short and long exposure MTFs. In addition to
turbulence, scattering and absorption produced by molecules and aerosols in the atmosphere cause both attenuation and
additional image blur according to the atmospheric aerosol MTF. The absorption can have significant effect on target
acquisition in infrared (IR) imaging. In the present work, a brief overview and discussion of atmospheric effects on target
acquisition in the IR is given.
Improved motion estimation for restoring turbulence-distorted video
Artificial displacement (the apparent motion of stationary objects) is one important component of atmospheric
turbulence distortion, which has led many researchers to propose motion compensation as a solution. Defining a
sufficiently dense set of motion estimates for successful restoration is challenging, particularly for time sensitive
applications. We introduce a new control-grid implementation of optical flow that allows for rapid, analytical
solutions to the motion estimation problem. Our results demonstrate the effectiveness of using the resulting
motion field for removing artificial displacements in turbulence-distorted videos.
Impact of atmospheric aerosols on long range image quality
Image quality in high altitude long range imaging systems can be severely limited by atmospheric absorption, scattering,
and turbulence. Atmospheric aerosols contribute to this problem by scattering target signal out of the optical path and by
scattering in unwanted light from the surroundings. Target signal scattering may also lead to image blurring though, in
conventional modeling, this effect is ignored. The validity of this choice is tested in this paper by developing an aerosol
modulation transfer function (MTF) model for an inhomogeneous atmosphere and then applying it to real-world
scenarios using MODTRAN derived scattering parameters. The resulting calculations show that aerosol blurring can be
effectively ignored.
Fried deconvolution
In this paper we present a new approach to deblur the effect of atmospheric turbulence in the case of long range
imaging. Our method is based on an analytical formulation, the Fried kernel, of the atmosphere modulation
transfer function (MTF), and a framelet-based deconvolution algorithm. An important parameter is the refractive-index
structure constant, which normally requires dedicated measurements; we therefore propose a method that provides a
good estimate of this parameter from the input blurred image. The final algorithms are very easy to implement
and show very good results on both simulated blur and real images.
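For reference, the Fried kernel referred to above has a standard closed form. The sketch below uses the common plane-wave expression for the coherence diameter r0 and the usual short-exposure correction factor; the parameterization and alpha value are textbook conventions, not necessarily the exact form used in the paper:

```python
import numpy as np

def fried_parameter(cn2, path_m, wavelength_m):
    """Plane-wave Fried coherence diameter r0 (m) over a horizontal path
    with constant refractive-index structure parameter Cn^2 (m^-2/3)."""
    k = 2.0 * np.pi / wavelength_m
    return (0.423 * k**2 * cn2 * path_m) ** (-3.0 / 5.0)

def fried_mtf(freq_cyc_per_rad, r0, wavelength_m, aperture_m, alpha=0.5):
    """Atmospheric MTF in the Fried form
    exp{-3.44 (lambda f / r0)^(5/3) [1 - alpha (lambda f / D)^(1/3)]};
    alpha = 0 gives the long-exposure kernel, alpha ~ 0.5 a common
    short-exposure correction."""
    lf = wavelength_m * np.asarray(freq_cyc_per_rad, dtype=float)
    return np.exp(-3.44 * (lf / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (lf / aperture_m) ** (1.0 / 3.0)))
```

For example, Cn^2 = 1e-14 m^(-2/3) over a 1 km path at 0.5 μm gives an r0 of roughly 2 cm, a typical near-ground value.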
Turbulence stabilization
We recently developed a new approach to obtain a stabilized image from a sequence of frames acquired through
atmospheric turbulence. The goal of this algorithm is to remove the geometric distortions caused by atmospheric
motion. The method is based on a variational formulation and is efficiently solved by the use of Bregman
iterations and the operator splitting method. In this paper we study the influence of the choice
of the regularizing term in the model, and we experiment with some of the most widely used regularization
constraints in the literature.
Turbulence mitigation of short exposure image data using motion detection and background segmentation
Many remote sensing applications are concerned with observing objects over long horizontal paths and often the
atmosphere between observer and object is quite turbulent, especially in arid or semi-arid regions. Depending on the
degree of turbulence, atmospheric turbulence can cause quite severe image degradation, the foremost effects being
temporal and spatial blurring. Since the observed objects are not necessarily stationary, motion blurring can also
factor into the degradation process. At present, the majority of turbulence mitigation methods aim exclusively at the
restoration of static scenes, but there is growing interest in extending these methods to include moving
objects as well. The approach in this paper is therefore to employ block matching as a motion detection algorithm to
detect and estimate object motion in order to separate directed movement from turbulence-induced undirected motion.
This enables a segmentation of static scene elements and moving objects, provided that the object movement exceeds the
turbulence motion. Local image stacking is carried out for the moving elements, thus effectively reducing motion blur
created by averaging and improving the overall final image restoration by means of blind deconvolution.
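The block-matching step described above can be sketched with a plain sum-of-absolute-differences (SAD) search. This is a generic implementation; the block size, search radius, and the jitter threshold that would separate directed motion from turbulence-induced motion are illustrative choices:

```python
import numpy as np

def block_match(prev, curr, block=16, search=4):
    """Minimal SAD block matching: for each block of `prev`, find the
    displacement (within +/-search pixels) that best matches `curr`.
    Blocks whose best displacement exceeds the expected turbulence
    jitter would be flagged as directed object motion."""
    rows, cols = prev.shape
    vectors = {}
    for by in range(0, rows - block + 1, block):
        for bx in range(0, cols - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 and y0 + block <= rows and 0 <= x0 and x0 + block <= cols:
                        sad = np.abs(curr[y0:y0 + block, x0:x0 + block] - ref).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            vectors[(by, bx)] = best_dv
    return vectors
```

Static background blocks would then be stacked and averaged, while flagged moving blocks are stacked along their estimated trajectories.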
Targets, Background, and Atmospherics II
Short-exposure passive imaging through path-varying convective boundary layer turbulence
As is well known, the turbulent coherence diameter is evaluated via an integral over path varying turbulence.
However, a recent analysis also suggests a system aperture size effect that interacts with the coherence diameter
effect. This effect, due to the phase structure function, produces an altered behavior on the short-exposure
atmospheric modulation transfer function (MTF). This behavior can be modeled as multiplicative adjustments
to two dimensionless imaging scenario parameters. To illustrate these effects, path dependent turbulence effects
are introduced through the context of a daytime convective boundary layer scenario featuring turbulence strength
that varies as a function of height to the minus-four-thirds power. Two path geometry cases are studied: slant
path propagation above flat terrain, where the object viewed and observer are at varying heights, and propagation
between an object viewed and an observer at equal heights above the terrain situated on opposite sides of a valley.
Results for both cases show the newly proposed atmospheric MTF is unaltered in form, but that path dependent
scaling laws apply to the two governing dimensionless parameters. Scaling relations are plotted for each case
studied, and the integral relations developed can be easily computed for further specific cases.
An efficient turbulence simulation algorithm
Turbulence mitigation techniques require input data representing a wide variety of turbulent atmospheric
and weather conditions in order to produce robust results and wider ranges of applicability. In the past, this
has implied the need for numerous data collection equipment items to account for multiple frequency bands
and various system configurations. However, recent advancements in turbulence simulation techniques
have resulted in viable options to real-time data collection with various levels of available simulation
accuracy. This treatment will detail the development and implementation of an extension to the second-order
statistical turbulence simulation model presented by Repasi and others. The Repasi model is
extended to include the effects of various wavelengths, optical configurations, and short exposure imaging
on angle of arrival fluctuation statistics. The result of the development is an atmospheric turbulence
simulation technique that is physics-based but less computationally intensive than phase-based or deflector
screen approaches. In these cases, the statistical approach detailed in this paper provides the user with an
opportunity to obtain a better trade-off between accuracy and simulation run-time. The mathematical
development and reasoning behind the changes to the previous statistical model will be presented, and
sample imagery produced by the extended technique will be included. The result is a model that captures
the major turbulence effects required for algorithm development for large classes of mitigation techniques.
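A toy example of the kind of second-order tilt statistic such simulations draw on: a temporally correlated sequence of per-frame angle-of-arrival fluctuations with a prescribed standard deviation, here generated as an AR(1) process. This is a generic stand-in, not the Repasi model or the extension described above:

```python
import numpy as np

def aoa_tilt_series(n_frames, sigma_rad, corr=0.9, seed=0):
    """Generate a temporally correlated angle-of-arrival tilt sequence
    as an AR(1) process with stationary std sigma_rad. The corr value
    sets the frame-to-frame correlation (both parameters illustrative)."""
    rng = np.random.default_rng(seed)
    drive = sigma_rad * np.sqrt(1.0 - corr**2)  # keeps stationary std at sigma_rad
    tilts = np.empty(n_frames)
    t = 0.0
    for i in range(n_frames):
        t = corr * t + drive * rng.normal()
        tilts[i] = t
    return tilts
```

Shifting each simulated frame by its sampled tilt (scaled by focal length) reproduces the image-motion component of turbulence at far lower cost than a phase-screen propagation.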
Energy conservation: a forgotten property of the turbulent point spread function
Energy conservation is an essential feature of optical waves propagating through refractive turbulence. It has been well
understood for almost 30 years that energy conservation has a very important consequence for the fluctuations in the
images of incoherent objects observed through turbulence: the image of uniformly illuminated areas of
the object does not scintillate. As a consequence, the low-contrast parts of the scene exhibit weak fluctuations even for
very strong turbulence, whereas scintillations near sharp edges can be strong even for weak turbulence. The energy
conservation property of the turbulent Point Spread Function (PSF) is essential for modeling turbulent image
distortions, both for the development of image processing techniques and for simulations of turbulent imaging.
However, it is completely neglected in the current literature on turbulent imaging theory and modeling.
We discuss the relations between energy conservation and anisoplanatism for the most common turbulence imaging
models. Our analysis reveals that the only isoplanatic authentic turbulent PSF that is compliant with energy conservation
corresponds to the thin aperture-plane phase screen model of turbulence. This implies that for near-the-ground
imaging, and even for astronomical-type imaging under strong turbulence conditions, the turbulent PSF has to be
modeled as a random function of four arguments with certain constraints.
We show some practical ways in which the three functional constraints on the turbulent PSF (nonnegative values, finite
bandwidth, and energy conservation) can be satisfied in the practical generation of turbulent PSFs.
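One simple way to impose the three constraints named above when generating synthetic PSFs is alternating projection. The following sketch (cutoff fraction and iteration count are arbitrary choices, not taken from the paper) clips to nonnegativity, zeroes the spectrum outside a circular passband, and renormalizes to unit sum:

```python
import numpy as np

def constrain_psf(psf, cutoff_frac=0.25, iters=10):
    """Alternating projections onto the three PSF constraints named in
    the abstract: nonnegativity, finite bandwidth (spectrum zeroed
    beyond a cutoff), and energy conservation (unit integral). A sketch
    of one practical way to generate admissible random turbulent PSFs."""
    n = psf.shape[0]
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    passband = np.hypot(fy, fx) <= cutoff_frac
    p = psf.astype(float)
    for _ in range(iters):
        p = np.maximum(p, 0.0)                 # nonnegative values
        P = np.fft.fft2(p) * passband          # finite bandwidth
        p = np.fft.ifft2(P).real
        p = np.maximum(p, 0.0)
        p /= p.sum()                           # energy conservation
    return p
```

Because the final normalization preserves the preceding clip, every returned PSF is nonnegative and integrates to one, while its spectrum is confined to the chosen passband up to the clipping step.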
Long-term measurements of atmospheric point-spread functions over littoral waters as determined by atmospheric turbulence
During the FATMOSE trial, held over the False Bay (South Africa) from November 2009 until October 2010, day and
night (24/7) high resolution images were collected of point sources at a range of 15.7 km. Simultaneously, data were
collected on atmospheric parameters relevant to the turbulence conditions: air and sea temperature, wind speed,
relative humidity, and the refractive-index structure parameter Cn^2. The data provide statistical information on the
mean value and the variance of the atmospheric point spread function and the associated modulation transfer function
during series of consecutive frames. This information allows the prediction of the range performance for a given sensor,
target and atmospheric condition, which is of great importance for the user of optical sensors in related operational areas
and for the developers of image processing algorithms. In addition the occurrence of "lucky shots" in series of frames is
investigated: occasional frames with locally small blur spots. The simultaneously measured short exposure blur and the
beam wander are compared with simultaneously collected scintillation data along the same path and with the Cn^2 data from a
locally installed scintillometer. By using two vertically separated sources, the correlation is determined between the
beam wander in their images, providing information on the spatial extension of the atmospheric turbulence (eddy size).
Examples are shown of the appearance of the blur spot, including skewness and astigmatism effects, which manifest
themselves in the third moment of the spot and its distortion. An example is given of an experiment for determining the
range performance for a given camera and a bar target on an outgoing boat in the False Bay.
Hyperspectral image turbulence measurements of the atmosphere
A Forward Looking Interferometer (FLI) sensor has the potential to be used as a means of detecting aviation hazards in
flight. One of these hazards is mountain wave turbulence. The results from a data acquisition activity at the University
of Colorado's Mountain Research Station will be presented here. Hyperspectral datacubes from a Telops Hyper-Cam
are being studied to determine if evidence of a turbulent event can be identified in the data. These data are then being
compared with D&P TurboFT data, which are collected at a much higher time resolution and broader spectrum.
Smart Processing I: Joint Session with Conference 8353
Infrared detector size: how low should you go?
In the past five years, significant progress has been accomplished in the reduction of infrared detector pitch and detector
size. Recently, longwave infrared detectors in limited quantities have been fabricated with a detector pitch of 5
micrometers. Detectors with 12 micrometer pitch are now becoming standard in both the midwave infrared (MWIR)
and longwave infrared (LWIR) sensors. Persistent surveillance systems are pursuing 10 micrometer detector pitch in
large format arrays. The fundamental question that most system designers and detector developers desire an answer to
is: "how small can you produce an infrared detector and still provide value in performance?" If a system is mostly
diffraction-limited, then developing a smaller detector is of limited benefit. If a detector is so small that it does not
collect enough photons to produce a good image, then a smaller detector is of little benefit. Resolution and signal-to-noise
ratio are the primary characteristics of an imaging system that contribute to targeting, pilotage, search, and other human
warfighting task performance. In this paper, we investigate the task of target discrimination range performance as a
function of detector size/pitch. Results for LWIR and MWIR detectors are provided; they depend on a large number of
assumptions that we consider reasonable.
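A common way to frame the "how small" question is the F*lambda/d metric (f-number times wavelength divided by detector pitch), which compares the diffraction blur to the sampling pitch. The sketch below uses illustrative numbers, not values from the paper:

```python
def f_lambda_over_d(f_number: float, wavelength_um: float, pitch_um: float) -> float:
    """F*lambda/d: values near 2 indicate a Nyquist-sampled
    diffraction-limited system, values well below 1 a detector-limited
    (undersampled) system where smaller pixels still help."""
    return f_number * wavelength_um / pitch_um

# e.g. a hypothetical f/2.2 LWIR system at 10 um with 12 um pitch
print(f_lambda_over_d(2.2, 10.0, 12.0))
```

By this measure, shrinking the pitch pays off until F*lambda/d approaches the diffraction-limited regime, at which point smaller detectors mainly trade away photon collection.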
Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers
The use of the intensity change and line-of-sight (LOS) change concepts has previously been documented in the open
literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM).
The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity
and kinematic behavior for a generic imaging IR (IIR) seeker model with the underlying goal of obtaining a better
understanding of how expendable IRCM can be used to defeat the latest generation of seekers.
The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The
algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement
Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR
missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of
guided threats.
The investigation evaluates the algorithm and tests their robustness by presenting the results of batch simulation runs of
surface-to-air (SAM) and air-to-air (AAM) IIR missiles engaging a non-maneuvering target platform equipped with
expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time,
ratio thresholds, and hold time can influence the outcome of an engagement.
Smart Processing II: Joint Session with Conference 8353
Turbulence compensation: an overview
In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric
conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of
turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is
of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for
the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated
hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each
approach the pros and cons are given and it is indicated for which scenario this approach is useful. In more detail we
describe the turbulence compensation methods TNO has developed in recent years and place them in the context of the
different turbulence compensation approaches and TNO's turbulence compensation roadmap. Furthermore, we look
forward and indicate the upcoming challenges in the field of turbulence compensation.
A real-time atmospheric turbulence mitigation and super-resolution solution for infrared imaging systems
Imagery acquired with modern imaging systems is susceptible to a variety of degradations, including blur from the point
spread function (PSF) of the imaging system, aliasing from undersampling, blur and warping from atmospheric
turbulence, and noise. A variety of image restoration methods have been proposed that estimate an improved image by
processing a sequence of these degraded images. In particular, multi-frame image restoration has proven to be a
particularly powerful tool for atmospheric turbulence mitigation (TM) and super-resolution (SR). However, these
degradations are rarely addressed simultaneously using a common algorithm architecture, and few TM or SR solutions
are capable of performing robustly in the presence of true scene motion, such as moving dismounts. Still fewer TM or
SR algorithms have found their way into practical real-time implementations. In this paper, we describe a new L-3 joint
TM and SR (TMSR) real-time processing solution and demonstrate its capabilities. The system employs a recently
developed versatile multi-frame joint TMSR algorithm that has been implemented using a real-time, low-power FPGA
processor system. The L-3 TMSR solution can accommodate a wide spectrum of atmospheric conditions and can
robustly handle moving vehicles and dismounts. This novel approach unites previous work in TM and SR and also
incorporates robust moving object detection. To demonstrate the capabilities of the TMSR solution, results using field
test data captured under a variety of turbulence levels, optical configurations, and applications are presented. The
performance of the hardware implementation is presented, and we identify specific insertion paths into tactical sensor
systems.
Turbulence degradation and mitigation performance for handheld weapon ID
Atmospheric turbulence can severely limit the range performance of state-of-the-art large aperture imaging sensor
systems, specifically those intended for long range ground to ground target identification. Simple and cost-effective
mitigation solutions which operate in real-time are desired. Software-based post-processing techniques are attractive as
they lend themselves to easy implementation and integration into the back-end of existing sensor systems. Recently,
various post-processing algorithms to mitigate turbulence have been developed and implemented in real-time hardware.
To determine their utility in Army-relevant tactical scenarios, an assessment of the impact of the post processing on
observer performance is required. In this paper, we test a set of representative turbulence mitigation algorithms on field
collected data of human targets carrying various handheld objects in varying turbulence conditions. We use a controlled
human perception test to assess handheld weapon identification performance before and after turbulence mitigation post-processing.
In addition, novel image analysis tools are implemented to estimate turbulence strength from the scene.
Results of this assessment will lead to recommendations on cost-effective turbulence mitigation strategies suitable for
future sensor systems.
Patch-based local turbulence compensation in anisoplanatic conditions
Infrared imagery over long ranges is hampered by atmospheric turbulence effects, leading to spatial resolutions worse
than expected from a diffraction-limited sensor system. This diminishes the recognition range, and it is therefore important
to compensate for visual degradation due to atmospheric turbulence. The amount of turbulence varies spatially under
anisoplanatic conditions, with the isoplanatic angle depending on atmospheric conditions, and it also varies
significantly in time. In this paper a method is proposed that performs turbulence compensation using a
patch-based approach. In each patch the turbulence is considered to be approximately spatially and temporally constant.
Our method utilizes multi-frame super-resolution, which incorporates local registration, fusion and deconvolution of the
data and also can increase the resolution. This makes our method especially suited to use under anisoplanatic conditions.
In our paper we show that our method is capable of compensating the effects of mild to strong turbulence conditions.
Targets, Background, and Atmospherics III
High fidelity simulations of infrared imagery with animated characters
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of
characters. Simplified rendering methods based on computer graphics can be used to overcome these
limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of
animated people in terrain.
Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models,
these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions
match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that,
together with the terrain model, are used to produce high fidelity IR imagery of people or crowds.
For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed.
There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated
characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility
of HLAS to add animation into an HLA enabled sensor system simulation framework.
Infrared signature measurements with the ABB dual-band hyperspectral imager
Show abstract
MR-i is an imaging Fourier-Transform spectro-radiometer. This field instrument generates spectral
datacubes in the MWIR and LWIR. It is designed to acquire the spectral signatures of rapidly evolving
events.
The spectroradiometer is modular. The two output ports of the instrument can be populated with different
combinations of detectors (imaging or not). For instance, to measure over a broad spectral range one output
port can be equipped with a LWIR camera while the other port is equipped with a MWIR camera. No
dichroic filters are used to split the bands, hence enhancing the sensitivity. Both ports can be equipped with
cameras imaging the same spectral range but set at different sensitivity levels in order to increase the
measurement dynamic range and avoid saturation of bright parts of the scene while simultaneously
obtaining good measurements of the faintest parts of the scene. Various telescope options can be used for
the input port.
Comparison of image restoration algorithms in the context of horizontal-path imaging
Show abstract
We have looked at applying various image restoration techniques used in astronomy to the problem of imaging through
horizontal-path turbulence. The input data come from an imaging test over a 2.5 km path. The point-spread function
(PSF) is estimated directly from the data and supplied to the deconvolution algorithms. We show the usefulness of this
approach, together with the analytical form of the turbulent PSF due to D. Fried, for reference-less imaging
scenarios.
Modeling I
Modeling boost performance using a two dimensional implementation of the targeting task performance metric
Show abstract
Using post-processing filters to enhance image detail, a process commonly referred to as boost, can significantly affect
the performance of an EO/IR system. The US Army's target acquisition models currently use the Targeting Task
Performance (TTP) metric to quantify sensor performance. The TTP metric accounts for each element in the system
including: blur and noise introduced by the imager, any additional post-processing steps, and the effects of the Human
Visual System (HVS). The current implementation of the TTP metric assumes spatial separability, which can introduce
significant errors when the TTP is applied to systems using non-separable filters. To accurately apply the TTP metric to
systems incorporating boost, we have implemented a two-dimensional (2D) version of the TTP metric. The accuracy of the
2D TTP metric was verified through a series of perception experiments involving various levels of boost. The 2D TTP
metric has been incorporated into the Night Vision Integrated Performance Model (NV-IPM) allowing accurate system
modeling of non-separable image filters.
Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos
Show abstract
Unmanned Aircraft Systems (UAS) have been widely applied to military reconnaissance and surveillance by
exploiting the information collected from the digital imaging payload. However, the analysis of UAS video is
frequently limited by motion blur; by the frame-to-frame movement induced by aircraft roll, wind gusts, and less-than-ideal
atmospheric conditions; and by the noise inherent in the image sensors. Super-resolution mosaicking of
low-resolution UAS surveillance video frames is therefore an important task for UAS video processing and a pre-processing
step for further effective image understanding.
Here we develop a novel super-resolution framework that does not require the construction of sparse
matrices. The method applies image operators in the spatial domain and adopts an iterated back-projection method to
construct super-resolution mosaics from UAS surveillance video frames. The Steepest Descent, Conjugate
Gradient, and Levenberg-Marquardt methods are used to numerically solve the nonlinear optimization problem
in the super-resolution mosaicking model. A quantitative comparison of the computation time and visual performance of the
three numerical methods is performed. The Levenberg-Marquardt algorithm provides a
numerical solution to the least-squares curve fitting that avoids the time-consuming computation of the inverse of the
pseudo-Hessian matrix in a regular singular value decomposition (SVD). The Levenberg-Marquardt method, interpolating
between the Gauss-Newton algorithm (GNA) and the method of gradient descent, is efficient, robust, and easy to
implement. The results obtained in our simulations show a great improvement in the resolution of the low-resolution
mosaic, up to 47.54 dB for synthetic images, and a considerable visual improvement in sharpness and detail for
real UAS surveillance frames. Convergence is generally reached in no more than ten iterations.
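The iterated back-projection step at the core of this framework can be sketched compactly. The following is a single-frame toy version assuming a simple box-average observation model and integer down-sampling (not the paper's actual operators or mosaicking step):

```python
import numpy as np

def downsample(hr, factor):
    """Simulate low-resolution acquisition: average factor x factor blocks."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def iterated_back_projection(lr, factor, n_iter=10, step=1.0):
    """Refine a high-resolution estimate by back-projecting the LR residual."""
    hr = np.kron(lr, np.ones((factor, factor)))   # initial up-sampled guess
    for _ in range(n_iter):
        residual = lr - downsample(hr, factor)    # disagreement with observation
        hr += step * np.kron(residual, np.ones((factor, factor)))
    return hr

truth = np.add.outer(np.arange(8.0), np.arange(8.0))
lr = downsample(truth, 2)
hr = iterated_back_projection(lr, 2)
```

In the paper's setting the residual would be warped back through the estimated frame-to-mosaic motion before being added, which is where the optimization methods compared above come in.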
Modeling II
Human target acquisition performance
Show abstract
The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms & armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.
Validating an analytical technique for calculating detection probability given time-dependent search parameters
Show abstract
The search problem discussed in this paper is easily stated: given search parameters (P∞, τ) that are known
functions of time, calculate how the probability that a single observer acquires a target grows with time. This
problem was solved analytically in a previous paper. To investigate the validity of the solution, videos generated
using NVIG software show the view from a vehicle traveling at two different speeds along a flat, straight road.
Small, medium, and large equilateral triangles with the same gray level as the road but without texture were
placed at random positions on a textured road, and military observers were tasked to find the targets. Analysis of this
video in perception experiments yields the experimental probability of detection as a function of time. Static perception
tests enabled P∞ and τ to be measured as a function of range for the small, medium, and large triangles. Since range is a known function of time, P∞ and τ were known as functions of time. This enabled the calculation of modeled
detection probabilities, which were then compared with measured detection probabilities.
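For constant parameters, the classical model is P(t) = P∞(1 − e^(−t/τ)). As a purely illustrative numerical stand-in for the time-dependent case, one can integrate the hazard-rate generalization dP/dt = (P∞(t) − P(t))/τ(t); note this ODE, and all numbers below, are our assumptions for illustration, not the paper's analytical solution:

```python
import numpy as np

def detection_probability(p_inf, tau, dt):
    """Forward-Euler integration of dP/dt = (P_inf(t) - P) / tau(t)."""
    p, history = 0.0, []
    for pi, ta in zip(p_inf, tau):
        p += dt * (pi - p) / ta
        history.append(p)
    return np.array(history)

dt = 0.1
t = np.arange(0, 30, dt)
p_inf = 0.9 / (1 + np.exp(-(0.5 * t - 7)))  # hypothetical: P_inf grows as range closes
tau = np.full_like(t, 3.0)                  # hypothetical constant mean acquisition time
p = detection_probability(p_inf, tau, dt)
```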
A standard data set for performance analysis of advanced IR image processing techniques
Show abstract
Modern IR cameras are increasingly equipped with built-in advanced (often non-linear) image and signal processing
algorithms (like fusion, super-resolution, dynamic range compression etc.) which can tremendously influence
performance characteristics. Traditional approaches to range performance modeling are of limited use for these types of
equipment. Several groups have tried to overcome this problem by producing a variety of imagery to assess the impact of
advanced signal and image processing. Mostly, this data was taken from classified targets and/ or using classified imager
and is thus not suitable for comparison studies between different groups from government, industry and universities. To
ameliorate this situation, NATO SET-140 has undertaken a systematic measurement campaign at the DGA technical
proving ground in Angers, France, to produce an openly distributable data set suitable for the assessment of fusion,
super-resolution, local contrast enhancement, dynamic range compression and image-based NUC algorithm
performance. The imagery was recorded for different target / background settings, camera and/or object movements and
temperature contrasts. MWIR, LWIR and Dual-band cameras were used for recording and were also thoroughly
characterized in the lab. We present a selection of the data set together with examples of their use in the assessment of
super-resolution and contrast enhancement algorithms.
Benchmarking image fusion algorithm performance
Show abstract
Registering two images produced by two separate imaging sensors having different detector sizes and fields of
view requires one of the images to undergo transformation operations that may cause its overall quality to
degrade with regards to visual task performance. This possible change in image quality could add to an already
existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered
outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion
algorithm performs should be base lined to whether the fusion algorithm retained the performance benefit
achievable by each independent spectral band being fused. This study investigates an identification perception
experiment using a simple and intuitive process for discriminating between image fusion algorithm
performances. The results from a classification experiment using information theory based image metrics is
presented and compared to perception test results. The results show an effective performance benchmark for
image fusion algorithms can be established using human perception test data. Additionally, image metrics have
been identified that either agree with or surpass the performance benchmark established.
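One common information-theoretic fusion score is the total mutual information the fused image shares with each source band. A histogram-based sketch (our illustration; the study's exact metrics are not specified in the abstract):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # skip empty cells (0 * log 0 := 0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi_score(src_a, src_b, fused):
    """Total MI the fused image shares with each of its two sources."""
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```

A fused image that preserves structure from both bands scores higher than one that discards either source.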
Modeling III
Metrics for image-based modeling of target acquisition
Show abstract
This paper presents an image-based system performance model. The image-based system model uses an image metric to
compare a given degraded image of a target, as seen through the modeled system, to the set of possible targets in the
target set. This is repeated for all possible targets to generate a confusion matrix. The confusion matrix is used to
determine the probability of identifying a target from the target set when using a particular system in a particular set of
conditions. The image metric used in the image-based model should correspond closely to human performance. The
image-based model performance is compared to human perception data on Contrast Threshold Function (CTF) tests,
naked eye Triangle Orientation Discrimination (TOD), and TOD including an infrared camera system.
Image-based system performance modeling is useful because it allows modeling of arbitrary image processing. Modern
camera systems include more complex image processing, much of which is nonlinear. Existing linear system models,
such as the TTP metric model implemented in NVESD models such as NV-IPM, assume that the entire system is linear
and shift invariant (LSI). The LSI assumption makes modeling nonlinear processes difficult, such as local area
processing/contrast enhancement (LAP/LACE), turbulence reduction, and image fusion.
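As a toy illustration of the confusion-matrix step (counts are made up, not from the paper), the modeled probability of identification is simply the fraction of metric-based matches that fall on the diagonal:

```python
import numpy as np

# Hypothetical 4-target confusion matrix: entry [i, j] counts how often the
# degraded image of target i was matched (by the image metric) to reference j.
confusion = np.array([[18, 1, 1, 0],
                      [2, 15, 2, 1],
                      [0, 3, 16, 1],
                      [1, 1, 2, 16]])
p_id = np.trace(confusion) / confusion.sum()   # correct matches / all matches
```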
Measuring the performance of super-resolution reconstruction algorithms
Show abstract
For many military operations situational awareness is of great importance. This situational awareness and related
tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic.
Super resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order
to judge these algorithms and the conditions under which they operate best, performance evaluation methods
are necessary. This evaluation, however, is not straightforward for several reasons. First, frequency-based
evaluation techniques alone will not provide a correct answer, because they are unable to discriminate
between structure-related and noise-related effects. Second, most super-resolution packages perform additional
image enhancement such as noise reduction and edge enhancement; as these algorithms improve
the results, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available,
so evaluating the differences between the estimated high-resolution image
and its ground truth is not straightforward. Fourth, super-resolution reconstruction can introduce
various artifacts that are not known beforehand and hence are difficult to evaluate.
In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms.
Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery.
Other evaluation techniques can be evaluated on both synthetic and natural images (real camera data). The
result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution
reconstruction algorithms.
Weighted contrast metric for imaging system performance
Show abstract
There have been significant improvements in the image quality metrics used in the NVESD model suite in recent
years. The introduction of the Targeting Task Performance (TTP) metric to replace the Johnson criteria yielded
significantly more accurate predictions for under-sampled imaging systems in particular. However, there are
certain cases which cause the TTP metric to predict optimistic performance. In this paper a new metric for
predicting performance of imaging systems is described. This new weighted contrast metric is characterized as
a hybrid of the TTP metric and Johnson criteria. Results from a number of historical perception studies are
presented to compare the performance of the TTP metric and Johnson criteria to the newly proposed metric.
Improved fusing infrared and electro-optic signals for high-resolution night images
Show abstract
Electro-optic (EO) images exhibit high resolution and low noise, while it is a challenge to
distinguish objects with infrared (IR), especially objects with similar temperatures. In earlier work, we proposed a
novel framework for IR image enhancement based on information (e.g., edges) from EO images. Our framework
superimposed the detected edges of the EO image on the corresponding transformed IR image. This
framework produced better-resolution IR images that help distinguish objects at night. For our IR image system, we
used the theoretical point spread function (PSF) proposed by Russell C. Hardie et al., which is composed of the
modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of
diffraction-limited optics. In addition, we designed an inverse filter based on this PSF to transform the IR image.
In this paper, blending the detected edge of the EO image with the corresponding transformed IR image and the original
IR image is the principal idea for improving the previous framework. This improved framework requires four main steps:
(1) inverse filter-based IR image transformation, (2) image edge detection, (3) image registration, and (4) blending of
the corresponding images. Simulation results show that the blended IR images have better quality than the superimposed
images generated under the previous framework. Based on the same steps, the simulation results also show a
blended IR image of better quality when only the original IR image is available.
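The inverse-filtering step can be sketched as a regularized (Wiener-style) frequency-domain filter. Here a Gaussian stands in for the combined detector-MTF/diffraction-OTF PSF; the Hardie PSF used in the paper is more elaborate:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Centered 2-D Gaussian as a stand-in for the combined system PSF."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_inverse(img, psf, nsr=1e-3):
    """Frequency-domain regularized inverse filter: H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0                      # simple synthetic test scene
psf = gaussian_psf(32, 1.5)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_inverse(blurred, psf)
```

The NSR term keeps the filter from amplifying frequencies where the OTF (and hence the signal) has vanished.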
Modeling IV
Locally adaptive contrast enhancement and dynamic range compression
Show abstract
In surveillance applications, the visibility of details within an image is necessary to ensure
detection. However, bright spots in images can occupy most of the dynamic range of the
sensor, causing lower energy details to appear dark and difficult to see. In addition, shadows
from structures such as buildings or bridges obscure features within the image, further limiting
contrast. Dynamic range compression and contrast enhancement algorithms can be used to
improve the visibility of these low energy details. In this paper, we propose a locally adaptive
contrast enhancement algorithm based on the multi-scale wavelet transform to compress the
dynamic range of images as well as increase the visibility of details obscured by shadows.
Using an edge detector as the mother wavelet, this algorithm operates by increasing the gain of
low energy gradient magnitudes provided by the wavelet transform, while simultaneously
decreasing the gain of higher energy gradient magnitudes. Limits on the amount of gain
imposed are set locally to prevent the over-enhancement of noise. The results of using the
proposed method on aerial images show that this method outperforms common methods in its
ability to enhance small details while simultaneously preventing ringing artifacts and noise
over-enhancement.
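A heavily simplified, single-scale sketch of the core idea: split the image into a low-pass band and a detail band, then apply a concave gain so low-energy details are amplified more than high-energy ones. The paper does this per scale of a wavelet transform with an edge-detector mother wavelet and local noise limits; none of that is reproduced here:

```python
import numpy as np

def enhance_details(img, alpha=0.6, eps=1e-9):
    """Boost weak details relative to strong ones via a concave magnitude remap."""
    pad = np.pad(img, 1, mode='edge')
    low = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0   # 3x3 box low-pass
    detail = img - low
    m = np.abs(detail)
    mmax = m.max() + eps
    remapped = mmax * (m / mmax) ** alpha    # concave: small m gains most
    return low + detail * (remapped / (m + eps))
```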
Impact of waveband on target-to-background contrast of camouflage
Show abstract
The purpose of military camouflage is to make an object hard to see, or to confuse a
hostile observer as to its true nature. Perfect camouflage would do this by making itself
invisible against its surroundings. Currently, the best camouflage attempts to make the
target appear to be a natural part of the background. An imperfect but still very useful
metric of this similarity between target and background is the at-range contrast difference
between them, and the smaller that is, the harder it is to discern the camouflaged object
and the longer it takes to determine its true nature. The intrinsic contrast difference in the
reflective wavebands (i.e., visible through short wave infrared), is a function of the
spectral nature of the scene illumination and the spectral reflectivity of the camouflage
and background. Until recently, military camouflages have been typically designed to
work best in the visible band against one of the generic background types such as
woodland, desert, arctic, etc., without significant attention paid to performance against a
different background, type of scene illumination, or different waveband. This paper
documents an investigation into the dependence of the contrast difference behavior of
camouflage as a function of waveband, background, and scene illumination using battle
dress uniforms (BDU) as material.
Performance modeling and assessment of infrared-sensors applicable for TALOS project UGV as a function of target/background and environmental conditions
Show abstract
TALOS (Transportable and Autonomous Land bOrder Surveillance system - www.talos-border.eu) is an international
research project co-funded by the EU 7th Framework Programme under the Security priority. The main objective of the TALOS
project is to develop and field-test an innovative concept for a mobile, autonomous system for protecting European land
borders. Unmanned Ground Vehicles (UGVs) are major components of the TALOS project. The UGVs will be equipped
with long-range radar for the detection of moving vehicles and people, as well as long-focal-length EO/IR sensors allowing
the operator to recognize and identify the detected objects of interest. Furthermore, medium-focal-length IR sensors are
used to allow the operator to drive the UGV. These sensors must fulfill mission requirements for widely varying
environmental conditions (backgrounds, topographic characteristics, climatic conditions, weather conditions) ranging
from Finland in the north to Bulgaria and Turkey in the south of Europe. An infrared sensor performance model was
developed at ONERA in order to evaluate target detection, recognition, and identification ranges for several simulation
cases representative of the whole environmental variability domain. Analysis of the results allows assessment of the operability
domain of the infrared sensors. This paper presents the infrared sensor performance evaluation methodology and a
synthesis of a large number of simulation results applied to two infrared sensors of interest: a medium/long-range
cooled MWIR sensor for observation and a short/medium-range uncooled LWIR sensor for navigation.
Evaluating the efficiency of a night-time, middle-range infrared sensor for applications in human detection and recognition
Show abstract
In law enforcement and security applications, the acquisition of face images is critical in producing key trace
evidence for the successful identification of potential threats. In this work, we first use a near-infrared (NIR)
sensor designed with the capability to acquire images at middle-range stand-off distances at night. Then, we
determine the maximum stand-off distance at which face recognition techniques can be used to efficiently recognize individuals at night, at ranges from 30 to approximately 300 ft. The focus of the study is on establishing
the maximum capability of the mid-range sensor to acquire the good-quality face images necessary for recognition.
For the purpose of this study, a database of 103 subjects in the visible (baseline) and NIR spectra was assembled
and used to illustrate the challenges associated with the problem. In order to perform matching studies, we use
multiple face recognition techniques and demonstrate that certain techniques are more robust in terms of recognition performance when using face images acquired at different distances. Experiments show that matching
NIR face images at longer ranges (i.e., greater than about 300 feet or 90 meters using our camera system) is a
very challenging problem that requires further investigation.
Compensating internal temperature effects in uncooled microbolometer-based infrared cameras
Show abstract
In this paper the effects of internal temperature on the response of uncooled microbolometer cameras have
been studied. To this end, different temperature profiles steering the internal temperature of the cameras have
been generated, and black-body radiator sources have been employed as time- and temperature-constant radiation
inputs. The analysis conducted on the empirical data has shown a statistical correlation between the
camera's internal temperature and the fluctuations in the read-out data. Thus, when measurements of the
internal temperature are available, effective methods for compensating the fluctuations in the read-out data can
be developed. This claim has been tested by developing a signal processing scheme, based on a polynomial
model, to compensate the output of infrared cameras equipped with amorphous-silicon and vanadium-oxide
microbolometers.
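A minimal sketch of such a polynomial compensation scheme; the drift law, coefficients, and noise level below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t_int = 25 + 10 * np.sin(np.linspace(0, 3 * np.pi, n))   # internal temperature (deg C)
scene = 100.0                                            # constant black-body input
drift = 0.8 * t_int + 0.02 * t_int ** 2                  # hypothetical drift law
readout = scene + drift + rng.normal(0, 0.1, n)

# Calibration: fit a polynomial of the readout against internal temperature,
# then subtract the predicted drift while keeping the mean signal level.
coeffs = np.polyfit(t_int, readout, deg=2)
compensated = readout - np.polyval(coeffs, t_int) + readout.mean()
```

In practice the polynomial would be fitted once against the black-body reference and then applied to live data whenever the internal temperature sensor is read.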
A standardized way to select, evaluate, and test an analog-to-digital converter for ultrawide bandwidth radiofrequency signals based on user's needs, ideal, published,and actual specifications
Show abstract
The most important adverse impact on Electronic Warfare (EW) simulation is that the number of signal sources that
can be tested simultaneously is relatively small. When the number of signal sources increases, the analog hardware
complexity and costs grow on the order of N², since the number of connections among N components is O(N²) and the
signal communication is bi-directional. To solve this problem, digitization of the signal is suggested. In digitizing a
radiofrequency signal, an Analog-to-Digital Converter (ADC) is widely used. Most research studies on ADCs are
conducted from the designer's or test engineer's perspective; some are conducted from the market's perspective.
This paper presents a generic way to select, evaluate, and test ultra-high-bandwidth COTS ADCs and to generate
requirements for digitizing continuous-time signals from the perspective of the user's needs. Based on the user's needs, as well as
the vendor's published, ideal, and actual specifications, a decision can be made in selecting a proper ADC for an application.
To support our arguments and illustrate the methodology, we evaluate a Tektronix TADC-1000, an 8-bit,
12-gigasample-per-second ADC. This project is funded by the JEWEL lab, NAWCWD at Point Mugu, CA.
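One standard dynamic test in such an evaluation is the effective number of bits (ENOB) derived from the SINAD of a coherently sampled sine-wave capture. A sketch with an ideal 8-bit quantizer (our illustration, not TADC-1000 measurement data):

```python
import numpy as np

def sinad_db(capture, signal_bin):
    """SINAD from a coherently sampled sine-wave capture (DC excluded)."""
    spec = np.abs(np.fft.rfft(capture)) ** 2
    signal = spec[signal_bin]
    noise = spec[1:].sum() - signal        # everything but DC and the carrier
    return 10 * np.log10(signal / noise)

def enob(sinad):
    """Standard conversion: ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit."""
    return (sinad - 1.76) / 6.02

n, cycles = 4096, 101                      # prime cycle count -> coherent sampling
x = np.sin(2 * np.pi * cycles * np.arange(n) / n)
q = np.round(x * 127) / 127                # ideal 8-bit mid-tread quantizer
bits = enob(sinad_db(q, cycles))
```

An ideal 8-bit converter yields SINAD near 6.02·8 + 1.76 = 49.92 dB; a real device's shortfall against this figure is exactly the gap between published and actual specifications that the paper's methodology probes.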
Poster Session
Determining detection, recognition, and identification ranges of thermal cameras on the basis of laboratory measurements and TTP model
Show abstract
The TTP (Targeting Task Performance) model is widely used for the estimation of the theoretical performance of observation
devices. It is used, for example, in the NVTERM software and makes it possible to determine the detection, recognition,
and identification ranges for standard target types on the basis of the known technical parameters of the analyzed device.
Many theoretical analyses concerning the TTP model can be found, as well as a few experimental field-test results. However,
the usability of the TTP model for the calculation of range parameters on the basis of laboratory test results has not been
widely analyzed. The paper presents an attempt to apply the TTP model to the estimation of the range parameters of thermal
cameras using laboratory measurements of camera properties. The test stand consists of an IR collimator, a
standard IR source, a set of test targets, and a computer with a data acquisition card. The method used for the measurement
of the aforementioned characteristics is described, as well as the algorithms used to estimate the range parameters of a
tested thermal camera using the TTP model.
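Once the TTP value V has been computed from laboratory measurements, range prediction follows from the target transfer probability function. A toy sketch assuming the commonly published NVESD form P = (V/V50)^E / (1 + (V/V50)^E) with E = 1.51 + 0.24(V/V50), and an invented 1/R falloff of V:

```python
import numpy as np

def ttp_probability(v, v50):
    """Target transfer probability function (assumed NVESD form)."""
    e = 1.51 + 0.24 * (v / v50)
    return (v / v50) ** e / (1 + (v / v50) ** e)

ranges = np.linspace(0.1, 5.0, 500)        # km, toy grid
v = 40.0 / ranges                          # invented falloff of resolvable TTP cycles
p = ttp_probability(v, v50=20.0)
r50 = ranges[np.argmin(np.abs(p - 0.5))]   # P = 0.5 where V = V50, i.e. R = 2 km here
```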
Testing of infrared image enhancing algorithm in different spectral bands
Show abstract
The paper presents the results of testing an infrared image quality enhancement algorithm based on histogram processing.
Tests were performed on real images registered in the NIR, MWIR, and LWIR spectral bands. Infrared images are a very
specific type of information: the perception and interpretation of such images depend not only on the radiative properties of
the observed objects and the surrounding scenery, but probably most of all on the skills and experience of the observer.
In practice, optimal camera settings and automatic temperature-range or contrast control do not guarantee
that the displayed images are optimal from the observer's point of view. The solution is image quality
enhancement algorithms based on digital image processing methods. Such algorithms can be implemented inside the camera or applied
later, after image registration. They must improve the visibility of low-contrast objects. They should also provide
effective dynamic contrast control, not only across the entire image but also selectively in specific areas, in order to maintain
optimal visualization of the observed scenery. In the paper one histogram equalization algorithm was tested. The adaptive nature
of the algorithm should assure a significant improvement in image quality and thus in the effectiveness of object
detection. A further requirement, and difficulty, is that it should be effective for any given thermal image and should
not cause visible image degradation in unpredictable situations. Thanks to its low complexity and real-time operation,
the tested algorithm is a promising alternative to very effective but complex algorithms.
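A common member of this algorithm family is plateau (clip-limited) histogram equalization, which keeps hot spots in thermal imagery from consuming the whole output range. A minimal sketch (not necessarily the algorithm tested in the paper):

```python
import numpy as np

def plateau_equalize(img, clip_frac=0.01):
    """Histogram equalization with a plateau limit on dominant bins."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    limit = max(1, int(clip_frac * img.size))
    clipped = np.minimum(hist, limit)            # cap bins (e.g. hot spots, sky)
    cdf = np.cumsum(clipped).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0] + 1e-12)
    lut = (255 * cdf).astype(np.uint8)           # gray-level remapping table
    return lut[img.astype(np.uint8)]
```

Lowering `clip_frac` pushes the mapping toward a linear stretch; raising it approaches ordinary histogram equalization.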
An experimental validation of the Gauss-Markov model for nonuniformity noise in infrared focal plane array sensors
Show abstract
The aim of this research is to experimentally validate a Gauss-Markov model, previously developed by our
group, for the non-uniformity parameters of infrared (IR) focal plane arrays (FPAs). The Gauss-Markov model
assumed that both the gain and the offset parameter at each detector are random state variables modeled by a
recursive discrete-time process. For simplicity, however, we have here regarded the gain parameter as a constant
and assumed that only the offset parameter follows a Gauss-Markov model. Experiments were conducted
at room temperature, and IR data were collected from black-body radiator sources using microbolometer-based
IR cameras operating in the 8-12 μm band. Next, well-known statistical techniques were used to analyze the offset
time series and determine whether the Gauss-Markov model truly fits the temporal dynamics of the offset. The
validity of the Gauss-Markov model for the offset parameter was tested at two time scales: seconds and minutes.
It is worth mentioning that the statistical analysis conducted in this work is key to providing mechanisms for
capturing the drift in the fixed-pattern noise parameters.
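A discrete-time Gauss-Markov (AR(1)) offset process and the standard lag-1 autocorrelation estimate of its coefficient can be simulated in a few lines; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true, sigma, n = 0.95, 0.5, 20000

# Gauss-Markov (first-order autoregressive) offset process:
# offset[k] = phi * offset[k-1] + white Gaussian noise
offset = np.empty(n)
offset[0] = 0.0
for k in range(1, n):
    offset[k] = phi_true * offset[k - 1] + rng.normal(0, sigma)

# Estimate the AR(1) coefficient from the lag-1 autocorrelation of the series
x = offset - offset.mean()
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
```

Applied to a measured offset time series, a lag-1 estimate close to a stable value (and residuals that look white) supports the Gauss-Markov hypothesis at that time scale.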
Modification of infrared signature of naval vessels
Show abstract
Every naval vessel can be detected and identified on the basis of its characteristics. The reduction of a vessel's signature, or
matching it to the surrounding environment, is one of the key tasks regarding survivability on a modern battlefield. The
typical coatings applied to the outer surfaces of vessels are various kinds of paints. Their purpose is to protect the hull
from the aggressive sea environment and to provide camouflage in the visual spectrum, as well as to scatter and deflect
microwave radiation. Apart from the microwave and visual bands, infrared is the most important spectral band used for detection
purposes. In order to obtain effective protection in the infrared, the thermal signature of a vessel is required. It is determined
on the basis of the thermal contrast between the vessel and the actual background and depends mostly on the radiant properties
of the hull. Such a signature can be modified by altering the apparent temperature values or the directions in which the
infrared radiation is emitted. The paper discusses selected methods of modifying a vessel's infrared signature and the
effectiveness of infrared camouflage. Theoretical analyses were preceded by experimental measurements.
Measurement-class infrared cameras and imaging spectroradiometers were used to determine the radiant
exitance from different surface types. Experiments were conducted in selected conditions, taking into account solar
radiation and radiation reflected from elements of the surrounding scenery. The theoretical analysis took into account the
angular radiant properties of the vessel hull and the attenuation of radiation passing through the atmosphere. The study was
performed in the MWIR and LWIR ranges.
Radiometric calibration for MWIR cameras
Show abstract
Korean Multi-purpose Satellite-3A (KOMPSAT-3A), which weighing about 1,000 kg is scheduled to be launched
in 2013 and will be located at a sun-synchronous orbit (SSO) of 530 km in altitude. This is Korea's rst satellite
to orbit with a mid-wave infrared (MWIR) image sensor, which is currently being developed at Korea Aerospace
Research Institute (KARI). The missions envisioned include forest re surveillance, measurement of the ocean
surface temperature, national defense and crop harvest estimate.
In this paper, we explain the MWIR scene generation software and atmospheric compensation techniques
for the infrared (IR) camera that we are currently developing. The MWIR scene generation software we have
developed takes into account sky thermal emission, path emission, target emission, sky solar scattering, and
ground reflection based on MODTRAN data. This software will be used to generate the radiance image at
the satellite camera, which requires an atmospheric compensation algorithm, and to validate the accuracy of
the temperatures obtained in our results.
The image visibility restoration algorithm is a method for removing the effect of the atmosphere between the
camera and an object. This algorithm works between the satellite and the Earth to predict the temperature of
an object corrupted by the Earth's atmosphere and solar radiation. Commonly, software packages such as
MODTRAN are used to model the atmosphere and compensate for the atmospheric effect. Our algorithm does
not require additional software to obtain the surface temperature. However, it requires tuning of the visibility
restoration parameters, and the precision of the result still needs to be studied.
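A scene model of this kind sums the surface-leaving terms, attenuated by the atmospheric path, with the additive path terms. The following is only a schematic sketch of such a radiance budget; the term names and values are illustrative assumptions, not the authors' software:

```python
from dataclasses import dataclass

@dataclass
class SceneTerms:
    """Band-integrated radiance terms (W m^-2 sr^-1) and a dimensionless
    path transmittance; in practice these would come from MODTRAN runs."""
    tau_path: float     # atmospheric transmittance along the line of sight
    l_target: float     # target self-emission leaving the surface
    l_reflected: float  # sky and solar radiance reflected by the ground
    l_path: float       # thermal emission of the atmospheric path itself
    l_scattered: float  # solar radiation scattered into the line of sight

def at_sensor_radiance(s: SceneTerms) -> float:
    # Surface-leaving terms are attenuated by the path; path emission and
    # scattered sunlight are added on top of the attenuated signal.
    return s.tau_path * (s.l_target + s.l_reflected) + s.l_path + s.l_scattered

# Illustrative numbers only
example = SceneTerms(tau_path=0.5, l_target=10.0, l_reflected=2.0,
                     l_path=3.0, l_scattered=1.0)
```

Atmospheric compensation is the inverse problem: recovering `l_target` (and hence the surface temperature) from the at-sensor radiance given estimates of the path terms.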
Infrared image segmentation with Gaussian mixture modeling
Infrared imaging allows surveillance during the night; thus, it has been widely used for military and security applications.
However, infrared images are generally characterized by low resolution, low contrast, and unclear texture with no
color information. Moreover, various types of noise and background clutter can degrade image quality. This paper
discusses multi-level segmentation for infrared images. The expectation-maximization algorithm is adopted to cluster
pixels on the basis of Gaussian mixture models. The use of the multi-level segmentation method enables the extraction of
human target regions from the background of the image. Several infrared images are processed to demonstrate the
effectiveness of the presented method.
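The clustering step described in this abstract, fitting a Gaussian mixture to pixel intensities with expectation-maximization and keeping the hotter component, can be sketched as follows. This is a minimal 1-D EM run on a synthetic frame, not the authors' implementation:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Minimal EM fit of a k-component 1-D Gaussian mixture to the samples
    in x; returns means, variances, weights, and hard labels."""
    mu = np.linspace(x.min(), x.max(), k)  # deterministic, spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, pi, resp.argmax(axis=1)

# Synthetic 64x64 "infrared" frame: cool background plus one warm region
rng = np.random.default_rng(1)
frame = rng.normal(80.0, 5.0, size=(64, 64))
frame[20:40, 20:40] = rng.normal(160.0, 5.0, size=(20, 20))

mu, var, pi, labels = em_gmm_1d(frame.ravel())
labels = labels.reshape(frame.shape)
target_mask = labels == np.argmax(mu)  # hotter component = candidate target region
```

Multi-level segmentation as described in the abstract corresponds to raising `k` above 2, so that sky, terrain, clutter, and human targets each claim their own intensity mode.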
Evaluation of the effects of some remarkable internal and external factors on an infrared seeker
Seekers are among the most important subsystems of guided aerial munitions, used both to detect and to track
prespecified targets within specific engagement scenarios. Among them, infrared (IR) types constitute a significant
portion of seekers. The performance characteristics of seekers depend on certain factors. Regarding the type of their
sources, these factors can be classified as internal and external. Sensitivity, resolution, optics, detectors, dome
geometry, and materials happen to be the most significant internal factors acting on IR seekers, while atmospheric
transmittance and visibility can be counted among the remarkable external factors. In this study, the basic effects of the
above-mentioned internal and external factors on the performance characteristics of a generic IR seeker are examined,
and corresponding interpretations are presented at the end of the work.