Proceedings Volume 9997

Target and Background Signatures II


Volume Details

Date Published: 8 December 2016
Contents: 8 Sessions, 26 Papers, 16 Presentations
Conference: SPIE Security + Defence 2016
Volume Number: 9997

Table of Contents

  • UAV Detection
  • Camouflage Effectiveness
  • Multi-/Hyperspectral Signatures
  • Image Interpretation I
  • Image Interpretation II
  • Signature and Scene Modelling
  • Poster Session
  • Front Matter: Volume 9997
UAV Detection
Detection of acoustic, electro-optical and RADAR signatures of small unmanned aerial vehicles
Alexander Hommes, Alex Shoykhetbrod, Denis Noetel, et al.
We investigated signatures of small unmanned aerial vehicles (UAVs) with different sensor technologies, ranging from acoustic antennas and passive and active optical imaging devices to small-size FMCW radar systems. These technologies have different advantages and drawbacks and can be applied in a complementary sensor network to benefit from their respective strengths.
Detection of mini-UAVs in the presence of strong topographic relief: a multisensor perspective
Urs Böniger, Beat Ott, Peter Wellig, et al.
Mini-UAVs are increasingly used for numerous civilian and military applications and have consequently been recognized as a growing potential threat. Counter-UAV solutions addressing the peculiarities of this class of UAVs have therefore recently received significant attention. Reliable detection, localization, identification and tracking represent a fundamental prerequisite for such counter-UAV systems. In this paper, we focus on the assessment of different sensor technologies and their ability to detect mini-UAVs in a representative rural Swiss environment. We conducted a field trial in August 2015, using different, primarily short-range, experimental sensor systems from armasuisse and selected research partners. After an introduction to the challenges of UAV detection in regions with strong topographic relief, we introduce the experimental setup and describe the key results of this joint experiment.
High infrasonic goniometry applied to the detection of a helicopter in a high activity environment
Vincent Chritin, Eric Van Lancker, Peter Wellig, et al.
A current concern of armasuisse is the feasibility of a fixed or mobile acoustic surveillance and recognition sensor network for permanently monitoring the noise immissions of a wide range of aerial activities, such as civil or military aviation, as well as other acoustic events such as transients and subsonic or sonic booms. This objective requires the ability to detect, localize and recognize a wide range of potential acoustic events of interest, possibly in the presence of parasitic acoustic events (for example, natural and industrial events on the ground) and high background noise (for example, close to urban or high-activity areas). This article presents a general discussion of this problem, based on 20 years of experience spanning a dozen research programs and internal studies by IAV, illustrated by one central experimental case study carried out within the framework of an armasuisse research program.
Numerical RCS and micro-Doppler investigations of a consumer UAV
Arne Schröder, Uwe Aulenbacher, Matthias Renker, et al.
This contribution gives an overview of recent investigations regarding the detection of consumer-market unmanned aerial vehicles (UAVs). The steadily increasing number of such drones gives rise to the threat of UAVs interfering with civil air traffic. Technologies for monitoring UAVs flying in restricted airspace, e.g., close to or even over airports, are urgently needed. One promising way of tracking drones is to employ radar systems. For the detection and classification of UAVs, knowledge of their radar cross section (RCS) and micro-Doppler signature is of particular importance. We have carried out numerical and experimental studies of the RCS and micro-Doppler signature of an example commercial drone in order to study its detectability with radar systems.
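For context, the rotor-blade micro-Doppler relation underlying such signatures can be sketched as follows; this is the standard textbook result, not a formula quoted from the paper.

```latex
% A scatterer at radius r on a blade rotating at rate \Omega, observed by a
% radar at carrier frequency f_c (wavelength \lambda = c / f_c), produces a
% sinusoidally modulated Doppler shift
\[
  f_{\mathrm{mD}}(t) = \frac{2 f_c}{c}\, r\, \Omega \cos(\Omega t + \varphi_0),
  \qquad
  f_{\mathrm{mD}}^{\max} = \frac{2\, v_{\mathrm{tip}}}{\lambda}.
\]
% Example: a blade tip speed of 50 m/s at \lambda = 3 cm (X-band) gives a
% maximum micro-Doppler shift of about 2 * 50 / 0.03 ~ 3.3 kHz.
```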
Spurious RF signals emitted by mini-UAVs
Ric (H. M. A.) Schleijpen, Vincent Voogt, Peter Zwamborn, et al.
This paper presents experimental work on the detection of spurious RF emissions of mini unmanned aerial vehicles (mini-UAVs). Many recent events have shown that mini-UAVs can be considered a potential threat to civil security. For this reason the detection of mini-UAVs has become of interest to the sensor community. The detection, classification and identification chain can take advantage of different sensor technologies. Apart from the signatures used by radar and electro-optical sensor systems, the UAV also emits RF signals. These RF signatures can be split into intentional signals for communication with the operator and unintentional RF signals emitted by the UAV. These unintentional or spurious RF emissions are very weak, but could be used to discriminate potential UAV detections from false alarms.

The goal of this research was to assess the potential of exploiting spurious emissions in the classification and identification chain for mini-UAVs. It was already known that spurious signals are very weak; the focus here was on whether the emission pattern could be correlated with the behaviour of the UAV. In this paper, experimental examples of spurious RF emissions for different types of mini-UAVs, and their correlation with the electronic circuits in the UAVs, are shown.
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
Vladan Popovic, Beat Ott, Peter Wellig, et al.
Recent technological advancements in hardware have made higher-quality cameras available. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and path of each object moving through a scene. The detection requires detailed pixel analysis between frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2

In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection during a field trial conducted in August 2015.
Visual signature reduction of unmanned aerial vehicles
Z. W. Zhong, Z. X. Ma, Jayawijayaningtiyas, et al.
With the emergence of unmanned aerial vehicles (UAVs) in multiple tactical defence missions, there is a need for an efficient visual signature suppression system for more effective stealth operation. One of our studies experimentally investigated the visual signature reduction of UAVs achieved through an active camouflage system. A prototype was constructed with newly developed operating software, Cloak, to provide active camouflage to the UAV model, and the reduction of visual signature was analysed. Tests of the devices mounted on UAVs were conducted in another study. A series of experiments tested both the concept and the prototype, in the laboratory and under normal environmental conditions. Results showed certain degrees of blending with the sky to create a camouflage effect. A mini-UAV made mostly of transparent plastic was also designed and fabricated; because of the transparency of the material, the visibility of this UAV in the air is very low and it is therefore difficult to detect. After redesigns and tests, a practical system to reduce the visibility of UAVs as viewed by human observers from the ground was eventually developed and evaluated during various outdoor tests. The scene target-to-background lightness contrast and the scene target-to-background colour contrast of the adaptive control system prototype were smaller than 10% at stand-off viewing distances of 20-50 m.
Evaluation of experimental UAV video change detection
J. Bartelsen, G. Saur, C. Teutsch
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is performed manually. Depending on the circumstances, such changes may indicate sabotage, terrorist activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger,1 and Saur et al.,2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition with respect to flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to rigorously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimate the precision and recall of automatically detected changes. In addition, we illustrate the results of our change detection approach on short, but real, video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.
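A minimal sketch of the homography-based registration plus difference analysis described above is given below. It is an illustrative reconstruction using OpenCV, not the authors' implementation; the thresholds and the ORB/RANSAC pipeline are assumptions.

```python
# Register an "after" frame onto a "before" frame via a feature-based
# homography, then threshold the absolute difference. Illustrative only.
import cv2
import numpy as np

def detect_changes(before, after, ratio=0.75, diff_thresh=40):
    """before, after: grayscale uint8 images of the same scene part."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(before, None)
    k2, d2 = orb.detectAndCompute(after, None)

    # Lowe ratio test to keep only reliable correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in matcher.knnMatch(d2, d1, k=2)
               if m.distance < ratio * n.distance]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Pixel-wise registration by warping, then a simple difference analysis.
    warped = cv2.warpPerspective(after, H, before.shape[::-1])
    diff = cv2.absdiff(before, warped)
    # Morphological opening suppresses isolated pixels from small
    # registration errors and sensor noise (parallax needs more than this).
    mask = cv2.morphologyEx((diff > diff_thresh).astype(np.uint8) * 255,
                            cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```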
Camouflage Effectiveness
Disruptive coloration in woodland camouflage: evaluation of camouflage effectiveness due to minor disruptive patches
We present results from an observer-based photosimulation study of generic camouflage patterns intended for military uniforms, in which three near-identical patterns were compared. All patterns had similar effective colour but differed in how the individual pattern patches were distributed across the target. This was done to test whether high-contrast (black) patches along the outline of the target would enhance survivability when exposed to human observers. In recent years it has been shown that disruptive coloration in the form of high-contrast patches can disturb an observer by creating false edges of the target and consequently enhance target survivability. This effect has been demonstrated in different forms in the animal kingdom, but not to the same extent for camouflaged military targets. The three patterns in this study were i) without disruptive patches, ii) with a disruptive patch along the outline of the head, and iii) with a disruptive patch on the outline of one shoulder. A large number of human observers assessed the three targets in 16 natural (woodland) backgrounds, with images of one target at a time shown on a high-definition PC screen. We found that the two patterns with minor disruptive patches were more difficult to detect in some (though not all) of the 16 scenes than the remaining pattern, and were also better in overall performance when all scenes were taken into account.
Modelling vehicle colour and pattern for multiple deployment environments
Eric Liggins, Ian R. Moorhead, Daniel A. Pearce, et al.
Military land platforms are often deployed around the world in very different climate zones. Procuring vehicles in a large range of camouflage patterns and colour schemes is expensive and may limit the environments in which they can be effectively used. This paper therefore reports a modelling approach for use in the optimisation and selection of a colour palette, to support operations in diverse environments and terrains. Three different techniques were considered, based upon the differences between vehicle and background in L*a*b* colour space, to predict the optimum (initially single) colour to reduce the vehicle signature in the visible band. Calibrated digital imagery was used as backgrounds and a number of scenes were sampled. The three approaches used and reported here are: a) background averaging behind the vehicle; b) background averaging in the area surrounding the vehicle; and c) use of the spatial extension to CIE L*a*b*, S-CIELAB (Zhang and Wandell, Society for Information Display Symposium Technical Digest, vol. 27, pp. 731-734, 1996). Results are compared with natural scene colour statistics. The models showed good agreement in the colour predictions for individual and multiple terrains or climate zones. A further development of the technique examines the effect of different patterns and colour combinations on the S-CIELAB spatial colour difference metric, when scaled for appropriate viewing ranges.
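A minimal sketch of approach (a), background averaging in CIE L*a*b*, is shown below. The helper names are hypothetical and this is not the authors' code; it assumes calibrated sRGB imagery and uses the CIE76 colour difference for comparison.

```python
# Predict a single camouflage colour as the mean background colour behind
# the vehicle, averaged in the perceptually more uniform L*a*b* space.
import numpy as np
from skimage import color

def mean_background_lab(image_srgb, vehicle_mask):
    """image_srgb: float array in [0, 1], shape (H, W, 3).
    vehicle_mask: boolean (H, W), True where the vehicle sits in the scene.
    Returns the mean L*a*b* colour over the masked region."""
    lab = color.rgb2lab(image_srgb)
    return lab[vehicle_mask].mean(axis=0)      # (L*, a*, b*)

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* colours."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```

Averaging in L*a*b* rather than RGB means the predicted colour minimises an approximately perceptual distance to the background samples, which is the point of working in that space.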
Camouflage in thermal IR: spectral design
Anna Pohl, Jan Fagerström, Hans Kariis, et al.
In this work a spectrally designed coating from SPECTROGON is evaluated. Spectral design here means that the coating has a reflectivity close to one in the 3-5 and 8-12 micron bands where sensors operate, and a much lower reflectivity in other wavelength regions. Three boxes are evaluated: one metallic, one black-body and one with a spectrally designed surface, each with a 15 W radiator inside. It is shown that the box with the spectrally designed surface combines the good characteristics of the other two: the low signature of the metallic box and the reasonable inside temperature of the black-body box. The measurements were verified with calculations using RadThermIR.
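As a brief aside on why high in-band reflectivity lowers the signature (standard radiometry, not a derivation from the paper):

```latex
% Kirchhoff's law for an opaque surface: spectral emissivity is the
% complement of spectral reflectivity,
\[
  \varepsilon(\lambda) = 1 - \rho(\lambda),
\]
% so the self-emitted radiance seen by a sensor operating in the band
% [\lambda_1, \lambda_2] is
\[
  L_{\text{band}}(T) = \int_{\lambda_1}^{\lambda_2}
      \varepsilon(\lambda)\, L_{\mathrm{BB}}(\lambda, T)\,\mathrm{d}\lambda ,
\]
% which is small where \rho(\lambda) \approx 1 (the 3-5 and 8-12 micron
% bands), while the low reflectivity (high emissivity) elsewhere lets the
% box radiate away its 15 W of internal heat and stay reasonably cool.
```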
Multi-/Hyperspectral Signatures
Tasks and tools for battlefield reconnaissance
Sebastian Strecker
Continuous development in the field of electro-optics has a strong influence on military vehicles. Just as it increases one's own visual and thereby operational range, it also increases the danger of detection by enemy forces. This conflict between enhancing sensor performance on one side and minimizing vehicle signature by design on the other is the central issue in battlefield reconnaissance.

An understanding of the interaction between theoretical sensor performance, its limitation by atmospheric effects, and the constructive limits of a vehicle's signature minimization is mandatory for a realistic assessment of sensor systems. This paper describes the tasks and tools for battlefield reconnaissance at the Bundeswehr Technical Center for Weapons and Ammunition (WTD 91) in Meppen (DEU).
High dynamic range hyperspectral imaging for camouflage performance test and evaluation
D. Pearce, J. Feenan
This paper demonstrates the use of high dynamic range processing applied to the specific technique of hyperspectral imaging with linescan spectrometers. The technique provides an improvement in signal-to-noise ratio for reflectance estimation. This is demonstrated on field measurements of rural scenes collected with a ground-based linescan spectrometer. Once fully developed, the specific application is expected to improve colour estimation approaches and consequently the test and evaluation accuracy of camouflage performance tests. Data are presented from both field and laboratory experiments that have been used to evaluate the improvements granted by the adoption of high dynamic range data acquisition in the field of hyperspectral imaging. High dynamic range imaging is well suited to the hyperspectral domain due to the large variation in solar irradiance across the visible and short-wave infrared (SWIR) spectrum, coupled with the wavelength dependence of the nominal silicon detector response. Under field measurement conditions it is generally impractical to provide artificial illumination; consequently, an adaptation of the hyperspectral imaging and reflectance estimation process has been developed to accommodate the solar spectrum. This is shown to improve the signal-to-noise ratio of the reflectance estimation process for scene materials in the 400-500 nm and 700-900 nm regions.
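The sketch below illustrates the general idea of merging bracketed exposures for reflectance estimation. It is a generic HDR merge with a simple hat-function weighting, offered as an assumption-laden illustration; the paper's actual acquisition and weighting scheme may differ, and all names are hypothetical.

```python
# Merge several exposures of the same hyperspectral line so each band uses
# well-exposed samples, then estimate reflectance against a white reference.
import numpy as np

def merge_exposures(frames, exposure_times, lo=0.05, hi=0.95):
    """frames: list of (pixels, bands) arrays of linear counts in [0, 1].
    Returns radiance-proportional values merged across exposures."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for f, t in zip(frames, exposure_times):
        # Hat weight favours mid-range counts; clipped/noisy samples get 0.
        w = np.where((f > lo) & (f < hi), 1.0 - np.abs(2.0 * f - 1.0), 0.0)
        num += w * f / t                 # normalise counts by exposure time
        den += w
    return num / np.maximum(den, 1e-9)

def reflectance(scene, white_panel):
    """Reflectance relative to a calibrated white reference panel measured
    under the same (solar) illumination."""
    return scene / np.maximum(white_panel, 1e-9)
```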
Pixelated camouflage patterns from the perspective of hyperspectral imaging
František Racek, Adam Jobánek, Teodor Baláž, et al.
Pixelated camouflage patterns exploit both the matching and the disrupting principles to blend a target into its background. A pixelated pattern should therefore respect the natural background in the spectral and spatial characteristics embodied in its micro- and macro-patterns. Hyperspectral (HS) imaging plays a similar, though reverse, role in the field of reconnaissance systems: an HS camera records and extracts both the spectral and the spatial information of the imaged scene. This article therefore deals with HS imaging and the subsequent processing of HS images of pixelated camouflage patterns, which are, among other things, characterized by their specific spatial-frequency heterogeneity.
Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction
W. Gross, J. Boehler, K. Twizer, et al.
Hyperspectral remote sensing data can be used in civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for its comparatively low spatial resolution, allowing detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, which affects and limits data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for the evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection between sensors; the same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels of the target materials are extracted and used to simulate mixed pixels: target signatures are linearly combined with selected background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm is used to compare the detection limit of the original data with several band selection and data reduction strategies. The classification results are evaluated by assuming a fixed false-alarm rate and calculating the mean target-to-background ratio of correctly detected pixels. The results allow conclusions to be drawn about specific band combinations for certain target and background combinations. Additionally, generally useful wavelength ranges are determined and the optimal number of principal components is analyzed.
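A sketch of the two core ingredients named above, linear subpixel mixing and the ACE statistic, follows. It is an illustrative reconstruction from the standard definitions, not the paper's code; background mean and covariance are assumed to be estimated from the scene.

```python
# Linear target simulation and the Adaptive Coherence Estimator (ACE).
import numpy as np

def simulate_subpixel(target, background, fraction):
    """Linear mixing model: 'fraction' of the pixel area is target,
    the rest is background. target, background: (B,) spectra."""
    return fraction * target + (1.0 - fraction) * background

def ace_scores(pixels, target, bg_mean, bg_cov):
    """pixels: (N, B) spectra; target: (B,) signature;
    bg_mean, bg_cov: background statistics. Returns (N,) ACE values."""
    cov_inv = np.linalg.inv(bg_cov)
    s = target - bg_mean                     # mean-removed signature
    x = pixels - bg_mean                     # mean-removed test spectra
    num = (x @ cov_inv @ s) ** 2
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', x, cov_inv, x)
    return num / np.maximum(den, 1e-12)
```

Sweeping `fraction` downwards until the ACE score of the mixed pixel drops below the threshold fixed by the chosen false-alarm rate yields the detection limit for that target/background pair.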
Multiwaveband simulation-based signature analysis of camouflaged human dismounts in cluttered environments with TAIThermIR and MuSES
Corey D. Packard, Mark D. Klein, Timothy S. Viola, et al.
The ability to predict electro-optical (EO) signatures of diverse targets against cluttered backgrounds is paramount for signature evaluation and/or management. Knowledge of target and background signatures is essential for a variety of defense-related applications. While there is no substitute for measured target and background signatures to determine contrast and detection probability, the capability to simulate any mission scenario with desired environmental conditions is a tremendous asset for defense agencies. In this paper, a systematic process for the thermal and visible-through-infrared simulation of camouflaged human dismounts in cluttered outdoor environments is presented. This process, utilizing the thermal and EO/IR radiance simulation tool TAIThermIR (and MuSES), provides a repeatable and accurate approach for analyzing contrast, signature and detectability of humans in multiple wavebands. The engineering workflow required to combine natural weather boundary conditions and the human thermoregulatory module developed by ThermoAnalytics is summarized. The procedure includes human geometry creation, human segmental physiology description and transient physical temperature prediction using environmental boundary conditions and active thermoregulation. Radiance renderings, which use Sandford-Robertson BRDF optical surface property descriptions and are coupled with MODTRAN for the calculation of atmospheric effects, are demonstrated. Sensor effects such as optical blurring and photon noise can be optionally included, increasing the accuracy of detection probability outputs that accompany each rendering. This virtual evaluation procedure has been extensively validated and provides a flexible evaluation process that minimizes the difficulties inherent in human-subject field testing. Defense applications such as detection probability assessment, camouflage pattern evaluation, conspicuity tests and automatic target recognition are discussed.
Image Interpretation I
Multiscale image fusion through guided filtering
We introduce a multiscale image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale details while restoring larger scale edges. The proposed multiscale image fusion scheme achieves optimal spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multiscale fusion process. First, size-selective iterative guided filtering is applied to decompose the source images into base and detail layers at multiple levels of resolution. Then, frequency-tuned filtering is used to compute saliency maps at successive levels of resolution. Next, at each resolution level a binary weighting map is obtained as the pixelwise maximum of corresponding source saliency maps. Guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. The final fused image is obtained as the weighted recombination of the individual detail layers and the mean of the lowest resolution base layers. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral night-vision images. The method has a simple implementation and is computationally efficient.
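A single-level sketch of this scheme is shown below: guided-filter decomposition into base and detail, saliency-based binary weights, guided smoothing of the weights with the source as guidance, and recombination. It is a simplified illustration of the idea, not the authors' multiscale code; it requires opencv-contrib (cv2.ximgproc), and the Laplacian-magnitude saliency is a stand-in for the frequency-tuned saliency of the paper.

```python
import cv2
import numpy as np

def fuse_two(a, b, radius=8, eps=1e-3):
    """a, b: float32 grayscale images in [0, 1], e.g. intensified visual
    and thermal IR. Returns a single fused image."""
    gf = cv2.ximgproc.guidedFilter
    base_a = gf(a, a, radius, eps)           # edge-preserving base layers
    base_b = gf(b, b, radius, eps)
    det_a, det_b = a - base_a, b - base_b    # detail layers

    # Saliency stand-in: local contrast magnitude.
    sal_a = np.abs(cv2.Laplacian(a, cv2.CV_32F))
    sal_b = np.abs(cv2.Laplacian(b, cv2.CV_32F))
    w = (sal_a >= sal_b).astype(np.float32)  # pixelwise binary weight map

    # Guided filtering of the weight map with the source as guidance
    # restores spatial consistency and removes isolated weight pixels.
    w = gf(a, w, radius, eps)
    fused_detail = w * det_a + (1.0 - w) * det_b
    return 0.5 * (base_a + base_b) + fused_detail
```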
Asynchronous threat awareness by observer trials using crowd simulation
The last few years have shown that there is a high risk of asynchronous threats in everyday life. Especially in large crowds, a high probability of asynchronous attacks is evident, and strong observational abilities to detect threats are desirable. Consequently, highly trained security and observation personnel are needed. This paper evaluates the effectiveness of a training methodology to enhance the performance of observation personnel engaging in a specific target identification task. For this purpose a crowd simulation video is utilized. The study first measures the base performance before the training sessions; a training procedure is then performed, and base performance is compared to post-training performance in order to look for a training effect. A thorough evaluation of both the training sessions and the overall performance is given, using a specific hypothesis-based metric. Results are discussed in order to provide guidelines for the design of training for observational tasks.
Image Interpretation II
Computationally efficient target classification in multispectral image data with Deep Neural Networks
Lukas Cavigelli, Dominic Bernath, Michele Magno, et al.
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected.

Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and convolutional networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort.

To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x lower computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for rarely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
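The simplest way to fuse such data, stacking the registered 25-channel VIS-NIR cube with the RGB frame into one input tensor of a fully convolutional network, can be sketched as follows. The architecture below is hypothetical and chosen for brevity; it is not the network evaluated in the paper.

```python
import torch
import torch.nn as nn

class MultispectralSegNet(nn.Module):
    """Tiny fully convolutional net labeling every pixel into 8 classes."""
    def __init__(self, in_channels=3 + 25, num_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),        # per-pixel class scores
        )

    def forward(self, x):                          # x: (N, 28, H, W)
        return self.net(x)

rgb = torch.rand(1, 3, 128, 128)
vis_nir = torch.rand(1, 25, 128, 128)              # registered to the RGB frame
logits = MultispectralSegNet()(torch.cat([rgb, vis_nir], dim=1))
labels = logits.argmax(dim=1)                      # (1, 128, 128) label map
```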
Multi-agent system for line detection on images
Boris A. Alpatov, Pavel V. Babayan, Nikita Yu. Shubin
Lines are among the most informative structural elements in images. For this reason, object detection and recognition problems are often reduced to a line-detection task. One of the most popular approaches to detecting lines is based on the Hough transform or the Radon transform.

However, both transforms estimate only the parameters of infinite lines; additional techniques are necessary to estimate the endpoints of the detected lines. Moreover, the Radon transform cannot detect non-straight curved shapes at all. This work addresses the line detection problem using the Radon transform and a multi-agent approach. Results of experiments on real full-HD image sequences confirm the effectiveness of the proposed approach, and directions for further improvement are proposed.
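The endpoint problem mentioned above can be seen directly in OpenCV's two Hough variants; the snippet below contrasts them as a baseline, and is not the authors' multi-agent method. The input filename is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(img, 50, 150)

# Classical Hough: each detection is (rho, theta), an infinite line with
# no endpoint information.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=150)

# Probabilistic Hough: each detection is (x1, y1, x2, y2), i.e. a finite
# segment whose endpoints are estimated directly.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)
```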
Signature and Scene Modelling
Modelling and simulation of heat pipes with TAIThermIR (Conference Presentation)
For thermal camouflage, one usually has to reduce the surface temperature of an object. Vehicles and installations with a combustion engine produce a lot of heat, resulting in highly conspicuous hot spots on the surface. Using heat pipes to transfer this heat more efficiently to other places on the surface might be a way to reduce those hot spots and the overall conspicuity. In a first approach, a model was developed for the software TAIThermIR to test which parameters of the heat pipes are relevant and what effects can be achieved. It is shown that the thermal resistivity of the contact zones is quite relevant and that the thermal coupling to the engine (the heat source) determines whether the alteration of the thermal signature is large. Furthermore, the impact of heat pipes in relation to the surface material is discussed. The influence of different weather scenarios on the signature change due to heat pipes is of minor relevance and depends on the choice of surface material. Finally, application issues for real systems are discussed.
Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis
This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of their games; however, they utilise shortcuts to ensure that games run smoothly in real time to create an immersive effect. Whilst these shortcuts may affect the realism of the synthetic imagery, they promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspects of military operations that are currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important step towards utilising virtual worlds for visible signature evaluation, and towards evaluating how equivalent synthetic imagery is to real photographs.
Atmospheric visibility estimation and image contrast calibration
Patrik Hermansson, Klas Edstam
A method, referred to as contrast calibration, has been developed for transforming digital color photos of outdoor scenes from the atmospheric conditions, illumination and visibility prevailing at the time of capture to a corresponding image for other atmospheric conditions. A photo captured on a hazy day can, for instance, be converted to resemble a photo of the same scene under good visibility conditions. Converting digital color images to specified lighting and transmission conditions is useful for image-based assessment of signature suppression solutions. The method uses "calibration objects" which are photographed at about the same time as the scene of interest. The calibration objects, which (indirectly) provide information on visibility and lighting conditions, consist of two flat boards, painted in different grayscale colors, and a commercial, neutral gray reference card. The atmospheric extinction coefficient and sky intensity can be determined, in three wavelength bands, from the image pixel values of the calibration objects, and using this information the image can be converted to other atmospheric conditions; the image is transformed in both contrast and color. For illustration, contrast calibration is applied to sample images of a scene acquired at different times. It is shown that contrast calibration of the images to the same reference values of extinction coefficient and sky intensity results in images that are more alike than the originals. It is also exemplified how images can be transformed to various other atmospheric conditions. Limitations of the method are discussed and possibilities for further development are suggested.
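The following per-channel sketch shows how two calibration boards can yield the extinction coefficient and sky intensity under Koschmieder's model, an object at distance d with inherent radiance L0 is observed as p = L0*exp(-beta*d) + L_sky*(1 - exp(-beta*d)). The estimation scheme and variable names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def estimate_atmosphere(p1, p2, L1, L2, d):
    """p1, p2: measured pixel values of the two boards at distance d;
    L1, L2: their inherent (zero-distance) values, obtained via the
    neutral gray reference card. Returns (beta, L_sky)."""
    t = (p1 - p2) / (L1 - L2)            # transmission exp(-beta * d)
    beta = -np.log(t) / d                # extinction coefficient
    L_sky = (p1 - t * L1) / (1.0 - t)    # path-radiance (sky) intensity
    return beta, L_sky

def recalibrate(pixels, d, beta, L_sky, beta_new, L_sky_new):
    """Transform scene pixels at distance d to new atmospheric conditions:
    invert the old haze model, then re-apply the new one."""
    t, t_new = np.exp(-beta * d), np.exp(-beta_new * d)
    inherent = (pixels - L_sky * (1.0 - t)) / t
    return inherent * t_new + L_sky_new * (1.0 - t_new)
```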
Development of an atmospheric infrared radiation model with high clouds for target detection
Christophe Bellisario, Claire Malherbe, Caroline Schweitzer, et al.
In the field of target detection, simulating the background of the camera's field of view (FOV) is a significant issue, as the presence of heterogeneous clouds can strongly affect a target detection algorithm. To address this issue, we present the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce 3D atmospheric infrared radiance in the presence of clouds.

The input to CERAMIC starts with an observer at a spatial position with a defined FOV (specified by a zenith angle and an azimuth angle). We use a 3D cloud generator provided by the French LaMP for a statistical, simplified-physics approach; the generator is driven by atmospheric profiles including a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach, and we present statistics on the spatial and temporal evolution of the clouds. Molecular optical properties are provided by the MATISSE model (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement).

The 3D radiance is computed with the LUCI model (LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. To keep computation time low, most radiance contributions are calculated with analytical expressions. Multiple scattering is more difficult to model; here, a discrete ordinate method with correlated-k precision is used to compute the average radiance, and a 3D fluctuation model (based on a behavioural model) accounts for microphysical variations. Finally, the following quantities are calculated: transmission, thermal radiance, single-scattering radiance, radiance observed through the cloud, and multiple-scattering radiance.

Spatial images are produced with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each radiance contribution separated. We present first results for typical scenarios. A 1D comparison with the MATISSE model, separating each calculated radiance component, is made in order to validate the outputs. The 3D performance of the code is shown by comparing LUCI to SHDOM, a reference code using the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes show strong agreement, and the sources of the small differences are discussed. An important gain in computation time is observed for LUCI versus SHDOM. We conclude with various scenarios for case analysis.
Poster Session
Thermal transmission of camouflage nets revisited
Johan Jersblad, Pieter Jacobs
In this article we derive, from first principles, the correct formula for the thermal transmission of a camouflage net, based on the setup described in the US standard for lightweight camouflage nets. Furthermore, we compare the results and implications with those of an incorrect formula that has been seen in several recent tenders. It is shown that the incorrect formulation not only gives rise to large errors, but its result also depends on the surrounding room temperature, which in the correct derivation cancels out. The theoretical results are compared with laboratory measurements and agree for the correct derivation. Finally, we discuss the consequences for soldiers on the battlefield if incorrect standards and test methods are used in procurement processes.
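As a hedged illustration of how an ambient term can cancel (our own sketch of the general mechanism; the paper's actual derivation follows the US standard setup, which is not reproduced here):

```latex
% With the net between a blackbody source at temperature T_s and the sensor,
% the measured in-band radiance is approximately
%   L_meas(T_s) = tau * L_bb(T_s) + (1 - tau) * L_room,
% lumping net emission and reflected ambient flux into the room term.
% Measuring at two source temperatures and taking a ratio of differences,
\[
  \tau \;=\; \frac{L_{\mathrm{meas}}(T_{s,1}) - L_{\mathrm{meas}}(T_{s,2})}
                  {L_{\mathrm{bb}}(T_{s,1}) - L_{\mathrm{bb}}(T_{s,2})},
\]
% the room-temperature term drops out. A single-measurement formula retains
% the (1 - tau) * L_room term and therefore inherits a spurious dependence
% on the ambient temperature, which is the kind of error discussed above.
```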
A novel approach to simulate chest wall micro-motion for bio-radar life detection purpose
Qiang An, Zhao Li, Fulai Liang, et al.
Volunteers are often recruited to serve as detection targets in research on bio-radar life detection technology, making experimental results highly susceptible to the physical status of different individuals (shape, posture, etc.). In order to objectively evaluate radar system performance and life detection algorithms, a standard detection target is urgently needed. This paper first proposes a system with quantitatively controllable parameters to simulate the chest wall micro-motion caused mainly by breathing and heartbeat. It then analyzes the material and size selection of the scattering body mounted on the simulation system from the perspective of backscattered energy, employing computational electromagnetic methods to determine the exact scattering body. Finally, on-site experiments were carried out to verify the reliability of the simulation platform using an IR-UWB bio-radar. Experimental results show that the proposed system can simulate a real human target in three respects: respiration frequency, amplitude and body-surface scattering energy. It can thus be used as a substitute for a human target in radar-based non-contact life detection research in various scenarios.
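A minimal sketch of the kind of micro-motion such a platform must reproduce is a superposition of respiration and heartbeat displacements. The amplitudes and frequencies below are typical adult values assumed for illustration; they are not the paper's parameters.

```python
import numpy as np

def chest_displacement(t, f_resp=0.3, a_resp=4e-3, f_heart=1.2, a_heart=3e-4):
    """Chest-wall displacement in metres at times t (seconds):
    ~4 mm breathing motion at 0.3 Hz plus ~0.3 mm heartbeat at 1.2 Hz."""
    return (a_resp * np.sin(2 * np.pi * f_resp * t)
            + a_heart * np.sin(2 * np.pi * f_heart * t))

t = np.arange(0.0, 30.0, 0.01)     # 30 s at 100 Hz sampling
d = chest_displacement(t)          # drive signal for a motion actuator
```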
Front Matter: Volume 9997
Front Matter: Volume 9997
This PDF file contains the front matter associated with SPIE Proceedings Volume 9997, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.