Proceedings Volume 8897

Electro-Optical Remote Sensing, Photonic Technologies, and Applications VII; and Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 31 October 2013
Contents: 9 Sessions, 30 Papers, 0 Presentations
Conference: SPIE Security + Defence 2013
Volume Number: 8897

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8897
  • Electro-Optical Systems and Applications
  • Active Systems I
  • Passive Systems and Processing I
  • Passive Systems and Processing II
  • Active Systems II
  • Active Systems and New Technologies
  • Poster Session
  • Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing
Front Matter: Volume 8897
This PDF file contains the front matter associated with SPIE Proceedings Volume 8897, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Electro-Optical Systems and Applications
Future electro-optical sensors and processing in urban operations
Christina Grönwall, Piet B. Schwering, Jouni Rantakokko, et al.
In the electro-optical sensors and processing in urban operations (ESUO) study we pave the way towards a common understanding, within the European Defence Agency (EDA) group of electro-optics experts (IAP03), of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way one can match processing functionality to the required power and the available communication data rates to obtain the desired reaction times. In the study, three priority scenarios were defined: camp protection, patrol and house search. For these scenarios, present-day and future sensors and signal processing technologies were studied. A method for analyzing information quality in single- and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify the scenarios or be applied to other ones. Present-day data processing is organized mainly locally. Only very limited exchange of information with other platforms takes place, and mainly at a high information level. The main issues that arose from the analysis of present-day systems and methodology are the slow reaction times due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field-of-view sensors may, however, reduce the delay times. The main effort in the study was forecasting the signal processing of EO sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle-based sensors. This can be accompanied by cloud processing on board several vehicles. Additionally, to perform sensor fusion on sensor data originating from different platforms and to make full use of UAV imagery, a combination of distributed and centralized processing is essential. Sensor fusion of heterogeneous sensors plays a central role in future processing. The changes these new technologies will bring to future urban operations are improved quality of information, shorter reaction times and lower operator load.
Experiments and models of active and thermal imaging under bad weather conditions
Erwan Bernard, Nicolas Riviere, Mathieu Renaudat, et al.
Thermal imaging cameras are widely used in military contexts for their night vision capabilities and their observation range; they are based on passive infrared sensors (e.g. in the MWIR or LWIR range). Under bad weather conditions, or when the target is partially hidden (e.g. by foliage or military camouflage), they are increasingly complemented by active imaging systems, a key technology for target identification at long range. The 2D flash imaging technique is based on a high-power pulsed laser source that illuminates the entire scene and a fast gated camera as the imaging system. Both technologies are well characterized under clear meteorological conditions; models including atmospheric effects such as turbulence can predict their performance accurately. However, under bad weather conditions such as rain, haze or snow, these models are no longer valid. This paper introduces new models to predict the performance of both active and infrared imaging systems under bad weather conditions. We point out the effects of bad weather on controlled physical parameters (extinction, transmission, spatial resolution, thermal background, speckle, turbulence). Then we develop physical models to describe their intrinsic characteristics and their impact on imaging system performance. Finally, we approximate these models to obtain a "first order" model that is easy to deploy for industrial applications. This theoretical work will be validated on real active and infrared data.
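To make the notion of a "first order" model concrete, the following is a minimal sketch of the Beer-Lambert range scaling such models build on. The code and coefficient values are illustrative assumptions, not taken from the paper.

```python
# Minimal first-order range-performance sketch (illustrative, not the
# authors' model): Beer-Lambert extinction for passive (one-way) and
# active (two-way) imaging channels.
import numpy as np

def transmission(R, sigma):
    """One-way atmospheric transmission over range R [m] for an
    extinction coefficient sigma [1/m]."""
    return np.exp(-sigma * R)

def active_return(R, sigma, S0=1.0):
    """Active flash imaging: the pulse travels out and back, so the
    return scales with the two-way transmission and 1/R^2 spreading."""
    return S0 * transmission(R, sigma) ** 2 / R ** 2

# Example: sigma ~ 1e-3 1/m is a plausible order of magnitude for rain
# or light fog (assumed value, for illustration only).
print(transmission(1000.0, 1e-3))   # ~0.37 one-way transmission at 1 km
```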
Surveillance in long-distance turbulence-degraded videos
Surveillance in long-distance turbulence-degraded video is a difficult challenge because of the effects of atmospheric turbulence, which causes blur and random shifts in the image. As imaging distances increase, the degradation effects become more significant. This paper presents a method for surveillance in long-distance turbulence-degraded videos, based on new criteria for discriminating true from false object detections. We employ an adaptive thresholding procedure for background subtraction and implement new criteria for distinguishing true from false moving objects that take into account the temporal consistency of both shape and motion properties. Results show successful detection and tracking of moving objects on challenging video sequences that are significantly distorted by atmospheric turbulence. However, as the imaging distance increases, more false alarms may occur. The method presented here is relatively efficient and has low complexity.
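As a rough sketch of the kind of pipeline described here (my illustration under simplifying assumptions, not the authors' implementation; a simple temporal-persistence check stands in for the full shape-and-motion consistency criteria):

```python
# Illustrative sketch of adaptive-threshold background subtraction with a
# temporal-consistency check; all parameter values are hypothetical.
import numpy as np

class TurbulenceDetector:
    def __init__(self, shape, alpha=0.02, k=3.0, min_hits=5):
        self.mean = np.zeros(shape)       # running background mean
        self.var = np.ones(shape)         # running background variance
        self.alpha, self.k = alpha, k
        self.hits = np.zeros(shape, int)  # consecutive detections per pixel
        self.min_hits = min_hits          # frames required for a true object

    def update(self, frame):
        frame = frame.astype(float)
        diff = frame - self.mean
        # A pixel is foreground if it deviates k sigma from the background.
        fg = np.abs(diff) > self.k * np.sqrt(self.var)
        # Temporal consistency: turbulence-induced outliers rarely persist,
        # so require min_hits consecutive foreground frames.
        self.hits = np.where(fg, self.hits + 1, 0)
        detections = self.hits >= self.min_hits
        # Update background statistics only where no object is present.
        self.mean = np.where(fg, self.mean, self.mean + self.alpha * diff)
        self.var = np.where(fg, self.var,
                            (1 - self.alpha) * self.var + self.alpha * diff**2)
        return detections
```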
Measurements and analysis of active/passive multispectral imaging
This paper describes a data collection on passive and active imaging and its preliminary analysis. It is part of ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths, but we also include the visible and thermal regions. Active imaging in the NIR-SWIR will support passive imaging by eliminating shadows during daytime and will allow night operation. Among the most likely applications for active multispectral imaging, we focus on long-range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km, assuming that another cueing sensor – passive EO and/or radar – is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low-visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized at these wavelengths. We also investigate how these effects can be used for target identification and image fusion. The field tests performed and the results of the preliminary data analysis are reported.
Active Systems I
Non-line-of-sight active imaging of scattered photons
Laser Gated Viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied to vision through fog, smoke and other degraded environmental conditions, as well as to vision through sea water in submarine operation. Direct imaging of non-scattered (ballistic) photons is limited in range and performance by the free optical path length, i.e. the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations and makes non-line-of-sight imaging possible. The spatial and temporal distribution of scattered photons can be analyzed by means of computational optics, and the information they carry about the scenario can be restored. In the case of Lambertian scattering sources, the scattered photons carry information about the complete environment. Especially the information outside the line of sight, or outside the visibility range, is of high interest. Here, we discuss approaches for non-line-of-sight active imaging with different indirect and direct illumination concepts (point, surface and volume scattering sources).
Lidar/DIAL detection of bomb factories
Luca Fiorani, Adriana Puiu, Olga Rosa, et al.
One of the aims of the project BONAS (BOmb factory detection by Networks of Advanced Sensors) is to develop a lidar/DIAL (differential absorption lidar) to detect precursors employed in the manufacturing of improvised explosive devices (IEDs). At first, a spectroscopic study was carried out: the infrared (IR) gas-phase spectrum of acetone, one of the most important IED precursors, was procured from available databases and checked with cell measurements. Then, the feasibility of a lidar/DIAL for the detection of acetone vapors was shown in the laboratory, simulating the experimental conditions of a field campaign. Eventually, having in mind measurements in a real scenario, an interferent study was performed, looking for all known compounds that share with acetone IR absorption in the spectral band selected for its detection. Possible interfering species were investigated by simulating both urban and industrial atmospheres, and limits of acetone detection in both environments were identified. This study confirmed that a lidar/DIAL can detect low concentrations of acetone at considerable distances.
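For readers unfamiliar with DIAL, the textbook retrieval step is compact enough to show; this sketch is an illustration of the general principle, not the BONAS processing chain.

```python
# Textbook DIAL retrieval sketch: number density from range-resolved
# on-line / off-line backscatter returns.
import numpy as np

def dial_number_density(P_on, P_off, dR, dsigma):
    """P_on/P_off: backscatter power profiles at the on/off-line
    wavelengths; dR: range-bin size [m]; dsigma: differential absorption
    cross section [m^2]. Returns number density [1/m^3] per range bin."""
    ratio = np.log(P_off / P_on)
    # N(R) = (1 / (2 * dsigma)) * d/dR ln(P_off(R) / P_on(R))
    return np.gradient(ratio, dR) / (2.0 * dsigma)
```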
Range accuracy of a gated-viewing system as a function of the gate shift step size
Primarily, a Gated-Viewing (GV) system provides range-gated imagery. By increasing the camera delay time from frame to frame, a so-called sliding-gates sequence is obtained, from which 3-D reconstruction is possible. An important parameter of a sliding-gates sequence is the step size by which the gate is shifted. In order to reduce the total number of required images, this step size should be as large as possible without significantly degrading the range accuracy. In this paper we study the influence of the gate shift step size on the resulting range accuracy. To this end, we combined the Intevac Gated-Viewing detector M506 with a pulsed 1.57 μm illumination laser. The maximal laser pulse energy is 65 mJ. The target is a one-square-meter plate at a distance of 500 m. The plate is laminated with a Spectralon layer having Lambertian reflection behavior with a homogeneous reflectance of 93%. For the measurements, this plate was oriented diagonally to the line of sight of the sensor in order to provide a depth scenario. We considered different combinations of the two parameters »gate length« (13.5 m, 23.25 m, 33 m) and »signal-to-noise ratio« (SNR) (2 dB, 3 dB, 5 dB, 6 dB, 7 dB, 8 dB). For each considered set of parameters, a sliding-gates sequence of the target was recorded. Per range, 20 frames were collected. The gate shift step size was set to the minimal possible value, 75 cm. By skipping certain ranges, a sliding-gates sequence with a larger gate shift step size is obtained. For example, skipping the ranges 2, 3, 5, 6, 8, 9, … (equivalently, taking the ranges 1, 4, 7, …) results in a gate shift step size of 2.25 m. Finally, the range accuracies were derived as a function of the gate shift step size. Additionally, the influence of frame averaging on these functions was studied.
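The range-skipping scheme is straightforward to express in code; the following sketch (illustrative, with assumed array shapes) emulates a coarser gate shift step from the finely stepped recording:

```python
# Sketch of the range-skipping scheme described above: emulate a coarser
# gate shift step from a sliding-gates stack recorded at the minimal step.
import numpy as np

def subsample_sliding_gates(stack, base_step=0.75, step=2.25):
    """stack: (n_ranges, n_frames, H, W) images recorded with the minimal
    0.75 m gate shift; keep every (step/base_step)-th range, e.g. ranges
    1, 4, 7, ... for a 2.25 m step."""
    skip = int(round(step / base_step))
    sub = stack[::skip]
    # Frame averaging per range (20 frames were recorded per range).
    return sub.mean(axis=1)
```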
Investigation of synthetic aperture ladar for land surveillance applications
Simon Turbide, Linda Marchese, Marc Terroux, et al.
Long-range land surveillance is a critical need in numerous military and civilian security applications, such as threat detection, terrain mapping and disaster prevention. A key technology for land surveillance, synthetic aperture radar (SAR), continues to provide high-resolution radar images in all weather conditions from remote distances. Recently, interferometric SAR (InSAR) and differential interferometric SAR (D-InSAR) have become powerful tools, adding high-resolution elevation and change-detection measurements. State-of-the-art SAR systems based on dual-use satellites are capable of providing ground resolutions of one meter, while their airborne counterparts obtain resolutions of 10 cm. D-InSAR products based on these systems can provide image products with cm-scale vertical resolution. Certain land surveillance applications, such as land subsidence monitoring, landslide hazard prediction and tactical target tracking, could benefit from improved resolution. The ultimate limitation to the achievable resolution of any imaging system is its wavelength. State-of-the-art SAR systems are approaching this limit. The natural way to improve resolution is thus to decrease the wavelength, i.e. to design a synthetic aperture system in a different wavelength regime. One such system, offering the potential for vastly improved resolution, is Synthetic Aperture Ladar (SAL). This system operates at infrared wavelengths, ten thousand times smaller than radar wavelengths. This paper discusses an initial investigation into a concept for an airborne SAL specifically aimed at land surveillance. The system would operate at 1.55 μm and would integrate an optronic processor on board to allow immediate transmission of the high-resolution images to the end user on the ground. Estimates of the size and weight, as well as the resolution and processing time, are given.
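As a rough illustration of this wavelength scaling (example numbers, not taken from the paper): using the strip-map relation δ_az = λR/(2L), a synthetic aperture of L = 1 m observed from R = 10 km yields δ_az ≈ 7.8 mm at λ = 1.55 μm, versus δ_az ≈ 155 m at a 3.1 cm radar wavelength, reflecting the four-orders-of-magnitude wavelength ratio quoted above.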
Passive Systems and Processing I
Image processing in aerial surveillance and reconnaissance: from pixels to understanding
Judith Dijk, Adam W. M. van Eekeren, Olga Rajadell Rojas, et al.
Surveillance and reconnaissance tasks are currently often performed using an airborne platform such as a UAV. The airborne platform can carry different sensors; EO/IR cameras can be used to view a certain area from above. To support the task of the sensor analyst, different image processing techniques can be applied to the data, both in real time and in forensic applications. These algorithms aim to improve the acquired data so that objects or events can be detected and interpreted. There is a wide range of techniques that tackle these challenges, and we group them in classes according to the goal they pursue (image enhancement, modeling the world, object information, situation assessment). An overview of these different techniques and of different concepts of operations for them is presented in this paper.
Segmentation and wake removal of seafaring vessels in optical satellite images
Henri Bouma, Rob J. Dekker, Robin M. Schoemaker, et al.
This paper aims at the segmentation of seafaring vessels in optical satellite images, which allows an accurate length estimation. In maritime situation awareness, vessel length is an important parameter to classify a vessel. The proposed segmentation system consists of robust foreground-background separation, wake detection and ship-wake separation, simultaneous position and profile clustering and a special module for small vessel segmentation. We compared our system with a baseline implementation on 53 vessels that were observed with GeoEye-1. The results show that the relative L1 error in the length estimation is reduced from 3.9 to 0.5, which is an improvement of 87%. We learned that the wake removal is an important element for the accurate segmentation and length estimation of ships.
Geometric calibration of thermal cameras
Philip Engström, Håkan Larsson, Joakim Rydell
There exist several tools and methods for camera resectioning, i.e. geometric calibration for the purpose of estimating intrinsic and extrinsic parameters. The intrinsic parameters represent the internal properties of the camera, such as focal length, principal point and distortion coefficients. The extrinsic parameters relate the camera's position to the world, i.e. how the camera is positioned and oriented in the world. With both sets of parameters known, it is possible to relate a pixel in one camera to the world or to another camera. This is important in many applications, for example in stereo vision. The existing methods work well for standard visual cameras in most situations. Intrinsic parameters are usually estimated by imaging a well-defined pattern from different angles and distances. Checkerboard patterns are very often used for calibration, since a checkerboard is a well-defined pattern with easily detectable features. The intersections between the black and white squares form high-contrast points whose positions can be estimated with sub-pixel accuracy. Knowing the precise dimensions and structure of the pattern enables calculation of the intrinsic parameters. Extrinsic calibration can be performed in a similar manner if the exact position and orientation of the pattern are known. A common method is to distribute markers in the scene and to measure their exact locations. The key to good calibration is well-defined points and accurate measurements. Thermal cameras are a subset of infrared cameras that work at long wavelengths, usually between 9 and 14 microns. At these wavelengths all objects above absolute zero temperature emit radiation, making thermal cameras ideal for passive imaging in complete darkness and widely used in military applications. The issue that arises when trying to perform a geometric calibration of a thermal camera is that the checkerboard emits more or less the same amount of radiation in the black squares as in the white ones. In other words, the calibration board that is optimal for calibration of visual cameras might be completely useless for thermal cameras. A calibration board for thermal cameras should ideally be a checkerboard with high contrast at thermal wavelengths. (It is of course possible to use other sorts of objects or patterns, but since most tools and software expect a checkerboard pattern this is by far the most straightforward solution.) Depending on the application it should also be more or less portable and work both in indoor and outdoor scenarios. In this paper we present several years of experience with calibration of thermal cameras in various scenarios. Checkerboards with high contrast for both indoor and outdoor scenarios are presented, as well as different markers suitable for extrinsic calibration.
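The pipeline the paper builds on is the standard checkerboard resectioning flow; a minimal OpenCV sketch follows. The file names, pattern size and square size are assumptions for illustration; only the input images (a thermally contrasting board, contrast-stretched to 8 bit) differ from the visual-camera case.

```python
# Standard OpenCV resectioning sketch; not the authors' tooling.
import glob
import cv2
import numpy as np

pattern = (9, 6)                 # inner corners of the checkerboard (assumed)
square = 0.04                    # square size in metres (example value)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

image_files = sorted(glob.glob("thermal_*.png"))   # hypothetical file naming
obj_pts, img_pts = [], []
for fname in image_files:        # thermal images, contrast-stretched to 8 bit
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics (camera matrix, distortion) and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```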
Multispectral and hyperspectral advanced characterization of soldier's camouflage equipment
The requirements for soldier camouflage in the context of modern warfare are becoming more complex and challenging given the emergence of novel infrared sensors. There is a pressing need for the development of adapted fabrics and soldier camouflage devices that provide efficient camouflage in both the visible and infrared spectral ranges. The Military University of Technology has conducted an intensive project to develop new materials and fabrics to further improve the camouflage efficiency of soldiers. The developed materials feature visible and infrared properties that make them unique and adapted to the needs of various military contexts. This paper presents the details of an advanced measurement campaign on those unique materials, in which the correlation between multispectral and hyperspectral infrared measurements is performed.
Passive Systems and Processing II
Image structural analysis in the tasks of automatic navigation of unmanned vehicles and inspection of Earth surface
Vadim Lutsiv, Igor Malyshev
The automatic analysis of terrain images has remained a pressing task for several decades. On the one hand, such analysis is a basis for the automatic navigation of unmanned vehicles. On the other hand, the amount of information transferred to the Earth by modern video sensors keeps increasing, so a preliminary classification of such data by the onboard computer becomes essential. We developed an object-independent approach to the structural analysis of images. While creating the methods of structural image description, we did our best to abstract away from the particular peculiarities of scenes. Only the most general limitations were taken into account, derived from the laws of organization of the observable environment and from the properties of image formation systems. The practical application of this theoretical approach enables reliable matching of aerospace photographs acquired from differing aspect angles, at different times of day and in different seasons, by sensors of differing types. The aerospace photographs can even be matched with geographic maps. The developed approach enabled solving the tasks of automatic navigation of unmanned vehicles. Signs of changes and catastrophes can be detected by matching and comparing aerospace photographs acquired at different times. We present the theoretical justification of the chosen strategy for structural description and matching of images. Several examples of matching acquired images with template pictures and terrain maps are shown within the framework of navigation of unmanned vehicles and detection of signs of disasters.
A signal-processing system of digital pixel binning based on bi-cubic filtering algorithm
Bin Bao, Ning Lei, Nina Peng, et al.
With the development of semiconductor technology, the manufacturing process of CCD sensors has improved continuously, and in recent years new technologies such as analog pixel binning have appeared in CCD sensors. In this paper, a new signal-processing system for digital pixel binning based on a bi-cubic filtering algorithm is designed. The system overcomes the loss of image detail caused by analog pixel binning, which simply sums pixel signals. The digital pixel binning signal-processing system preserves high-frequency information through the bi-cubic filtering algorithm, which improves the contrast and MTF of the remote sensing images. The system achieves pixel binning with low computational complexity and offers good real-time capability for large-scale remote sensing images.
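A minimal sketch of the contrast between the two binning styles follows; this is an illustration assuming bicubic resampling as the filtering step, and the paper's actual filter design may differ.

```python
# Conceptual comparison (not the paper's hardware pipeline): plain 2x2
# summation binning versus bicubic-filtered digital binning.
import numpy as np
from scipy.ndimage import zoom

def summation_binning(img):
    """Simple 2x2 summation, as in analog binning: discards detail."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    v = img[:h, :w]
    return v[0::2, 0::2] + v[1::2, 0::2] + v[0::2, 1::2] + v[1::2, 1::2]

def bicubic_binning(img):
    """Digital binning through bicubic resampling (order=3), which keeps
    more high-frequency content, improving contrast and MTF."""
    return zoom(img.astype(float), 0.5, order=3)
```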
Optic flow aided navigation and 3D scene reconstruction
An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
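A one-dimensional toy version of the fusion idea is sketched below. This is an illustration of the principle, not the paper's filter; in practice the optic-flow velocity is only known up to the unknown scene depth, which is precisely where the 3D structure question enters. Here the flow measurement is assumed to be already scaled to a velocity.

```python
# 1D toy: Kalman filter integrating accelerometer data, with velocity
# pseudo-measurements from optic flow checking the drift. Noise values
# are illustrative assumptions.
import numpy as np

dt = 0.02
F = np.array([[1, dt, -0.5 * dt**2],     # state: [position, velocity, bias]
              [0, 1, -dt],
              [0, 0, 1]])
B = np.array([0.5 * dt**2, dt, 0.0])     # input matrix for measured accel
H = np.array([[0.0, 1.0, 0.0]])          # optic flow observes velocity
Q = np.diag([1e-6, 1e-4, 1e-8])          # process noise
R = np.array([[0.05]])                   # flow measurement noise

x = np.zeros(3)
P = np.eye(3)

def predict(a_meas):
    """Propagate with the accelerometer; the bias state is subtracted via F,
    so an uncorrected bias shows up as quadratic position drift."""
    global x, P
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q

def flow_update(v_flow):
    """Velocity update from optic flow: checks the quadratic drift."""
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (v_flow - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
```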
Robust motion filtering as an enabler to video stabilization for a tele-operated mobile robot
Romain Chereau, Toby P. Breckon
An increasing number of inspection and hazardous environment tasks use mobile robotic vehicles manually tele-operated via a live video feed from an on-board camera. The resulting video imagery frequently suffers from vibration artefacts compromising the accuracy and security of operation in addition to the viable duration for human tele-operation. Here we aim to automatically remove these unwanted visual effects using a novel real-time video stabilization approach. Prior work for hand-held and vehicle mounted cameras is ill-suited to the high-frequency, large magnitude (10-15% of image size) vibration encountered on the short wheelbase, non-suspended robotic platforms typically deployed for such tasks. Without prior knowledge of the robot ego-motion (or vibration characteristics) we develop a novel four stage filtering approach to identify robust Local Motion Vectors (LMV) for Global Motion Vector (GMV) estimation in successive video frames whilst preserving the required real-time responsiveness for tele-operation. Experimental results over a range of tele-operation scenarios show that the method provides both significant qualitative visual improvement and a quantitative reduction in measurable video image displacement (caused by vibration).
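A compact sketch of the robust LMV-to-GMV step follows; it illustrates the general idea, while the paper's four-stage filter is more elaborate.

```python
# Robust global-motion estimate from block-matching local motion vectors.
import numpy as np

def global_motion(lmvs, k=2.5):
    """lmvs: (N, 2) array of local motion vectors from block matching.
    Reject outliers by their distance to the median, then average."""
    med = np.median(lmvs, axis=0)
    dist = np.linalg.norm(lmvs - med, axis=1)
    mad = np.median(dist) + 1e-9          # robust scale estimate
    inliers = lmvs[dist < k * mad]
    return inliers.mean(axis=0) if len(inliers) else med

# The GMV sequence is then low-pass filtered over time so that intentional
# platform motion is followed while high-frequency vibration is removed.
```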
An automatic geo-spatial object recognition algorithm for high resolution satellite images
Mustafa Ergul, A. Aydın Alatan
This paper proposes a novel automatic geo-spatial object recognition algorithm for high resolution satellite images. The proposed algorithm consists of two main steps: a hypothesis generation step with a local feature-based algorithm and a verification step with a shape-based approach. In the hypothesis generation step, a set of hypotheses for possible object locations is generated using a Bag of Visual Words type approach, aiming at few missed detections at the cost of more false positives. In the verification step, the foreground objects are first extracted by a semi-supervised image segmentation algorithm utilizing the detection results from the previous step, and then the shape descriptors of the segmented objects are used to prune out the false positives. Based on simulation results, it can be argued that the proposed algorithm achieves both high precision and high recall rates by taking advantage of both the local feature-based and the shape-based object detection approaches. The superiority of the proposed method stems from its ability to minimize the false alarm rate, since most object shapes carry characteristic and discriminative information about their identity and functionality.
Active Systems II
Dust-penetrating (DUSPEN) see-through lidar for helicopter situational awareness in DVE
James T. Murray, Jason Seely, Jeff Plath, et al.
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté’s DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Image change detection using a SWIR active imaging system
Armin L. Schneider, David Monnin, Martin Laurenzis, et al.
We are currently developing a system consisting of a GPS receiver, a three-axis magnetic compass and a digital video camera in order to visualize changes occurring along a regularly used itinerary. This is done by comparing current images with images of the same scene acquired during a previous measurement. The luminosity of images from two different passages, however, can be quite different (due to different meteorological conditions). Whereas the global luminosity can be adjusted using non-linear luminosity correction, the treatment of shadows is more difficult. Since meteorological conditions cannot be controlled, we are investigating the possibility of using a Laser Gated Viewing system in the SWIR domain to illuminate the scene. Using appropriate filters for the camera, we are completely independent of natural illumination, and in addition the system can also be used at night.
Questions about using of atmospheric attenuation calculating the nominal ocular hazard distance
The Nominal Ocular Hazard Distance (NOHD) is the distance within which the laser irradiance exceeds the Maximum Permissible Exposure (MPE), i.e. within which there is a risk of eye injury or damage. The common way of calculating the NOHD ignores atmospheric attenuation as a factor lowering laser safety ranges in civil society. The NOHD for a typical designator laser (e.g. an Nd:YAG laser) with small divergence can be several tens of kilometers. One way of handling such risk distances, which may be too long for ordinary firing ranges or embargoed areas, is a probabilistic calculation of the hazard that includes atmospheric attenuation. Over such long laser beam paths the atmospheric attenuation is significant. The reduction of the risk distance can be substantial even for moderate extinction coefficients if atmospheric attenuation is included in the calculation of the Ocular Hazard Distance (OHD). The NOHD is compared to the OHD in an attempt to quantify the reduction of the distance as a function of visibility or extinction coefficient. A simple simulation shows that the OHD may be reduced by 60%-70% compared to the NOHD at a visibility of 50 km. The contribution also discusses the use of the Lambert W function compared to other methods of accounting for atmospheric attenuation in laser safety range calculations.
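The Lambert W route mentioned here can be sketched directly. Assuming the common far-field model E(R) = 4P exp(-μR) / (π (φR)^2) with a negligible exit aperture, setting E(R) = MPE gives R = (2/μ) W(μK/2) with K = sqrt(4P/(π φ^2 MPE)). All numeric values below are hypothetical examples, not the paper's:

```python
# NOHD with and without atmospheric attenuation via the Lambert W function.
import numpy as np
from scipy.special import lambertw

P   = 0.1        # average laser power [W] (hypothetical example value)
phi = 0.5e-3     # full-angle beam divergence [rad]
MPE = 1e-3       # maximum permissible exposure [W/m^2] (illustrative)
mu  = 0.08e-3    # extinction coefficient [1/m] (~50 km visibility)

# Classical NOHD: 4P / (pi (phi R)^2) = MPE, no attenuation term.
nohd = np.sqrt(4 * P / (np.pi * MPE)) / phi

# OHD including Beer-Lambert attenuation: R^2 exp(mu R) = 4P/(pi phi^2 MPE).
# With K = sqrt(4P/(pi phi^2 MPE)), R exp(mu R / 2) = K, hence
# R = (2/mu) * W(mu K / 2), using the principal branch of Lambert W.
K   = np.sqrt(4 * P / (np.pi * phi**2 * MPE))
ohd = (2.0 / mu) * lambertw(mu * K / 2.0).real

print(f"NOHD (no attenuation): {nohd/1e3:.1f} km, OHD: {ohd/1e3:.1f} km")
```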
Lidar imaging with on-the-fly adaptable spatial resolution
We present our work on the design and construction of a novel type of lidar device capable of measuring 3D range images with a spatial resolution that can be reconfigured on the fly, adjustable by software over the image area, and that can reach 2 Mpixel. A double-patented novel scanning concept makes it possible to change the image resolution dynamically, depending on external information provided by the image captured in a previous cycle or by other sensors such as greyscale or hyperspectral 2D imagers. A prototype imaging lidar system that can modify its spatial resolution on demand from one image to the next, according to the nature and state of the target, has been developed, and indoor and outdoor sample images showing its performance are presented. Applications in object detection, tracking and identification through a scanning system that adapts in real time to each situation and target behaviour are currently being pursued in different areas.
Active Systems and New Technologies
Investigation of late time response analysis for detection of multiple concealed objects
Simon Hutchinson, Michael Fernando, David Andrews, et al.
This paper investigates the use of Late Time Response (LTR) analysis for detecting multiple objects in concealed object detection. When a conductive object is illuminated by an ultra-wide band (UWB) radar signal, the surface currents induced upon the object give rise to LTR signals. LTR results from a number of different targets are presented. The distance between the targets within the same radar beam was adjusted in increments of 5 cm to determine the point at which the individual objects can be distinguished from each other. The experiment was performed using double-ridged horn antennas in a pseudo-monostatic arrangement. A vector network analyser (VNA) is used to provide the UWB stepped-frequency continuous-wave radar signal. The distance between the transmitting antenna and the target object was kept at 50 cm for all the experiments performed, and the power level at the VNA was set to 2 dBm. The targets in the experimental setup are suspended in isolation in a non-anechoic environment. MATLAB was used in post-processing to de-convolve the signal and remove background clutter. The Fast Fourier Transform (FFT) and Continuous Wavelet Transform (CWT) are used to process the return signals and extract the LTR features from the noise clutter. A Generalized Pencil-of-Function (GPOF) method was then used to extract the complex poles of the signal. In the case of a single needle, these poles can be found around 1.9 GHz.
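The GPOF pole-extraction step can be illustrated with a small matrix-pencil implementation; this is a generic sketch (with an assumed sampling rate, model order and synthetic test signal), not the authors' code:

```python
# Minimal matrix-pencil (GPOF-style) sketch for extracting complex poles
# from a sampled late-time response y[n].
import numpy as np

def matrix_pencil_poles(y, M, fs):
    """Estimate M complex natural frequencies s_k from samples y (rate fs)."""
    N = len(y)
    L = N // 3                      # pencil parameter, M <= L <= N - M
    # Hankel data matrices shifted by one sample.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Rank-M truncation via SVD to suppress noise.
    U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
    U, s, Vh = U[:, :M], s[:M], Vh[:M]
    # Poles z_k are eigenvalues of the reduced pencil; s_k = fs * ln(z_k).
    Z = np.diag(1 / s) @ U.conj().T @ Y2 @ Vh.conj().T
    return np.log(np.linalg.eigvals(Z)) * fs

# Synthetic check: a damped 1.9 GHz resonance, as reported for a single needle.
fs = 20e9
t = np.arange(400) / fs
y = np.exp(-0.3e9 * t) * np.cos(2 * np.pi * 1.9e9 * t)
print(matrix_pencil_poles(y, M=2, fs=fs) / (2 * np.pi * 1e9))  # ~ +/-1.9j GHz
```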
High-power multi-beam diode laser transmitter for a flash imaging lidar
Christer Holmlund, Petteri Aitta, Sini Kivi, et al.
VTT Technical Research Centre of Finland is developing the transmitter for the "Flash Optical Sensor for TErrain Relative NAVigation" (FOSTERNAV) multi-beam flash imaging lidar. FOSTERNAV is a concept demonstrator for new guidance, navigation and control (GNC) technologies to fulfil the requirements for landing and docking of spacecraft as well as for navigation of rovers. This paper presents the design, realisation and testing of the multi-beam continuous-wave (CW) laser transmitter to be used in a 256x256-pixel flash imaging lidar. Depending on the target distance, the lidar has three operation modes, using either several beams with low divergence or one single beam with a large divergence. This paper describes the transmitter part of the flash imaging lidar with a focus on the electronics and especially the laser diode drivers. The transmitter contains eight fibre-coupled commercial diode laser modules with a total peak optical power of 32 W at 808 nm. The main requirement for the laser diode drivers was linear modulation up to a frequency of 20 MHz, allowing, for example, low-distortion chirps or pseudo-random binary sequences. The laser modules contain the laser diode, a monitoring photodiode, a thermo-electric cooler and a thermistor. The modules, designed for non-modulated and low-frequency operation, set challenging demands on the design of the drivers. Measurement results are presented for the frequency response and for eye diagrams of pseudo-random binary sequences.
Digital colour management system for colour parameters reconstruction
Karol Grudzinski, Piotr Lasmanowicz, Lucas M. N. Assis, et al.
A Digital Colour Management System (DCMS) and its application to a new adaptive camouflage system are presented in this paper. The DCMS is a digital colour rendering method that transforms a real image into a set of colour pixels displayed on a computer monitor. Consequently, it can analyse the colours of the pixels that comprise images of environments such as desert, semi-desert, jungle, farmland or rocky mountains, in order to prepare the adaptive camouflage pattern best suited for the terrain. This system is described in the present work, as well as the use of the subtractive colour mixing method to construct a real-time colour-changing electrochromic window/pixel (ECD) for camouflage purposes. An ECD with the glass/ITO/Prussian Blue(PB)/electrolyte/CeO2-TiO2/ITO/glass configuration was assembled and characterized. The ECD switched between green and yellow upon application of ±1.5 V, and the colours were controlled by the Digital Colour Management System and described by CIE LAB parameters.
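The CIE LAB description mentioned here follows the standard sRGB to XYZ to L*a*b* conversion (D65 white point); the sketch below is an illustration of that conversion, not the DCMS implementation:

```python
# Standard sRGB -> CIE L*a*b* conversion (D65 reference white).
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],      # linear sRGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])    # D65 white point

def srgb_to_lab(rgb):
    """rgb: values in [0, 1]. Returns (L*, a*, b*)."""
    rgb = np.asarray(rgb, float)
    # Undo the sRGB gamma to get linear light.
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    xyz = M @ lin / WHITE
    d = 6.0 / 29.0
    f = np.where(xyz > d**3, np.cbrt(xyz), xyz / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    return np.array([L, 500.0 * (f[0] - f[1]), 200.0 * (f[1] - f[2])])
```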
Poster Session
Polarization state imaging in long-wave infrared for object detection
Grzegorz Bieszczad, Sławomir Gogler, Michał Krupiński
The article discusses the use of modern imaging polarimetry, from the visible range of the spectrum to the far infrared, and analyzes the potential of far-infrared imaging polarimetry for remote sensing applications. A measurement stand for examining the polarization state in the LWIR is described, consisting of an infrared detector array with electronic circuitry, a polarizer plate and software implementing the detection method. The article also describes the first measurement results from the presented test bed. Based on these measurements it was possible to calculate some of the Stokes parameters of the radiation from the scene. The analysis of the measurement results shows that measuring the polarization state can be used to detect certain types of objects. Measuring the degree of polarization may allow the detection of objects in an infrared image that are not detectable by other techniques or in other spectral ranges. In order to at least partially characterize the polarization state of the scene, the radiation intensity must be measured in different configurations of the polarizing filter. Due to the additional filtering elements in the optical path of the camera, the NETD of the camera with the polarizer in the proposed measurement stand was about 240 mK. In order to visualize the polarization characteristics of objects in the infrared image, a method of superimposing the measurement results on the thermal image is proposed: the polarization measurements are rendered as hue and saturation added to the black-and-white thermal image, whose brightness corresponds to the intensity of infrared radiation.
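The Stokes calculation from polarizer rotations is standard; here is a minimal sketch (an illustration, assuming intensity images taken at the four canonical polarizer angles):

```python
# Linear Stokes parameters from four LWIR intensity images taken with the
# polarizer at 0, 45, 90 and 135 degrees.
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    """Return S0, S1, S2 plus degree and angle of linear polarization."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)    # total intensity
    S1 = I0 - I90                          # horizontal vs vertical
    S2 = I45 - I135                        # +45 vs -45 degrees
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
    aolp = 0.5 * np.arctan2(S2, S1)        # angle of linear polarization
    return S0, S1, S2, dolp, aolp
```

The degree of linear polarization can then drive the saturation channel and the angle of polarization the hue, matching the colour-overlay visualization described above.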
Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing
Snapshot imaging Mueller matrix instrument
A novel way to measure the Mueller matrix image enables a sample's diattenuation, retardance, and depolarization to be measured within a single camera integration period. Since the Mueller matrix components are modulated onto coincident carrier frequencies, the described technique provides unique solutions to image registration problems for moving objects. In this paper, a snapshot imaging Mueller matrix polarimeter is theoretically described, and preliminary results show it to be a viable approach for use in surface characterization of moving objects.
Efficient implementations of hyperspectral chemical-detection algorithms
Cory J. C. Brett, Robert S. DiPietro, Dimitris G. Manolakis, et al.
Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
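For reference, the core computation being optimized is presumably the classical hyperspectral matched filter; this sketch (a generic formulation, not the paper's C++/GPU code) shows why the cost grows with the signature count and where it can be amortized:

```python
# Classical hyperspectral matched filter, one score per pixel per signature.
import numpy as np

def matched_filter_scores(X, s, mu, cov):
    """X: (pixels, bands) radiances; s: (bands,) chemical signature;
    mu/cov: background mean and covariance estimated from the scene."""
    w = np.linalg.solve(cov, s)           # cov^-1 s, no explicit inverse
    norm = s @ w                          # s^T cov^-1 s
    return (X - mu) @ w / norm

# For many signatures, stack them as columns S (bands, n_sigs) and compute
# W = np.linalg.solve(cov, S) once per scene; the per-pixel cost then grows
# only linearly with the signature count.
```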
Combined airborne sensors in urban environment
Alwin Dimmeler, Hendrik Schilling, Michal Shimoni, et al.
Military operations in urban areas have become more relevant in the past decades. Detailed situation awareness in these complex environments is crucial for successful operations. Within the EDA (European Defence Agency) project "Detection in Urban scenario using Combined Airborne imaging Sensors" (DUCAS), an extensive data set of hyperspectral and high-spatial-resolution data as well as three-dimensional (3D) laser data was generated in a common field trial in the city of Zeebrugge, Belgium, in 2011. In the frame of DUCAS, methods were developed at two levels of processing. At the first level, single-sensor data were used for land cover mapping and the detection of targets of interest (i.e. personnel, vehicles and objects). At the second level, data fusion was applied at the pixel level as well as the information level to investigate the benefits of combining sensor systems in an operational context. Providing data for mission planning and mapping is an important task for aerial reconnaissance, and it includes the creation or updating of high-quality 2D and 3D maps. In DUCAS, semi-automatic methods and a wide range of sensor data (hyperspectral, LIDAR, high-resolution orthophotos and video data) were used for the creation of highly detailed land cover maps as well as urban terrain models. Combining the diverse information gained by different sensors increases the information content and the quality of the extracted information. In this paper we present advanced methods for the creation of 2D/3D maps, show results and demonstrate the benefit of fusing multi-sensor data.
Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system
Hendrik Schilling, Andreas Lenz, Wolfgang Gross, et al.
Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks such as classification and detecting objects of military relevance, such as camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real-time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixel across track, leading to a ground sampling distance below 1 m at typical altitudes. The push broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.