Developments in electro-optical (EO) and infrared (IR) systems are key to providing the enhanced capability that military forces need to meet the current and emerging challenges created by an increasingly difficult and complex range of operational conditions. Such enhanced operational capability must often be delivered against commercial demands for lower costs and shorter timescales, together with operational requirements for size, weight, and power (SWaP) reductions. This conference will address current and emergent sensor technology and system developments that will deliver the required future capability of EO/IR systems. It will consider a wide range of applications across the maritime, land, and air domains, together with a diverse range of platforms such as dismounted soldier sensors, UAVs and drones, robotic platforms, and multi-sensor systems. The performance challenges faced by future military systems will continue to evolve and grow. To address these challenges, EO/IR system designers will need to draw upon ongoing developments in underpinning technologies such as new materials, focal plane arrays, image processing, data fusion, and emergent sensor concepts such as spectral processing, computational imaging, and polarimetry. Modelling and simulation is increasingly becoming an enabler for maximizing performance and optimizing operational adaptability, and its interaction with trials and validation is a subject of topical concern.

EO and IR systems are likely to benefit from recent advances in materials research, for example new carbon-based materials (including graphene), nano-materials, and metamaterials. These new materials promise EO properties that could significantly change the way EO and IR systems are designed and built, e.g. new detector systems with enhanced properties, or negative-refractive-index materials that could radically change the way optics are designed.

Computational Imaging, e.g. Pupil Plane Encoding, Coded Aperture Imaging, and Compressive Imaging, is another family of emerging technologies that will radically alter the way sensor systems are designed. These techniques combine optics and processing to provide a usable output from the sensor and can provide functionality that is not possible or practical with conventional system designs. Computational Imaging will require developments in specialist sub-components, non-standard optics design, and algorithm development to reconstruct the image.
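As a toy illustration of the compressive-imaging idea mentioned above, the sketch below recovers a sparse scene from fewer random measurements than unknowns using iterative soft-thresholding (ISTA). All dimensions, the measurement matrix, and the regularization weight are illustrative choices, not drawn from any specific system discussed at the conference.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 4                 # signal length, measurements, sparsity

# ground truth: a k-sparse "scene"
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random coded-aperture model
y = A @ x_true                                  # compressed measurements

# ISTA: minimize ||Ax - y||^2 + lam * ||x||_1 by gradient step + shrinkage
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L             # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The point of the sketch is that the measurement count m is half the number of unknowns n, yet sparsity makes the reconstruction well-posed; real computational-imaging systems replace the random matrix with the physical optics of the coded aperture.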

Quantum techniques are also being investigated to assess their potential for sensing systems; Quantum Imaging and Ghost Imaging are examples being investigated by different teams. Any quantum system will require specialist components, e.g. sources, optics, detectors, electronics, and processing, as well as providing scope for unconventional system design.

Processing of sensor information has become a vital component of EO/IR sensor systems for display-driven, semi-autonomous, and autonomous applications. The timely extraction and presentation of pertinent information in a usable format is the ultimate goal of most developments, although the design flexibility to support hardware upgrades and meet emergent operational needs must also be considered. Dual- and multi-sensor system designs provide additional information and offer increased performance under a wider variety of conditions. The combination of such sensor information to provide both increased performance and robustness continues to present many design challenges despite the ongoing research into data fusion technology.

Advanced technology by itself is not sufficient to give new and/or advanced capabilities. Systems have to be designed and developed in a way that enables their reliable and cost-effective manufacture. This will involve adopting rigorous development and systems engineering techniques, which are as crucial for the successful exploitation of sensor technology as the detector, optics, and electronics. The performance and required characteristics of sensor systems are critically dependent on the platform and the application. Many sensor payloads are now being fitted to autonomous vehicles and drones, which present new challenges in design and integration. Application areas currently receiving interest include target detection and tracking, area monitoring, mine and IED detection, environmental monitoring, and border security. There is also growing interest in wearable imaging devices, which have their own unique challenges at the level of sensor design, the exploitation of the sensor data, and the interconnection of multiple sensors.

The innovation required to meet these future challenges will be drawn from a broad spectrum of organisations, ranging from government laboratories through international companies to SMEs and research centres. This conference will provide a technology and applications forum for EO/IR research and development teams, academia, and business and government stakeholders. Contributions are also sought from a diverse range of disciplines covering areas such as sensor components and supporting technology, EO/IR systems engineering, optical materials and design, sensor manufacture and test, materials science, image processing algorithm design and associated software methodologies, and modelling and simulation. Presentations are encouraged on dual-use applications, and on active and passive systems covering wavebands from the UV to the LWIR.

Papers are solicited in the following specific areas:
  • advanced materials for EO/IR, e.g. metamaterials, nano-materials, carbon based materials and their application
  • focal plane array detector technologies, covering wavebands UV to LWIR including multi-band FPAs
  • detector packaging, temperature stabilization and integration technologies
  • passive imaging: technology, modelling, system design and hardware
  • active imaging: technology, modelling, system design and hardware
  • novel sensor technologies and their applications
  • integrated and miniaturized sensors – reduced SWaP+C for applications such as robotic and remote control vehicles and the dismounted soldier
  • computational imaging: techniques, components, designs and algorithms
  • optical domain processing methods
  • broadband, multiband and hyperspectral sensors
  • polarisation sensitive sensors
  • imaging through the atmosphere
  • signal and image processing
  • autonomous processing including detection, tracking and classification
  • data fusion technology including image fusion and sensor fusion concepts
  • modelling and analysis of EO/IR systems and sub-systems
  • test, verification, and validation techniques
  • compressive sensing in imaging systems
  • quantum sensing components and system designs: theory and implementation
  • defence and security applications of EO and IR sensor technology
  • sensor payloads for autonomous vehicles and drones
  • design and applications of wearable sensor systems
  • dual-use of military EO/IR sensor technology for environmental imaging and analysis (including ocean monitoring)
  • border and area security including air to ground detection and tracking for applications such as drug trafficking
  • system integration design and development issues
  • sensor demonstrators and prototypes
  • sensor trials and performance evaluation
  • system engineering approaches.
    Conference 11866

    Electro-optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV

    • Remote Sensing Plenary Presentation I: Monday
    • Security+Defence Plenary Presentation
    • Remote Sensing Plenary Presentation II: Wednesday
    • Panel Discussion and Keynote Lecture: Laser Weapons and Lasers Used as Weapons Against Personnel
    • Welcome and Introduction
    • Sensors and Technology I
    • Sensors and Technology II
    • Image and Data Processing
    • Modelling and Simulation
    • Systems and Applications I
    • Systems and Applications II
    • Systems and Applications III
    • Laser Sensing
    • Poster Session
    Remote Sensing Plenary Presentation I: Monday
    Livestream: 13 September 2021 • 16:30 - 17:30 CEST
    11858-500
    Author(s): Pierluigi Silvestrin, European Space Research and Technology Ctr. (Netherlands)
    On demand | Presented Live 13 September 2021
    In recent years the Earth observation (EO) programmes of the European Space Agency (ESA) have been dramatically extended. They now include activities that cover the entire spectrum of the wide EO domain, encompassing both upstream and downstream developments, i.e. related to flight elements (e.g. sensors, satellites, supporting technologies) and to ground elements (e.g. operations, data exploitation, scientific applications and services for institutions, businesses and citizens). In the field of EO research missions, ESA continues the successful series of Earth Explorer (EE) missions. The last additions to this series include missions under definition, namely Harmony (the tenth EE) and four candidates for the 11th EE: CAIRT (Changing Atmosphere InfraRed Tomography Explorer), Nitrosat (reactive nitrogen at the landscape scale), SEASTAR (ocean submesoscale dynamics and atmosphere-ocean processes), WIVERN (Wind Velocity Radar Nephoscope). On the smaller programmatic scale of the Scout missions, ESA is also developing two new missions: ESP-MACCS (Earth System Processes Monitored in the Atmosphere by a Constellation of CubeSats) and HydroGNSS (hydrological climate variables from GNSS reflectometry). Another cubesat-scale mission of technological flavor is also being developed, Φ-sat-2. Furthermore, in collaboration with NASA, ESA is defining a Mass change and Geosciences International Constellation (MAGIC) for monitoring gravity variations on a spatio-temporal scale that enables applications at regional level, continuing - with vast enhancements - the successful series of gravity mapping missions flown in the last two decades. The key features of all these missions will be outlined, with emphasis on those relying on optical payloads. ESA is also developing a panoply of new missions for other European institutions, namely Eumetsat and the European Union, which will be briefly reviewed too. These operational-type missions rely on established EO techniques. 
Nonetheless, some new technologies are applied to expand functional and performance envelopes. A brief summary of their main features will be provided, with emphasis on the new Sentinel missions for the EU Copernicus programme.
    Security+Defence Plenary Presentation
    Livestream: 14 September 2021 • 09:00 - 10:00 CEST
    11868-500
    Author(s): Patrick R. Body, Tecnobit (Spain)
    On demand | Presented Live 14 September 2021
    Optronic systems for the defence market are available from the UV to the LWIR wavebands, but the ideal band depends very much on the particular application and its environment. This lecture will cover some of the more important features of each type of optronic sensor and, using examples from the experience gained over many years of system development by Tecnobit for the airborne, naval, and land sectors, will suggest some broad recommendations.
    Remote Sensing Plenary Presentation II: Wednesday
    Livestream: 15 September 2021 • 09:00 - 10:00 CEST
    11858-600
    Author(s): Adriano Camps, Institut d'Estudis Espacials de Catalunya (Spain)
    On demand | Presented Live 15 September 2021
    Today, space is experiencing a revolution: from large space agencies, multimillion-dollar budgets, and big satellite missions to spin-off companies, moderate budgets, and fleets of small satellites. Some have called this the “democratization” of space, in the sense that it is now more accessible than it was just a few years ago. To a large extent, this revolution has been fostered on one side by the standardization of the platforms’ mechanical interfaces, and on the other side by the technology developments coming from mobile communications. Standard platform mechanical interfaces have led to standard orbital deployers and new launching capabilities. The technology developed for cell phones has brought more computing resources with less power consumption and volume. Small satellites are used as pure technology demonstrators and for targeted scientific missions, mostly Earth observation and some astronomy, and they are starting to enter the field of communications as huge satellite constellations become feasible. In this lecture, the most widely used nano/microsat form factors and their main applications will be presented. Then, the main scientific Earth observation and astronomy missions suitable for SmallSats will be discussed, also in the context of the rising constellations of SmallSats for communication. Finally, the nanosat programme at the Universitat Politècnica de Catalunya (UPC) will be introduced, and the results of the FSSCAT mission will be presented.
    Panel Discussion and Keynote Lecture: Laser Weapons and Lasers Used as Weapons Against Personnel
    Livestream: 16 September 2021 • 14:30 - 16:00 CEST
    Welcome and Introduction
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Keynote Lecture:
    Smoke as protection against high energy laser effects
    Ric Schleijpen, TNO (Netherlands)

    Panel Discussion
    Moderator:
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Panelists:
    Ric Schleijpen, TNO (Netherlands)
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Since their inception, lasers have become an omnipresent source on the battlefield, used in applications ranging from rangefinding and designation to remote sensing, countermeasures, and weaponry. Hence, even a simple laser can be used to great effect as an anti-personnel weapon, capable of anything from simple visual disruption to complex target destruction. Within this omnipresent capacity, how do we deal with the presence of lasers on the battlefield, and more specifically with lasers used as weapons? And what might be the practical, technical, logistical, political, and ethical issues associated with them? Please join us for this exciting and potentially contentious discussion.
    11867-1
    Author(s): Ric H. M. A. Schleijpen, Sven Binsbergen, Amir L. Vosteen, Karin de Groot-Trouw, Denise Meuken, Alexander VanEijk, TNO (Netherlands)
    On demand | Presented Live 16 September 2021
    This paper discusses the use of smoke obscurants as countermeasures against high energy lasers (HEL). The potential success of the smoke does not depend only on the performance of the smoke itself: the transmission loss in the smoke is part of a chain of system components, including warning sensors, smoke launchers, etc. The core of the paper deals with experimental work on the following research questions: Does smoke attenuate an incoming HEL beam? Does the HEL affect the smoke itself? The experimental set-up with the TNO 30 kW HEL and the scale model for the smoke transmission path will be shown. Selected experimental results will be shown and discussed. Finally, we will compare the results to theoretical calculations and analyse the properties of an ideal HEL attenuation smoke.
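The first-order physics behind smoke attenuation of a laser beam is Beer-Lambert extinction. The sketch below is only that first-order model with illustrative parameter names; it deliberately ignores the HEL-smoke interaction effects that the paper investigates experimentally.

```python
import math

def transmission(ext_coeff_m2_per_g: float,
                 concentration_g_per_m3: float,
                 path_m: float) -> float:
    """Beer-Lambert transmission through a homogeneous obscurant cloud.

    ext_coeff_m2_per_g: mass extinction coefficient of the smoke
    concentration_g_per_m3: mass concentration in the cloud
    path_m: path length through the cloud
    """
    return math.exp(-ext_coeff_m2_per_g * concentration_g_per_m3 * path_m)

def delivered_irradiance(laser_power_w: float,
                         spot_area_m2: float,
                         tau: float) -> float:
    """Irradiance on target after attenuation by transmission tau."""
    return laser_power_w * tau / spot_area_m2

tau = transmission(1.0, 0.5, 2.0)   # illustrative values, not from the paper
```

The product in the exponent (extinction coefficient x concentration x path length) is the optical depth, which is why even a thin cloud of an efficient obscurant can reduce the irradiance delivered on target by orders of magnitude.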
    Welcome and Introduction
    11866-800
    Author(s): Duncan L. Hickman, Tektonex Ltd. (United Kingdom); Helge Bürsing, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Sensors and Technology I
    11866-1
    Author(s): Arnaud Crastes, Teledyne Imaging (France)
    On demand
    Using a state-of-the-art fully digital ROIC topped with an uncooled bolometer, an infrared imaging platform has been developed to meet the expectations of increasingly demanding applications. Cameras based on uncooled IR bolometers are becoming more popular and have to be designed to properly meet the requirements of diverse markets, from machine vision applications to very lightweight, low-power drones. Each application has its own unique requirements, which has led to specific interface developments (GigE with Precision Time Protocol / IEEE 1588, USB, MIPI CSI/DSI, and more) as well as developments in embedded image processing. In this paper, we will give an overview of the modular design process that leads to an easy-to-use modular interface. Throughout this development, real IR performance has to be taken into account, for example the capability to achieve an NETD of 50 mK with a scene dynamic range higher than 1000°C without any adjustment settings. Image quality, with and without a shutter, is also addressed, paving the way to affordable, powerful thermal imaging modules and cameras. Keywords: uncooled microbolometer, LWIR, camera module, digital interface
    11866-2
    Author(s): Maxime Bouschet, Univ. de Montpellier (France); Vignesh Arounassalame, ONERA (France); Rodolphe Alchaar, Clara Bataillon, Jean-Philippe Perez, Univ. de Montpellier (France); Nicolas Péré-Laperne, LYNRED (France); Isabelle Ribet-Mohamed, ONERA (France); Philippe Christol, Univ. de Montpellier (France)
    On demand
    The InAs/InAsSb (Ga-free) type-II strained layer superlattice (T2SL) barrier structure (so-called XBn) has emerged as an alternative technology for high-performance midwave infrared (MWIR) imaging systems with potentially high operating temperature. The XBn device architecture relies on the presence of a specific barrier layer (BL) that allows transport of minority holes while blocking majority electrons, and above all controls surface leakage currents. Indeed, the BL blocks the flow of electrons coming from surface states due to air-exposed semiconductor surfaces, and there is no need for surface passivation treatments when the mesa device is fabricated by etching down to the BL (shallow etching). The objective of this communication is to study the influence of different etching depths on the electrical and electro-optical properties of a non-passivated T2SL XBn pixel detector having a cut-off wavelength of 5 µm at 150 K. A dark current density as low as 8×10⁻⁶ A/cm² at operating bias is recorded, but the study shows the strong influence of lateral diffusion length on the shallow-etched pixel properties and therefore the need for a deep etching process to fabricate large-format MWIR focal plane arrays (FPAs).
    11866-3
    Author(s): Marvin Michel, Sebastian Blaeser, Elahe Zakizade, Sascha Weyers, Dirk Weiler, Fraunhofer-Institut für Mikroelektronische Schaltungen und Systeme IMS (Germany)
    On demand
    Beyond today's challenges in contactless measurement of body temperature, the market for uncooled thermal imagers has grown continuously in recent years. The size of the camera core is a parameter that needs to follow the miniaturization of the whole camera body. The state-of-the-art pixel size for microbolometers in uncooled thermal imagers is 10 µm. Pushing the microbolometer size to the optical limit, Fraunhofer IMS provides a manufacturing process for FIR imagers (uncooled thermal imagers) based on a scalable microbolometer technology. Taking this scalable technology as a basis, we introduce a fully implemented uncooled thermal imager with 6 µm pixel size. The 6 µm microbolometers are made with Fraunhofer IMS's manufacturing technology for thermal MEMS isolation realized by vertical nanotubes. The performance of the 6 µm microbolometers is estimated with a 17 µm digital readout integrated circuit in QVGA resolution. Responsivity, the number of electrically defective pixels, and NETD are determined by an electro-optical characterization based on a test setup with a black body at two different temperatures. The NETD of the 6 µm microbolometers is estimated to be 611 mK. Supporting the quantitative measurements, FIR test images will be presented to demonstrate the microbolometers' functionality in a fully implemented uncooled thermal imager. In summary, a fully implemented uncooled thermal imager with QVGA resolution based on a 6 µm nanotube-microbolometer detector is presented. Compared with commercially available uncooled thermal imagers, the highly limited absorption area of our microbolometers, with structure sizes below the target wavelength, causes a correspondingly higher NETD. Nevertheless, 6 µm pixels are still capable of absorbing infrared radiation at wavelengths of approximately 10 µm in an uncooled thermal imager.
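The two-blackbody NETD characterization described above is commonly computed as the per-pixel temporal noise divided by the per-pixel responsivity between the two blackbody temperatures. The sketch below shows that standard calculation on synthetic frame stacks; the function name and all numbers are illustrative, not Fraunhofer IMS's actual procedure or data.

```python
import numpy as np

def netd_map(frames_cold, frames_hot, t_cold_k, t_hot_k):
    """Per-pixel NETD from raw frame stacks against two blackbodies.

    frames_*: arrays of shape (n_frames, H, W), raw detector counts.
    Returns an (H, W) map in kelvin.
    """
    # responsivity: signal change per kelvin of scene temperature
    responsivity = (frames_hot.mean(axis=0) - frames_cold.mean(axis=0)) \
                   / (t_hot_k - t_cold_k)
    noise = frames_cold.std(axis=0, ddof=1)   # temporal noise, counts rms
    return noise / responsivity               # kelvin

# synthetic check: responsivity 10 counts/K, noise 0.5 counts -> NETD ~ 50 mK
rng = np.random.default_rng(1)
cold = 1000.0 + rng.normal(0.0, 0.5, (200, 16, 16))
hot = 1100.0 + rng.normal(0.0, 0.5, (200, 16, 16))
netd = netd_map(cold, hot, 293.15, 303.15)
```

A pixel whose responsivity is near zero would blow up this ratio, which is why defective-pixel counting accompanies the NETD measurement in practice.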
    11866-4
    Author(s): Vignesh Arounassalame, ONERA (France); Maxime Bouschet, Rodolphe Alchaar, Institut d'Électronique et des Systèmes (France), Univ. de Montpellier (France), CNRS (France); A. Ramiandrasoa, Sylvie Bernhardt, ONERA (France); Clara Bataillon, Jean-Philippe Perez, Institut d'Électronique et des Systèmes (France), Univ. de Montpellier (France), CNRS (France); Philippe Christol, Institut d'Électronique et des Systèmes (France), Univ. de Montpellier (France), Ctr. National de la Recherche Scientifique (France); Isabelle Ribet-Mohamed, ONERA (France)
    On demand
    In the past decade, type-II superlattice (T2SL) infrared (IR) photodetectors based on a barrier design (called XBn) have seen significant improvements in their performance. Several years ago, it was observed that the carrier lifetime in Ga-free InAs/InAsSb T2SL was significantly longer than in the Ga-containing InAs/GaSb system. This finding triggered intensive development of IR detectors based on Ga-free T2SL. However, this T2SL system also has its own challenges that need to be investigated, in particular concerning the transport of minority carriers in the XBn device. The XBn design consists of a unipolar conduction-band barrier layer (BL) inserted between n-type absorber and contact layers (AL and CL, respectively). Quantum efficiency measurements are performed on a Ga-free InAs/InAsSb T2SL XBn detector in order to better understand minority carrier transport properties in terms of lifetime and diffusion length. A quantum efficiency as high as 50% is found at 150 K for a 3 µm thick AL. Combined with lifetime measurements using the time-resolved photoluminescence (TRPL) technique, the mobility is extracted by means of a theoretical calculation of the quantum efficiency based on Hovel's equations. This approach helps us to better understand minority-hole transport in the Ga-free T2SL MWIR XBn detector and therefore to improve its performance.
    Sensors and Technology II
    11866-7
    Author(s): Patrik Langehanenberg, Jakob Studer, Kenan Berisa, TRIOPTICS GmbH (Germany)
    On demand
    Increasing demands on the imaging quality of assembled optical systems require the optimization of the lateral and axial alignment of individual lenses. An economical way to improve mechanical alignment is step-by-step centration testing of the topmost lens surface during the assembly process. Regardless of the material properties of the lens, measurement equipment operating in the visible spectral range is suitable for this application. When the assembled lens does not meet the expected imaging performance, an in-depth analysis is needed. Focusing electronic autocollimators in combination with high-precision air bearing spindles are commonly used to analyze the centration of each optical surface and lens element. The well-established TRIOPTICS OptiCentric® family allows the centration of inner surfaces to be determined using its MultiLens technique. The obtained measurement data are processed to provide the shift and tilt of individual lenses or groups of lenses with respect to each other or to a freely selectable datum. For IR lenses, a wavelength that can penetrate the lens material is required. Autocollimators for the MWIR or LWIR are combined with a VIS measurement head for all measurements on lens surfaces that are directly accessible from the outside; a measurement accuracy of 0.1 µm is reached. The development of the new OptiCentric® IR autocollimation head was mainly driven by optimizing the spot size and hence its accuracy. A centration measurement precision of below 0.25 µm for MWIR and LWIR wavelengths was obtained. For measurement of air spacings and centre thicknesses through all IR lens materials, the instrument incorporates a low-coherence interferometer with an accuracy down to 0.15 µm. The contribution describes how an IR lens assembly consisting of several lenses can be fully opto-mechanically characterized in a non-contact and non-destructive fashion. Optimized processes that effectively streamline the workflow are also considered, as are prerequisites such as operator skills.
    11866-8
    Author(s): Fernando de León-Pérez, Ctr. Univ. de la Defensa Zaragoza (Spain)
    On demand
    Based on general properties of Maxwell's equations, we develop simple design rules to modify the dispersion relation of plasmonic resonators fabricated with nanostructured metallic films, to tune their far-field response, and to couple plasmons to phonon polaritons. Applying such rules, a plasmonic trench resonator is designed as an electro-optical biosensor. The resonator is fed by a nanometric slit that can be electrically biased. Light traversing the slit excites surface plasmon polaritons in the resonator that produce high-Q transmission peaks, which are employed for real-time biosensing. By applying an RF electrical bias across the slit, the trench resonator can simultaneously serve as a dielectrophoretic trap able to attract or repel analytes. Trapped analytes are detected in a label-free manner using refractive-index sensing, enabled by interference between surface-plasmon standing waves in the trench and light transmitted through the slit. This active sample concentration mechanism enables detection of nanoparticles and proteins at concentrations as low as 10 pM. The electrically biased split-trench resonator can potentially be applied in optoelectronics and signal processing, as well as to trap quantum emitters, paving the way to studies of strong light–matter interactions, cavity polaritonics, electrical carrier injection, and electroluminescence.
    Image and Data Processing
    11866-9
    Author(s): Alice Fontbonne, Hervé Sauer, François Goudail, Lab. Charles Fabry (France)
    On demand
    A phase mask in the aperture stop of an imaging system can enhance its depth of field (DoF). This DoF extension capacity can be maximized by jointly optimizing the phase mask and the digital processing algorithm used to deblur the acquired image. This method, introduced by Cathey and Dowski with a cubic phase mask, has been generalized to different mask models. Among them, annular binary phase masks are easy to manufacture and can be co-optimized with a single, simple Wiener deconvolution filter. Their performance and robustness have been characterized theoretically and experimentally in the case of monochromatic illumination. We perform here a theoretical and experimental study of co-designed DoF-enhancing binary phase masks in panchromatic imagers. At first glance, this configuration is not optimal for binary phase masks: they are most often manufactured by binary etching of a dielectric plate, so the dephasing depends on the wavelength, and the π radian dephasing is reached for only one wavelength. How do phase masks optimized for a particular wavelength respond to a wide illumination spectrum? Is it possible to take the illumination spectrum into account in the co-optimization of phase masks, and what impact does this have on the result? We analyze the behavior of DoF-enhancing phase masks in panchromatic imagers in terms of Modulation Transfer Function and final image quality. The results are validated with imaging experiments carried out with a commercial lens, a Vis-NIR CMOS sensor, and co-optimized phase masks. We study different phase masks co-optimized for different illumination spectra, and we show that masks specifically optimized for wide-spectrum illumination perform better under this type of illumination than monochromatically optimized phase masks do under monochromatic illumination, especially when the targeted DoF range is large.
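The Wiener deconvolution step that this line of work co-optimizes with the phase mask can be sketched in a few lines. The snippet below is a generic frequency-domain Wiener filter with a scalar noise-to-signal regularizer, verified on a synthetic circular blur; it is not the authors' co-optimized filter, and the box PSF and parameter values are illustrative.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr):
    """Wiener deconvolution in the Fourier domain.

    blurred: degraded image; psf: blur kernel; nsr: scalar
    noise-to-signal power ratio used as the regularizer.
    """
    H = np.fft.fft2(psf, s=blurred.shape)           # zero-padded OTF
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# round-trip check: blur a synthetic scene (circular convolution), then restore
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
psf = np.ones((3, 3)) / 9.0                          # 3x3 box blur
H = np.fft.fft2(psf, s=scene.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_deconvolve(blurred, psf, 1e-9)
```

The nsr term is what keeps the filter stable at frequencies where the optical transfer function is small; co-design exploits the fact that a well-chosen phase mask keeps the OTF away from zero over the whole targeted depth-of-field range.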
    11866-10
    Author(s): Olivier Lévêque, Caroline Kulcsár, François Goudail, Institut d'Optique Graduate School (France)
    On demand
    Co-design consists of optimizing an imaging system by taking into account the scene and image formation model, the imaging system, and the method of information extraction [Stork and Robinson 2008]. For several years, our team has co-designed phase masks to increase the depth of field of optical imaging systems where the end product is a restored image [Diaz et al. 2011, Burcklen et al. 2015, Falcón et al. 2017]. These masks produce a relatively blurred image whose quality is independent of the axial position of the object. It is then possible to reconstruct the object at all depths by applying a single deconvolution process. This co-optimization approach can be formulated by defining the optimization criterion of the mask's phase function as the mean square difference between an ideal sharp image and the deconvolved image delivered by the system [Mirani et al. 2005, Robinson and Stork 2006, Mirani et al. 2008]. In general, it is preferable to optimize the masks using a closed-form criterion, since this considerably accelerates optimization; that is the case if the deconvolution is carried out using a Wiener filter. However, nonlinear deconvolution algorithms are known to have better performance. The question therefore arises as to whether better imaging performance can be obtained by taking a nonlinear deconvolution algorithm, rather than a linear one, into account in the optimization criterion. To answer this question, we compare the image qualities obtained with these two approaches. We show that the masks obtained by optimizing criteria based on linear and nonlinear algorithms are identical, and we propose a conjecture to explain this behavior [Lévêque et al. 2021]. This result is important since it justifies a frequent practice in co-design: optimizing a system with a simple analytical criterion based on a linear deconvolution, then restoring images with a more efficient nonlinear deconvolution algorithm [Portilla and Barbero 2018].
    11866-11
    Author(s): Markus Nordberg, Ludwig Hollmann, Lars Landström, David K. J. Gustafsson, FOI-Swedish Defence Research Agency (Sweden)
    On demand
    Raman spectroscopy is an attractive method for the detection of explosives, even in small quantities. A laser can be combined with a Compressive Coded Aperture Spectral Imaging (CASSI) sensor to collect Raman spectra from a small surface at a large distance. The CASSI system decreases the data collection time but increases the reconstruction time for the Raman spectra. Reconstructing Raman spectra from an ensemble of compressed-sensing measurements using standard methods such as total variation (TV) is rather time consuming and limits the application domain of the technique. Novel machine learning approaches such as deep learning have lately been applied to inverse problems. We propose a deep-learning-based approach for reconstructing Raman spectra from an ensemble of measurements, formulated as a regression problem. The deep learning network is trained by minimising a loss function composed of two components: a reconstruction error and a re-projection error. The network architectures and the number of parameters required to reconstruct the spectra are discussed and motivated using an encoder-decoder approach (similar to an Auto Encoder (AE) network). The proposed method is trained on simulated training data, where the training data have been generated using a transfer function developed to mimic the optical properties of the CASSI system. The deep learning network has been trained on different training sets with different levels of background noise, different numbers of materials in the scene, and different spatial configurations of the materials in the scene. The reconstruction results using the deep learning network have been qualitatively and quantitatively evaluated on simulated data and compared with a TV-based reconstruction algorithm in terms of reconstruction quality and computation time. The reconstruction time is improved by orders of magnitude without altering the quality of the reconstructed Raman spectra.
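    The two-component loss described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `two_term_loss`, the random stand-in transfer matrix, and the relative weight are all assumptions; the actual network, transfer function, and weighting are those of the paper.

```python
import numpy as np

def two_term_loss(spectrum_pred, spectrum_true, measurement, transfer, weight=1.0):
    """Loss with a reconstruction error (predicted vs. reference spectrum)
    and a re-projection error (predicted spectrum pushed back through the
    CASSI-like forward model vs. the actual compressed measurement)."""
    reconstruction = np.mean((spectrum_pred - spectrum_true) ** 2)
    reprojection = np.mean((transfer @ spectrum_pred - measurement) ** 2)
    return reconstruction + weight * reprojection

# Toy check: a perfect prediction gives zero loss.
rng = np.random.default_rng(0)
transfer = rng.normal(size=(16, 64))   # stand-in for the CASSI transfer function
spectrum = rng.normal(size=64)
measurement = transfer @ spectrum
loss_perfect = two_term_loss(spectrum, spectrum, measurement, transfer)
```

    The re-projection term lets the network be penalized even where no reference spectrum is available, which is one common motivation for such a second term.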
    11866-12
    Author(s): Evgeny A. Semenishchev, Viacheslav V. Voronin, Aleksander A. Zelensky, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Andrey Alepko, Moscow State Univ. of Technology (Russian Federation); Sos S. Agaian, College of Staten Island (United States)
    On demand
    The use of data obtained by a group of sensors requires the formation of parallel channels. Each separate channel requires additional computing resources and the assignment of time slots (timing) in a single-processor analysis system. Forming a decision rule, and subsequent decision-making based on such data, requires a combined inter-block criterion. This criterion should consider both the possible intersection of the data and their discrepancies associated with the use of different parameters when processing the same data. Combining the data reduces the computational cost at the decision-making stage, which improves the efficiency of post-processing and visual control systems. When using combined stationary systems, it is possible to create template fields that allow transformation matrices to be formed for a specific space. If fixed cameras or systems combined in a single body cannot be used, forming stitched images becomes complicated. Combining data into a single information field also increases the operator's efficiency, allowing the entire process to be analyzed as a whole rather than as scattered parts. The paper proposes a technique for forming a stitched thermal image based on combined data analysis. To form the anchor points for stitching, primary analysis methods with combined processing parameters are used. Images obtained from the thermal imaging cameras are pre-processed by filtering. The method used is based on a multicriteria function: it automatically divides the image into regions (boundaries, highly detailed areas, and locally stationary areas) to reduce noise while preserving the transition boundaries. To increase the processing speed, a simplification algorithm is applied while maintaining the shapes and geometry of objects. This operation includes absorbing small objects and averaging over the ranges of the histograms of color gradients. The analysis of local features and the formation of anchor points are based on correlation analysis. As a method of non-linear change of color balance, modified alpha-rooting methods are used. As test data, a series of images of one object obtained from different fixation points is used: visible (camera with a resolution of 1920 x 1080 pixels, 8-bit color depth) and far-infrared (thermal images with a resolution of 320 x 240 pixels, grayscale). The images have at least a 40% overlap of the object. Applications both in industrial production and in the analysis of objects in open areas are considered.
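    For readers unfamiliar with alpha-rooting, the classical (unmodified) form of the enhancement can be sketched as below; the paper uses modified variants, so this is only the baseline idea, and the function name and test image are illustrative.

```python
import numpy as np

def alpha_rooting(image, alpha=0.9):
    """Classical alpha-rooting enhancement: keep the Fourier phase and
    raise each spectral magnitude to the power alpha (0 < alpha < 1
    boosts relative high-frequency content).  Returns the real part of
    the inverse FFT."""
    spectrum = np.fft.fft2(np.asarray(image, dtype=float))
    magnitude = np.abs(spectrum)
    # Multiply by |F|^(alpha-1); guard the zero-magnitude bins.
    enhanced = spectrum * np.where(magnitude > 0, magnitude ** (alpha - 1.0), 0.0)
    return np.real(np.fft.ifft2(enhanced))

# Sanity check: alpha = 1 reproduces the input image.
img = np.outer(np.arange(8, dtype=float), np.ones(8))
out = alpha_rooting(img, alpha=1.0)
```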
    Modelling and Simulation
    11866-13
    Author(s): José Pérez, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany); Dov Steiner, IARD Sensing Solutions Ltd. (Israel); Stefan Keßler, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Model-based performance assessment is a valuable approach in the process of designing or comparing electro-optical and infrared imagers since it alleviates the need for expensive field measurement campaigns. TRM4 serves this purpose and is primarily used to calculate the range performance based on parameters of the imaging system and the environmental conditions. It features a validated approach to consider aliasing in the performance assessment of sampled imagers. This paper highlights new features and major changes in TRM4.v3, which is to be released in autumn 2021. TRM4.v3 includes the calculation of an image quality metric based on the National Imagery Interpretability Rating Scale (NIIRS). The NIIRS value computation is based on the latest version of the General Image Quality Equation. This extends the performance assessment capability of TRM4 in particular to imagers used for aerial imaging. The three-dimensional target modelling was revised to cope with a wider range of scenarios: from ground imaging of aerial targets against a sky background to aerial imaging of ground targets, including ground-to-ground imaging. For imagers working in the visible to the SWIR spectral range, TRM4.v3 provides not only an improved comparison basis between lab measurements and modelling, but also allows direct integration of measured device data. This is achieved by introducing and computing (in analogy to the Minimum Temperature Difference Perceived used for thermal imagers) the so-called Minimum Contrast Perceived (MCP). This device figure of merit is similar to the Minimum Resolvable Contrast (MRC) but also applicable at frequencies above Nyquist frequency. Using measured MCP or MRC data, range performance can be calculated for devices such as cameras, telescopic sights and night vision goggles. In addition, the intensified camera module introduced in a previous publication was further elaborated and a comparison to laboratory measurement results is presented. 
Lastly, the graphical user interface was improved to provide a better user experience. Specifically, interactive user assistance in the form of tooltips was introduced.
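    As an illustration of the kind of image quality metric mentioned above, a GIQE-4-style NIIRS estimate can be written down as follows. The coefficients shown are the commonly quoted GIQE-4 values for one branch of the equation; TRM4.v3 uses the latest GIQE version, so treat this only as a sketch of the functional form, with all parameter names and defaults being assumptions.

```python
import math

def giqe4_niirs(gsd_in, rer, h=1.0, g_over_snr=0.0, a=3.32, b=1.559):
    """Illustrative GIQE-4-style NIIRS estimate.
    gsd_in: ground sample distance in inches (geometric mean);
    rer: relative edge response; h: edge overshoot; g_over_snr: noise
    gain divided by SNR.  Finer GSD and higher RER raise the score;
    overshoot and noise lower it."""
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g_over_snr)

# Halving the GSD (finer sampling) raises the predicted interpretability.
coarse = giqe4_niirs(gsd_in=12.0, rer=0.9)
fine = giqe4_niirs(gsd_in=6.0, rer=0.9)
```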
    11866-14
    Author(s): Uwe Adomeit, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Triangle Orientation Discrimination (TOD), developed by TNO Human Factors, and Minimum Temperature Difference Perceived (MTDP), developed by Fraunhofer IOSB, are competing measurement methods for the assessment of well-sampled and undersampled thermal imagers. Key differences between the two methods are the targets, bars for MTDP and equilateral triangles for TOD, and the measurement methodology: MTDP is based on a threshold measurement, whereas TOD uses a psychophysical approach. All advantages and disadvantages of the methods trace back to these differences. The European Computer Model for Optronic System Performance Prediction (ECOMOS) includes range performance assessment according to both methods. This triggered work at Fraunhofer IOSB on comparative TOD and MTDP measurements. The idea was to check whether TOD and MTDP curves coincide when the two target-descriptive parameters, reciprocal angular subtense (one over triangle size expressed in angular units) and spatial frequency, are converted into each other using a conversion factor or function. Surprisingly, the literature does not include such a measurement-based comparison to date. Extending IOSB's existing MTDP set-up with triangle targets and the associated turntable and shutter enabled the comparative measurements. The applied TOD measurement process follows the guidelines found in the literature with some necessary adaptations. Both measurements used the same components (blackbody, collimator, monitor, etc.) except for the targets, and the trained MTDP observer also performed the TOD measurements; thus only the methods themselves should cause differences in the results. Four thermal imagers with different degrees of undersampling (MTF at Nyquist frequency of about 8 %, 14 %, 32 %, and 73 %) are the basis for the comparison. Their measurements allowed a standard target for triangles to be derived according to the process known from target acquisition assessment. These calculations result in 1.5±0.2 line pairs on target. Multiplying the reciprocal angular subtense by this factor gives corresponding MTDP and TOD curves when TOD is based on 62.5 % instead of the standard 75 % probability. 62.5 % corrected for chance corresponds to 50 % probability and is thus consistent with the threshold assumption of the MTDP. Deviations occur when the reciprocal angular subtense is near the cut-off, because of unaccounted sampling effects. The proposed way to overcome this is to normalize spatial frequency and reciprocal angular subtense with the full width at half maximum of the camera line spread function. A sigmoidal transition function describes the resulting connection. This function could be valid for all thermal imagers, as indicated by the assessment of two additional ones. However, as the assessment is based on only six thermal imagers and one observer, further comparative measurements by a larger number of observers or, alternatively, modeling are necessary.
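    The chance-correction step quoted above is standard for forced-choice tests and can be verified directly. TOD presents one of four triangle orientations, so pure guessing yields 25 % correct; the function below (an illustrative helper, not from the paper) maps a raw proportion correct to a chance-corrected probability.

```python
def correct_for_chance(p_raw, p_chance=0.25):
    """Map a raw proportion correct from an n-alternative forced-choice
    test to a chance-corrected probability.  TOD uses four triangle
    orientations, so guessing alone yields p_chance = 0.25."""
    return (p_raw - p_chance) / (1.0 - p_chance)

# The 62.5 % TOD level quoted above corresponds to a 50 % threshold,
# matching the MTDP threshold assumption.
p = correct_for_chance(0.625)
```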
    11866-15
    Author(s): Duncan L. Hickman, Tektonex Ltd. (United Kingdom)
    On demand
    Target detection is a fundamental task within many EO/IR systems, and it is generally modelled either as an unresolved or a fully resolved imaging problem. However, the transition between these two situations has received less attention, despite being important for applications where the sensor and target approach each other from a long distance, such as a forward-looking sensor mounted on an aircraft or drone flying towards a ground target. In this paper, the transition from a point target to an extended target is considered, and methods for modelling the performance are reviewed for 2D detector arrays. For the determination of key system performance parameters such as the Signal to Noise Ratio (SNR), it is the amount of power imaged onto a specific pixel in the focal plane that matters. This incident power is determined by a number of geometrical and sensor design factors, including the size of the target image, the nature of the optical blur, and the position of the target relative to the centre of the pixel element. For a sensor-target engagement, one or more of these parameters can vary with a time constant of the order of the frame time, and this introduces a degree of randomness into the detection performance. To model this interactive process, a sampling efficiency term is introduced that can be used to correct the widely used range detection equation. It is shown that the actual sampling efficiency varies as a function of target range, image blur conditions, and the target's position, which can result in large fluctuations in the detected signal. Several examples illustrate the importance of the sampling efficiency correction, and an approach is presented for including it in system-level models.
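    The dependence of the collected power on blur and sub-pixel target position can be illustrated with a Gaussian blur model. The sketch below (an assumption for illustration; the paper's sampling efficiency term is more general) computes the fraction of a Gaussian spot's energy landing on a single square pixel, showing how a half-pixel shift of the target centroid reduces the collected signal.

```python
import math

def ensquared_energy(x0, y0, sigma, pixel=1.0):
    """Fraction of a Gaussian blur spot (std dev sigma, centred at
    (x0, y0) in pixel units relative to the pixel centre) collected by
    one pixel of side 'pixel' -- a simple stand-in for the sampling
    efficiency term.  Separable in x and y for a Gaussian."""
    def frac_1d(c):
        a = (c + pixel / 2.0) / (sigma * math.sqrt(2.0))
        b = (c - pixel / 2.0) / (sigma * math.sqrt(2.0))
        return 0.5 * (math.erf(a) - math.erf(b))
    return frac_1d(x0) * frac_1d(y0)

# A centred spot collects the most energy; a spot centred on a pixel
# corner collects markedly less -- the source of frame-to-frame signal
# fluctuation as the target image drifts across the array.
centred = ensquared_energy(0.0, 0.0, sigma=0.5)
corner = ensquared_energy(0.5, 0.5, sigma=0.5)
```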
    11866-16
    Author(s): Gregor Franz, Daniel Wegner, José Pérez, Stefan Kessler, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    11866-17
    Author(s): Daniel Wegner, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Air turbulence can be a major source of impairment for long-range imaging applications. There is great interest in the assessment of turbulence mitigation techniques based on machine learning models. In general, such models require large amounts of image data for robust training and validation. Experimental acquisition of image data in field trials is time-consuming, and environmental conditions such as time of day and weather cannot be specifically controlled. Several methods for turbulence simulation have been proposed in recent years. Many of these are based on phase screens or on modelled turbulent point spread functions (PSFs). Often, simple turbulence models such as the Kolmogorov or von Karman spectrum are used; these methods therefore cannot provide insight into the influence and relevance of other turbulence parameters such as the inner scale and a (non-)Kolmogorov power slope. In this work, a data-fitting procedure for determining turbulence model parameters from experimental data is shown, using the generalized modified von Karman spectrum (GMVKS). Differential tilt variances (DTV) are calculated from centroid displacements in video sequences of a recorded LED grid. The experimental data are then fitted to theoretical expressions for the DTV by numerical integration over the turbulence model. Image data were acquired in field trials on several days at the same location. A beam propagation method using Markov GMVKS phase screens with the determined model parameters is then used to generate a grid of PSF images that represent the degradation for different viewing angles. For validation, DTVs based on centroid displacements of the simulated PSFs are calculated and compared with the corresponding measured LED centroid displacements and with theory. Cumulative distribution functions of the model parameters for all recording dates are provided to show the diversity of turbulence conditions. These can be used as prior knowledge for future turbulence simulations to include various model parameters and hence different conditions of image degradation. Finally, the extensibility of the data-fitting approach to other turbulence spectra, e.g. anisotropic spectra, is discussed.
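    For orientation, a spectrum of the family discussed above can be sketched as below. With slope 11/3 and amplitude 0.033 this is the standard modified von Karman form; for other slopes the amplitude needs a slope-dependent normalisation, which is omitted here, so the generalized case is purely illustrative.

```python
import math

def gmvks(kappa, cn2, l0, L0, alpha=11.0 / 3.0, amplitude=0.033):
    """Sketch of a generalized modified von Karman refractive-index
    spectrum:  amplitude * Cn2 * exp(-kappa^2/km^2) / (kappa^2 + k0^2)^(alpha/2).
    kappa: spatial wavenumber [1/m]; cn2: structure constant;
    l0: inner scale [m]; L0: outer scale [m]; alpha: power slope."""
    km = 5.92 / l0            # inner-scale cutoff wavenumber
    k0 = 2.0 * math.pi / L0   # outer-scale wavenumber
    return (amplitude * cn2 * math.exp(-(kappa / km) ** 2)
            / (kappa ** 2 + k0 ** 2) ** (alpha / 2.0))

# In the inertial range the spectrum decays with wavenumber.
low = gmvks(10.0, cn2=1e-14, l0=0.005, L0=10.0)
high = gmvks(100.0, cn2=1e-14, l0=0.005, L0=10.0)
```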
    Systems and Applications I
    11866-20
    Towards cognitive IRST (Invited Paper)
    Author(s): Roberto Conte, Francesco Castaldo, Andrea Leoni, Flavio Stellino, Alessandro Andreoni, Luca Fortunato, Leonardo S.p.A. (Italy)
    On demand
    Cognitive radar systems adapt processing, receiver, and transmitted-waveform parameters by continuously learning from and interacting with the operational environment. IRST systems are passive; as such, no RF emission is involved. Nevertheless, the cognitive paradigm can be applied to passive sensors in order to optimize the choice of operational modes and the platform and processing parameters on the fly. A cognitive IRST, while enhancing the overall performance of the system, would also reduce the crew workload during the mission. In this paper, steps and challenges toward a cognitive IRST are described, along with a proof-of-concept example of improved tracking capabilities using reinforcement learning methods.
    11866-21
    Author(s): Christian Günther, Michael Henrichsen, Stefan Keßler, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Near-eye displays – displays positioned in close proximity to the observer’s eye – are a technology steadily gaining significance in industrial and defense applications, e.g. for augmented reality and digital night vision. In light of the increasing use of such displays and their ongoing technological development, a specialized measurement setup is designed as a basis for evaluating these types of displays as part of the optoelectronic imaging chain. We developed a prototype measurement setup to analyze different properties of near-eye displays, with our primary focus on the Modulation Transfer Function (MTF) as a first step. The setup consists of an imaging system with a high-resolution CMOS camera and a motorized positioning system. It is intended to run different measurement procedures semi-automatically, performing the desired measurements at specified points on the display. This paper presents a comparison between different MTF measurement methods in terms of their applicability to different pixel structures. As a first step, the measurement setup’s imaging capabilities are determined using a slanted edge target. A commercial virtual reality headset is then used as a sample display to test and compare different standard MTF measurement methods, such as the slanted edge or bar target method. The results are discussed with the goal of finding the best measurement procedures for our setup.
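    The core of the slanted-edge method mentioned above can be reduced to a short sketch: differentiate the edge spread function (ESF) to get the line spread function, then take the normalised magnitude of its Fourier transform. The full method also needs edge-angle estimation, projection, and binning, which are omitted here; the function name is illustrative.

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF estimate from a (super-resolved) edge spread function:
    differentiate to obtain the line spread function, then take the
    normalised magnitude of its Fourier transform."""
    lsf = np.diff(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# A perfectly sharp edge has unit MTF at every frequency: its LSF is a
# single-sample impulse, whose spectrum is flat.
sharp_edge = np.concatenate([np.zeros(32), np.ones(32)])
mtf = mtf_from_esf(sharp_edge)
```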
    11866-22
    Author(s): Stuart Jackson, Raymond M. Sova, Michael E. Thomas, Johns Hopkins Univ. Applied Physics Lab., LLC (United States)
    On demand
    Optical window characterization is performed with a CO2 laser heating the material to understand the optical effects, thermal effects, and the temperature dependence of the index of refraction. Distortions in the optical window caused by operating in challenging aerothermal environments can impact an imager’s performance. Uneven heating of the window induces a temperature gradient which, coupled with the temperature dependence of the refractive index, causes a flat sapphire window to act as an imperfect lens. The experimental capability allows multiple sensors and diagnostic equipment to collect synchronized data. A long-wave infrared (LWIR) camera images the sample’s front and back surfaces to measure temperatures and temperature gradients. A transmitted laser probe beam is captured simultaneously by a visible imager and a wavefront sensor. The visible imager captures how a point source appears when observed through the window. A transmitted wavefront is reconstructed from the wavefront sensor data. The reconstructed wavefront includes effects from both dn/dT and mechanical deformation of the window. Using the reconstructed wavefront and the imager optics in Zemax, the point spread function (PSF) of the imager looking through the heated window is generated and compared with the experimentally measured PSF.
    Systems and Applications II
    11866-23
    Author(s): Aude Martin, Luc Leviandier, Thales Research & Technology (France); Matthieu Boffety, Lab. Charles Fabry (France), Institut d'Optique Graduate School (France), Univ. Paris-Sud (France); Hervé Sauer, Jan Dupont, Stéphane Roussel, Benjamin Le Teurnier, François Goudail, Lab. Charles Fabry (France); Vincent Noguier, Pierre Potet, New Imaging Technologies (France); Nicolas Vannier, Thales LAS France SAS (France); Patrick Goguillon, Thales Communications & Security S.A.S. (France)
    On demand
    Active polarimetric imaging systems are expected to bring substantial advantages for the detection and classification of materials in defense (decamouflage, route opening) and civilian (biomedical imaging, industrial control) applications because they reveal contrasts that do not appear in classic panchromatic or multispectral images. An active polarimetric imager illuminates the scene with light in a controlled state of polarization and analyzes the state of polarization of the light backscattered from the scene. In a previous project, an imager fully agile in polarization states was developed; it showed that, for most outdoor scenarios encountered, the Mueller matrices of the scenes are nearly diagonal, corresponding to mainly depolarizing effects. It also highlighted the importance of image normalization for revealing polarimetric contrasts. We were thus able to show that, for the detection of manufactured objects in a natural (plant or mineral) environment, orthogonal state contrast (OSC) images leading to depolarization coefficients provide the most relevant information. We report here on the development of a SWIR bistatic imager with controlled, arbitrary emitted polarization states and the simultaneous acquisition of the co-polarized and cross-polarized state images on two sensors at the receiver, allowing the acquisition of real-time OSC videos of dynamic scenes. It has been operated during field trials aimed at the detection of man-made objects (arms, vehicles) in complex scenes under various atmospheric conditions and at ranges up to a few hundred meters. In this talk, we will quantitatively discuss the enhanced conspicuity of different objects, possibly hidden, in complex scenes at distances ranging from 70 m to 600 m for various emitted polarization states (horizontal, +45°, and left circular).
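    The orthogonal state contrast image referred to above is a per-pixel normalised difference of the two simultaneously acquired channels. A minimal sketch (function name and the small epsilon guard are illustrative choices):

```python
import numpy as np

def orthogonal_state_contrast(i_co, i_cross, eps=1e-12):
    """Orthogonal state contrast (OSC) image from co-polarized and
    cross-polarized intensity images, normalised per pixel so that
    depolarizing and polarization-maintaining surfaces separate
    regardless of overall brightness."""
    i_co = np.asarray(i_co, dtype=float)
    i_cross = np.asarray(i_cross, dtype=float)
    return (i_co - i_cross) / (i_co + i_cross + eps)

# A fully depolarizing pixel (equal co/cross returns) gives OSC ~ 0;
# a polarization-maintaining one gives OSC near 1.
osc = orthogonal_state_contrast([10.0, 10.0], [10.0, 0.0])
```

    The normalisation is what makes the contrast robust to illumination and reflectivity variations, which is the image-normalisation point highlighted in the abstract.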
    11866-24
    Author(s): Andreas Peckhaus, Patrick Kuhne, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany); Maike Neuland, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany); Thomas Hall, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany); Carsten Pargmann, Hochschule Heilbronn (Germany); Frank Duschek, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
    On demand
    The operation of lasers in free space involves the potential risk of unintentionally exposing the human eye and skin to radiation. In addition to direct exposure, indirect scattered radiation of high-power lasers may pose a threat to operators, working personnel, and third parties. Hazard assessments are usually performed based on laser safety standards. However, these standards would have to be extended for outdoor environments and therefore it is advisable to substantiate models and safety calculations with measurements of the absolute scattered radiant flux under realistic conditions. For the quantification of scattered radiation, a radiometric sensor has been developed. The sensor consists of an optical, electronic, and mechanical unit. Two realizations of the optical detection unit with a side-on photomultiplier (PMT) and a photodiode amplifier (PDA) have been built according to German safety policies. The different detector types facilitate the detection of scattered radiation over a wide power range. The electronic unit includes the data acquisition and processing of the optical detection unit and peripheral devices (i.e. environmental sensors and GPS module). A lock-in amplifier is used to reduce the contribution of background radiation. The optical and electronic units are housed separately in a weather-resistant case on a tripod and a mobile container, respectively. Radiometric calibration is performed for each optical detection unit. The calibration involves a two-step procedure allowing for a direct conversion of the output voltage of the lock-in amplifier into an absolute scattered power considering the detector area and collection solid angle of the optical detection unit. 
Goniometer-based reflection measurements of solid surface samples are used to characterize the performance of the optical detection unit in terms of dynamic range, background-noise influence, accuracy, and repeatability, and they contribute to a better understanding of the sensor for future field deployment.
    11866-26
    Author(s): Thai Luu Van, Ban Nguyen Dang, Thuan Vu Duc, Viettel High Technology Industry Corp. (Vietnam)
    On demand
    Back-scanning step-and-stare imaging is a popular method for widening the coverage of Electro-Optical/Infra-Red (EO/IR) systems. While a high refresh rate of the thermal panoramic scene is a significant factor in the system's false-alarm rate for early warning, the precise motion of the steering optical component within such short times, of the order of microseconds, is a challenge, especially in high-disturbance environments. The synchronization between the optical movement and the integration point demands strict timing and fine motion during the speed-up, back-scan, and fly-back phases of the fast steering mirror (FSM). The paper presents a high-order trajectory profile design method for the FSM scan, thereby optimizing the required performance of system components such as the voice coil motor and also improving the overall quality of the fly-back process. Next, a feedback linearization controller and a linear state observer are used to control the system, both for trajectory tracking and for disturbance rejection. Finally, a prototype fast steering system is built to apply the new control algorithm. The simulation and experimental results show good agreement: the RMS Line of Sight (LOS) error remains below one hundred microradians during the back-scan process.
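    A common building block for such high-order profiles is a polynomial segment with zero velocity and acceleration at both endpoints, which avoids exciting the mirror's resonances at the transitions. The quintic below is a generic minimum-jerk-style sketch, not the paper's actual profile design; all names are illustrative.

```python
def quintic_profile(t, t_total, theta0, theta1):
    """Quintic trajectory between two mirror angles theta0 and theta1
    over duration t_total, with zero velocity and zero acceleration at
    both endpoints (s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5)."""
    tau = min(max(t / t_total, 0.0), 1.0)   # clamp to [0, 1]
    s = 10.0 * tau**3 - 15.0 * tau**4 + 6.0 * tau**5
    return theta0 + (theta1 - theta0) * s

# Endpoints are hit exactly; the midpoint sits halfway by symmetry.
start = quintic_profile(0.0, 1.0, 0.0, 1.0)
mid = quintic_profile(0.5, 1.0, 0.0, 1.0)
end = quintic_profile(1.0, 1.0, 0.0, 1.0)
```

    Smooth endpoint conditions of this kind are what keep the demanded voice-coil current bounded during the fast fly-back.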
    Systems and Applications III
    11866-28
    Author(s): Benjamin Le Teurnier, Institut d'Optique Graduate School (France); Ning Li, Northwestern Polytechnical Univ. (China); Xiaobo Li, Tianjin Univ. (China); Matthieu Boffety, Institut d'Optique Graduate School (France); Haofeng Hu, Tianjin Univ. (China); François Goudail, Institut d'Optique Graduate School (France)
    On demand
    Polarimetric imaging can be performed with a division-of-focal-plane (DoFP) camera. This type of camera uses a grid of superpixels, each consisting of four neighboring pixels with four polarizers of different orientations in front of them. Such a camera thus makes it possible to estimate the linear Stokes vector in a single acquisition. Full-Stokes polarimetric imaging can be realized by adding a retarder in front of the DoFP camera and performing at least two acquisitions with two different retarder orientations. The effective retardance of the retarder depends on several parameters, such as temperature and wavelength, which are not always controlled when using such a camera in the field. The retardance may therefore not be known precisely, and using a retardance value different from the true one leads to a bias in estimating the Stokes parameter S3, which contains the information about circular polarization. This bias may become greater than the estimation standard deviation due to noise and thus have a significant impact on the estimation. We demonstrate that, thanks to measurement redundancy, it is possible to calibrate this retardance directly from the measurements, provided that three acquisitions instead of two are performed and the signal-to-noise ratio is sufficient. This autocalibration totally cancels the bias and yields a Stokes vector estimation variance identical to that obtained with the true value of the retardance. We study the practical conditions under which this method can be applied, perform experimental validation of its performance, and propose a criterion to decide, based on the acquired measurements, whether it can be applied.
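    The single-acquisition linear Stokes estimate from one DoFP superpixel follows directly from the four polarizer orientations (0°, 45°, 90°, 135°). A minimal sketch of that step (the retarder-based S3 estimation and the paper's autocalibration are not shown):

```python
def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from one DoFP superpixel: four neighbour
    pixels behind polarizers at 0, 45, 90 and 135 degrees.
    S0 = total intensity, S1 = 0/90 difference, S2 = 45/135 difference."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

# Horizontally polarized light of unit intensity: all of it passes the
# 0-degree polarizer, half passes the 45/135 ones, none the 90-degree one.
s0, s1, s2 = linear_stokes(1.0, 0.5, 0.0, 0.5)
```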
    11866-29
    Author(s): Francesco Castaldo, Rossella Giordano, Marco D'Auria, Fabio Fargnoli, Alessandro Cenci, Leonardo S.p.A. (Italy)
    On demand
    IRST system development for aircraft is based on strong theoretical foundations in IR physics, on accurate management of each component, and on advanced signal and data processing. Although the expected performance can be analytically estimated using detector and optics data, atmospheric models, and algorithm simulations, verification in a real environment remains a must for assessing system behavior. In this paper, we propose an IRST product cycle named M3T which guides system development up to the final desired performance. The process goes from theory and models to the gathering of real data during flight trials, which are used to tune the signal processing routines and test the system from all angles. The labeling and organisation of recorded data, the calculation of metrics, and the design of tools for replicating the real system behavior on the ground all contribute to minimizing the number of flights necessary to reach the requested level of performance. Moreover, the approach described in this paper can be tailored to user needs, driving a proactive collaboration between industry and customers.
    Laser Sensing
    Session Chair: Ove Steinvall, FOI-Swedish Defence Research Agency (Sweden)
    11866-32
    Author(s): Ove Steinvall, FOI-Swedish Defence Research Agency (Sweden)
    On demand
    Unmanned aerial vehicles have become an increasing threat in both civilian and military arenas. While military UAVs are often relatively large and complex, the civilian hobby market is characterized by small, cheap, and simple systems with the capacity to stream high-definition video, carry a variety of other sensors, and transport critical goods (e.g. food or medicine) to hard-to-reach places. The criminal world has quickly realized how UAVs can be used to smuggle weapons or drugs, for example. Militarily, UAVs are established for reconnaissance, fire control, electronic warfare operations, etc. Laser-guided weapons delivered from a UAV are an example of a widely used system for precision operations in recent conflicts. This paper examines and summarizes various laser functions and their role in detecting, recognizing, tracking, and combating a UAV. The laser can be used as a sensor supporting others, such as radar or IR, to detect, recognize, and track the UAV, and it can also dazzle or destroy the UAV's optical sensors. A laser may also be used to sense the atmospheric attenuation and turbulence along slant paths, which are critical to the performance of a high-power laser weapon aimed at destroying the UAV. Several of these functions can be combined using a common laser and telescope. This is Part 1 of the paper, which deals with the laser as a sensor for the detection, tracking, and recognition of UAVs.
    11866-33
    Author(s): Marcus Hammer, Björn Borgmann, Marcus Hebel, Michael Arens, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    Sensor-based monitoring of the surroundings of civilian vehicles is primarily relevant for driver assistance in road traffic, whereas in military vehicles, far-reaching reconnaissance of the environment is crucial for accomplishing the respective mission. Modern military vehicles are typically equipped with electro-optical sensor systems for such observation or surveillance purposes. However, especially when the line-of-sight to the onward route is obscured or visibility conditions are generally limited, more enhanced methods for reconnaissance are needed. The obvious benefit of micro-drones (UAVs) for remote reconnaissance is well known. The spatial mobility of UAVs can provide additional information that cannot be obtained on the vehicle itself. For example, the UAV could keep a fixed position in front of and above the vehicle to gather information about the area ahead, or it could fly above or around obstacles to clear hidden areas. In a military context, this is usually referred to as manned-unmanned teaming (MUM-T). In this paper, we propose the use of vehicle-based electro-optical sensors as an alternative way of automatically controlling (cooperative) UAVs in the vehicle’s vicinity. In its most automated form, the external control of the UAV only requires a 3D nominal position relative to the vehicle or in absolute geocoordinates. The flight path there and the maintaining of this position, including obstacle avoidance, are automatically calculated on board the vehicle and permanently communicated to the UAV as control commands. We show first results of an implementation of this approach using 360° scanning LiDAR sensors mounted on a mobile sensor unit. The control loop of detection, tracking, and guidance of a cooperative UAV in the local environment is demonstrated by two experiments: the automatic LiDAR-controlled navigation of a UAV from a starting point A to a destination point B, with and without an obstacle between A and B.
The obstacle in the direct path is detected and an alternative flight route is calculated and used.
    11866-34
    Author(s): Enno Peters, Jendrik Schmidt, Matthias Mischung, Susanne Wollgarten, David Brandt, Marco Berger, David Heuskin, Maurice Stephan, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
    On demand
    Two gated-viewing instruments of different design but similar mean optical power were compared during a field test. The TRAGVIS sensor is an experimental, scientific development designed for the particular needs of maritime search and rescue operations. The instrument uses pulsed VCSELs in the NIR and a CMOS camera in multi-integration mode. Being designed for distances < 400 m, it uses a fixed focal length (wide angular FOV of ca. 9°), a high repetition rate, and a low pulse energy. The MODAR is a commercial multi-sensor platform comprising a gated-viewing instrument designed for security operations (e.g. police) both at sea and on land. Aiming at distances up to several kilometers, both its camera and laser illumination are equipped with zoom optics, and the repetition rate is low while the pulse energy is high. In contrast to TRAGVIS, an image intensifier is used. TRAGVIS and MODAR were compared in terms of signal-to-noise ratio (SNR) and image contrast using Lambertian reflectors at different distances. TRAGVIS was found to perform better than MODAR at distances < 350 m, but its performance decreases with distance while MODAR's stays constant as a result of the laser and camera zoom. When used in ungated (continuous exposure) mode, TRAGVIS shows a > 5 times larger SNR than in gated mode, and an almost one order of magnitude larger SNR than MODAR owing to the absence of an image intensifier. This demonstrates the instrument's ability to be used both for gated viewing and in a simple active-illumination mode. However, for the same reason (the image intensifier), MODAR's shutter suppression, which is crucial for reducing the back-scatter signal and thus for vision enhancement, was found to be at least 5-6 times better than that of TRAGVIS.
    11866-35
    Author(s): Jendrik Schmidt, Matthias Mischung, Enno Peters, Susanne Wollgarten, Maurice Stephan, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
    On demand
    Maritime search and rescue (SAR) operations are highly affected by harsh environmental conditions and darkness (night-time operation). Especially in low-visibility, high-humidity scenarios such as fog, mist, or sea spray, gated viewing offers an active-imaging solution to effectively suppress atmospheric back-scatter and enhance target contrast. The presented TRAGVIS gated-viewing system is designed to meet the needs of SAR operations: a detection range of at least 185 m at a minimum FOV of 7°×6°. It operates in the NIR at an emission wavelength of 804 nm, combining a high-repetition-rate VCSEL illuminator with an accumulation-mode CMOS camera. The performance of the demonstrator across a wide range of fog events of different visibility and different sets of system parameters was evaluated by analysing the target signal, contrast, and signal-to-noise ratio (SNR) as a function of the optical depth (OD), which was measured by an atmospheric visibility sensor. The back-scattered signal (suppressed by the camera shutter) exceeds the target signal of a 41%-reflectivity target at OD > 4 and, together with the low target signal, was found to be the major reason for the drop in contrast after vision enhancement up to OD ≈ 3. A limit of the system at approximately OD ≈ 5.3 is estimated, where the image still shows a contrast of 10% but at an SNR of only ∼2.2. The highest potential for improvement was found in an optimised placement of the illuminator with respect to the receiver and scene geometry.
    11866-36
    Author(s): Egil Bae, Norwegian Defence Research Establishment (Norway)
    On demand
    A line-scanning ladar can generate detailed three-dimensional images of a scene, so-called point clouds, by emitting individual laser pulses in quick succession in various directions and measuring the arrival time of the return pulses. In a typical mode of operation, the pulses are emitted along horizontal lines, starting from the bottom of the field of view, with the elevation angle gradually increasing for subsequent scanning lines. This paper addresses an inherent problem with object recognition in point clouds acquired by a line-scanning ladar. If some of the scene objects are moving, their position changes slightly between each sweep of a horizontal scanning line, which deforms the shape of the moving objects in the resulting point cloud. The problem becomes more severe for wide view angles and faster-moving objects. An object recognition algorithm is proposed that corrects for shape deformations caused by the delay between individual point measurements; in addition, the algorithm is able to estimate the velocity of the recognized object. The algorithm matches observed objects against a 3D model of the object of interest by optimally aligning them with each other while simultaneously estimating the shape deformation caused by motion during acquisition. If the observed object and the 3D model align sufficiently well, according to a recognition confidence measure, the observed object is regarded as recognized and its velocity is inferred from the estimated shape deformation. To solve the underlying optimization problem, the Iterative Closest Point (ICP) algorithm is modified by incorporating an additional substep in which the shape deformation, and thereby the corresponding velocity, is updated incrementally in each iteration. Experiments on simulated and real-world data indicate that moving objects can be recognized with high confidence and their velocities estimated with high accuracy.
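    The idea of alternating correspondence search with a motion-deformation update can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: it models the deformation as a constant velocity plus translation (rotation is omitted for brevity), with each point carrying its acquisition time.

```python
import numpy as np

def icp_with_velocity(obs, times, model, iters=20):
    """Align scan-deformed points to a 3D model while estimating velocity.

    obs:   (N, 3) observed points, deformed by object motion during the scan
    times: (N,)   per-point acquisition times relative to scan start
    model: (M, 3) reference shape
    Returns (translation, velocity) estimates."""
    t_shift = np.zeros(3)
    v = np.zeros(3)
    for _ in range(iters):
        # Undo the current motion estimate before matching.
        corrected = obs - t_shift - v * times[:, None]
        # Nearest-neighbour correspondences (brute force for clarity).
        d = np.linalg.norm(corrected[:, None, :] - model[None, :, :], axis=2)
        nn = model[d.argmin(axis=1)]
        # Residual per point is modelled as t_shift + v * t_i; solve the
        # linear least-squares problem for both unknowns at once.
        resid = obs - nn
        A = np.column_stack([np.ones_like(times), times])
        sol, *_ = np.linalg.lstsq(A, resid, rcond=None)
        t_shift, v = sol[0], sol[1]
    return t_shift, v
```

The alignment residual after convergence can serve as the recognition confidence measure mentioned in the abstract.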
    11866-37
    Author(s): Farzin Amzajerdian, NASA Langley Research Ctr. (United States); Diego F. Pierrottet, Coherent Applications, Inc. (United States); Aram Gragossian, Glenn D. Hines, Bruce W. Barnes, Lance L. Proctor, Nathan A. Dostart, NASA Langley Research Ctr. (United States)
    On demand
    The operation of a coherent Doppler lidar developed by NASA for missions to planetary bodies, the Navigation Doppler Lidar (NDL), is analyzed and its projected performance is described. The lidar transmits three laser beams in different but fixed directions and measures line-of-sight range and velocity along each beam. The three line-of-sight measurements are then combined to determine the three components of the vehicle velocity vector and its altitude relative to the ground. Operating from altitudes above five kilometers, the NDL provides velocity and range data with a precision of a few cm/s and a few meters, respectively, depending on the vehicle dynamics. This paper explains the sources of measurement error and analyzes the impact of vehicle dynamics on lidar performance.
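    Combining the three line-of-sight measurements into a velocity vector amounts to solving a small linear system, since each measured Doppler speed is the projection of the vehicle velocity onto that beam's direction. The beam geometry below is an illustrative assumption, not the actual NDL layout.

```python
import numpy as np

# Three fixed beam unit vectors: 22 deg off nadir at azimuths 0/120/240 deg.
# Illustrative geometry only (not the real NDL beam angles).
az = np.radians([0.0, 120.0, 240.0])
off = np.radians(22.0)
beams = np.column_stack([
    np.sin(off) * np.cos(az),
    np.sin(off) * np.sin(az),
    -np.cos(off) * np.ones(3),
])

def velocity_from_los(los_speeds):
    """Recover the 3-component vehicle velocity v from the three
    line-of-sight Doppler measurements, which satisfy los = beams @ v."""
    return np.linalg.solve(beams, np.asarray(los_speeds))
```

The same matrix solve, with ranges instead of speeds, yields the altitude geometry; in practice the conditioning of `beams` determines how measurement noise maps into the velocity estimate.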
    Poster Session
    11866-38
    Author(s): Teemu Kääriäinen, Mikhail V. Mekhrengin, Timo Donsberg, VTT Technical Research Ctr. of Finland Ltd. (Finland)
    On demand
    There is a need for sensor technologies capable of identifying illegal border crossings through foliage. In this work, we study the use of a novel active hyperspectral sensor for the remote identification of persons and vehicles through foliage. The sensor uses a tunable broadband near-infrared supercontinuum light source whose wavelength transmission band is tuned using a microelectromechanical Fabry-Perot interferometer. Real-time spectral detection algorithms identify targets based on the spectral content of the back-scattered light. Preliminary results from both laboratory and outdoor measurements are presented.
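    Matching back-scattered spectra against reference signatures can be illustrated with the standard spectral angle mapper, which is insensitive to overall brightness; this is a generic technique shown for illustration, not necessarily the detection algorithm used by the sensor described.

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle (radians) between a measured spectrum and a reference
    signature; a small angle indicates a likely match regardless of
    illumination level, since the angle ignores vector magnitude."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(spectrum, library, threshold=0.1):
    """Return the best-matching library signature name, or None if no
    signature is within the angular threshold."""
    name = min(library, key=lambda k: spectral_angle(spectrum, library[k]))
    return name if spectral_angle(spectrum, library[name]) < threshold else None
```

Per-band values here would come from the back-scatter measured at each Fabry-Perot tuning step.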
    11866-39
    Author(s): Paweł Hłosta, Wojciech Olpinski, Waldemar Świderski, Military Institute of Armament Technology (Poland)
    On demand
    Due to difficult operating conditions such as high pressure, high temperature, and chemically aggressive propellant combustion products, the inner surface of a gun barrel bore is exposed to wear and damage. The paper presents an analysis, using both numerical simulation and experimental tests, conducted to assess the capability of non-destructive testing of gun barrels using eddy current thermography. The obtained results confirm that defects on the barrel bore surface can be detected by this method.
    11866-40
    Author(s): Issac Niwas Swamidoss, Abdulla Al Saadi Al Mansoori, Abdulrahman Almarzooqi, Slim Sayadi, Tawazun Technology and Innovation (United Arab Emirates)
    On demand
    One of the main challenges in video analysis is the recovery of the original video frames from footage that has been annotated and text-marked in a graphical user interface (GUI) tool, particularly when the original video is not available. Removing annotations from video frames is essential for any kind of algorithm development, such as noise removal, dehazing, object detection, recognition, and identification in video, tracking specific objects in the maritime environment, and further testing. In this work, we developed an algorithm for removing all annotations from any portion of a video frame without affecting the integrity of the original video content. We present a novel technique to remove unnecessary annotations and markers using a progressive switching median filter with wavelet thresholding. Experimental studies have shown that the annotation-free images generated by the proposed method can be used for the development of any basic algorithm.
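    The progressive replacement of flagged annotation pixels can be sketched as follows. This is a simplified illustration of the switching median idea (the annotation mask is assumed given, and the wavelet-thresholding stage is omitted), not the authors' implementation.

```python
import numpy as np

def inpaint_annotations(frame, mask, window=1):
    """Remove annotated pixels (mask == True) by progressively replacing
    each with the median of the valid neighbours in a (2*window+1)^2
    patch. Pixels fixed earlier in a pass become valid for later ones,
    which is what lets solid annotation blocks fill in from the edges."""
    out = frame.astype(float).copy()
    todo = mask.copy()
    h, w = frame.shape
    while todo.any():
        progressed = False
        for y, x in zip(*np.nonzero(todo)):
            y0, y1 = max(0, y - window), min(h, y + window + 1)
            x0, x1 = max(0, x - window), min(w, x + window + 1)
            patch = out[y0:y1, x0:x1]
            valid = ~todo[y0:y1, x0:x1]  # neighbours not (or no longer) flagged
            if valid.any():
                out[y, x] = np.median(patch[valid])
                todo[y, x] = False
                progressed = True
        if not progressed:  # flagged region with no valid neighbours at all
            break
    return out
```

A real pipeline would first detect the burned-in overlay (e.g. by colour or intensity outliers) to build the mask, then apply this filter per frame.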
    11866-42
    Author(s): Nadezhda D. Tolstoba, Maksim S. Gorelik, ITMO Univ. (Russian Federation)
    On demand
    Remote sensing is widely used in various spheres of human activity, such as surface monitoring, geodetic surveys, and agriculture. A hyperspectral camera provides complete information about the spatial and spectral structure of the observed object, and unmanned aerial vehicles are used for regular monitoring. The aim of this work is to develop a hyperspectral camera design that allows the system to be installed on a quadcopter.
    11866-43
    Author(s): Valentina Hristova, Space Research and Technology Institute (Bulgaria), Todor Kableshkov Univ. of Transport (Bulgaria); Galina Cherneva, Todor Kableshkov Univ. of Transport (Bulgaria); Denitsa Borisova, Space Research and Technology Institute (Bulgaria)
    On demand
    A characteristic feature of the current state of development of radio communication systems is the problem of increasing their security and resilience when transmitting information. Requirements for the secrecy of transmitted information are increasing for both military and civilian radio communication systems. Protecting information from unauthorized access and securing the connection rely on many different methods of hiding messages so that they are incomprehensible to an eavesdropper who has intercepted a hidden message. The paper presents a method for improved protection of information against unauthorized access by scrambling and descrambling on two levels. The first level operates directly on the primary signal, which carries the information in digital form. The second level controls the pseudo-random sequences, which change in time according to a given dependency defined by a code combination. The spectrum of the input information is expanded by switching among a number of pseudo-random sequences; the switching is controlled by another, similar sequence whose symbols last much longer. The proposed method offers advantages over already-known methods of protection against unauthorized access in spread-spectrum information transmission systems. Its disadvantage is the need for elements that coordinate and synchronize the information and control components of the system. The method can be applied to modern spread-spectrum telecommunication systems with high requirements for protecting information against unauthorized access. One possible application is in remote sensing, where the acquired data require protection against unauthorized access. This work is supported by the Bulgarian National Science Fund under contract number KP-06-M27/2 (КП-06-М27/2).
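    The two-level idea can be sketched with simple linear-feedback shift registers (LFSRs): a slow control sequence selects which fast pseudo-random sequence XORs each block of data bits, and since XOR is self-inverse, running the same operation with the same seeds descrambles. All register sizes, taps, and seeds here are illustrative assumptions, not parameters from the paper.

```python
def lfsr(seed, taps, n):
    """Output n bits (LSB first) from a 16-bit Fibonacci LFSR whose
    feedback bit is the XOR of the given tap positions."""
    state, out = seed, []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        out.append(state & 1)
        state = (state >> 1) | (fb << 15)
    return out

def scramble(bits, seeds, control_seed, chip_len=8):
    """Two-level scrambling sketch: a slow control LFSR picks which of
    several fast pseudo-random sequences XORs each chip_len-bit block.
    Applying the same function with the same seeds descrambles."""
    n_blocks = (len(bits) + chip_len - 1) // chip_len
    control = lfsr(control_seed, (0, 2, 3, 5), n_blocks)  # slow level
    out = []
    for b in range(n_blocks):
        pn = lfsr(seeds[control[b] % len(seeds)], (0, 1, 3, 12), chip_len)
        block = bits[b * chip_len:(b + 1) * chip_len]
        out.extend(d ^ p for d, p in zip(block, pn))
    return out
```

The synchronization elements mentioned as a disadvantage in the abstract correspond here to both ends agreeing on the seeds and on the block alignment.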
    Conference Chair
    Tektonex Ltd. (United Kingdom)
    Conference Chair
    Helge Bürsing
    Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
    Conference Chair
    Argo AI, LLC (United States)
    Conference Chair
    FOI-Swedish Defence Research Agency (Sweden)
    Program Committee
    Leonardo (Italy)
    Program Committee
    TNO Defence, Security and Safety (Netherlands)
    Program Committee
    AIM INFRAROT-MODULE GmbH (Germany)
    Program Committee
    TNO Defence, Security and Safety (Netherlands)
    Program Committee
    Bernd Eberle
    Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
    Program Committee
    Ben-Gurion Univ. of the Negev (Israel)
    Program Committee
    SELEX ES (United Kingdom)
    Program Committee
    Defence Research and Development Canada, Valcartier (Canada)
    Program Committee
    Gino Putrino
    The Univ. of Western Australia (Australia)
    Program Committee
    Ben-Gurion Univ. of the Negev (Israel)
    Program Committee
    Fraunhofer-Institut für Angewandte Festkörperphysik IAF (Germany)
    Program Committee
    Philip J. Soan
    Defence Science and Technology Lab. (United Kingdom)