The main application of military imaging systems is situational awareness: knowing who and what is in the vicinity and gaining an understanding of their behavior. Image analysis techniques support the key tasks that enable situational awareness: detection, tracking, classification, identification and behavior recognition of targets or objects, while limiting false alarms and missed detections. Artificial Intelligence and Machine Learning are increasingly used to assist in these tasks, as the amount of sensor data grows while fewer analysts and camera operators are available.

This conference will focus on technology development in artificial intelligence and machine learning techniques for automatic and machine-assisted image and video analysis in defense applications, including enhancement, target detection, classification/recognition, identification, tracking and threat assessment. Both model-based approaches and data-driven methods such as neural networks are considered. Sensors considered include EO/IR, SAR, and multi- and hyperspectral imagers.

As in civil applications, algorithms must be able to deal with noisy data and varying conditions. An additional challenge, compared to civilian/commercial applications, is that for defense applications only limited operational data are available for training, testing and evaluation. This is especially the case for event detection, where interesting events rarely occur. For defense applications, the technology should ideally be robust to adversarial examples, i.e., inputs that are intentionally designed to cause the model to make a mistake. The processing should also be able to detect, classify and identify camouflaged objects. Evaluation and performance prediction of these algorithms under varying circumstances is also part of this conference.
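To make the adversarial-example threat model concrete, the following is a minimal sketch of the well-known fast gradient sign method (FGSM) in PyTorch; the classifier, input tensor and step size eps are placeholder assumptions, and FGSM is only one of many attack styles studied in this area.

```python
# Minimal FGSM sketch: craft a perturbation that increases the
# classifier's loss, i.e. an adversarial example (placeholder model/eps).
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed gradient step, clamped back to a valid image range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```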

Original papers are solicited in, but not limited to, the following topical areas:

Image Analysis Techniques
  • automatic target detection, classification, recognition and identification
  • automatic tracking
  • computational imaging
  • image enhancement (denoising, super-resolution, filtering, etc.)
  • inverse problems
  • sensor fusion
  • colorization.

Artificial Intelligence and Machine Learning
  • machine learning and deep learning for image and video processing systems
  • transfer learning
  • alternate learning strategies such as semi-supervised learning and generative adversarial learning
  • hyper-parameter selection
  • the use of synthetic data for training
  • edge processing: low power (wattage) processing.

Robustness, Evaluation and Performance Prediction
  • robustness of algorithms to extended operating conditions
  • robustness of algorithms against adversarial examples
  • transparency and explainability of algorithms.

Defence Applications for these Types of Techniques
  • maritime situational awareness
  • unmanned sensor systems: UAVs, UGVs, UUVs
  • unattended sensors and systems
  • compound security and force protection
  • border protection
  • route clearance
  • reconnaissance and surveillance
  • vehicle situation awareness
  • route planning
  • improved visualization.
    Conference 11870

    Artificial Intelligence and Machine Learning in Defense Applications III

    • Remote Sensing Plenary Presentation I: Monday
    • Security+Defence Plenary Presentation
    • Remote Sensing Plenary Presentation II: Wednesday
    • Networking Session
    • Panel Discussion and Keynote Lecture: Laser Weapons and Lasers Used as Weapons Against Personnel
    • AI Applications I
    • AI Applications II
    • Simulations and Datasets
    • Tracking and Localisation
    • Detection and Classification
    • Poster Session
    Remote Sensing Plenary Presentation I: Monday
    Livestream: 13 September 2021 • 16:30 - 17:30 CEST
    11858-500
    Author(s): Pierluigi Silvestrin, European Space Research and Technology Ctr. (Netherlands)
    On demand | Presented Live 13 September 2021
    In recent years the Earth observation (EO) programmes of the European Space Agency (ESA) have been dramatically extended. They now include activities that cover the entire spectrum of the wide EO domain, encompassing both upstream and downstream developments, i.e. related to flight elements (e.g. sensors, satellites, supporting technologies) and to ground elements (e.g. operations, data exploitation, scientific applications and services for institutions, businesses and citizens). In the field of EO research missions, ESA continues the successful series of Earth Explorer (EE) missions. The latest additions to this series include missions under definition, namely Harmony (the tenth EE) and four candidates for the 11th EE: CAIRT (Changing Atmosphere InfraRed Tomography Explorer), Nitrosat (reactive nitrogen at the landscape scale), SEASTAR (ocean submesoscale dynamics and atmosphere-ocean processes), and WIVERN (Wind Velocity Radar Nephoscope). On the smaller programmatic scale of the Scout missions, ESA is also developing two new missions: ESP-MACCS (Earth System Processes Monitored in the Atmosphere by a Constellation of CubeSats) and HydroGNSS (hydrological climate variables from GNSS reflectometry). Another cubesat-scale mission of technological flavor, Φ-sat-2, is also being developed. Furthermore, in collaboration with NASA, ESA is defining a Mass change and Geosciences International Constellation (MAGIC) for monitoring gravity variations on a spatio-temporal scale that enables applications at the regional level, continuing - with vast enhancements - the successful series of gravity mapping missions flown in the last two decades. The key features of all these missions will be outlined, with emphasis on those relying on optical payloads. ESA is also developing a panoply of new missions for other European institutions, namely Eumetsat and the European Union, which will be briefly reviewed too. These operational-type missions rely on established EO techniques; nonetheless, some new technologies are applied to expand functional and performance envelopes. A brief résumé of their main features will be provided, with emphasis on the new Sentinel missions for the EU Copernicus programme.
    Security+Defence Plenary Presentation
    Livestream: 14 September 2021 • 09:00 - 10:00 CEST
    11868-500
    Author(s): Patrick R. Body, Tecnobit (Spain)
    On demand | Presented Live 14 September 2021
    Optronic systems for the defence market are available from the UV to the LWIR wavelengths, but the ideal band very much depends on the particular application and its environment. This lecture will cover some of the more important features of each type of optronic sensor and, using examples from the experience gained over many years of system development by Tecnobit for the airborne, naval and land sectors, suggest some broad recommendations.
    Remote Sensing Plenary Presentation II: Wednesday
    Livestream: 15 September 2021 • 09:00 - 10:00 CEST
    11858-600
    Author(s): Adriano Camps, Institut d'Estudis Espacials de Catalunya (Spain)
    On demand | Presented Live 15 September 2021
    Today, space is experiencing a revolution: from large space agencies, multimillion dollar budgets, and big satellite missions to spin-off companies, moderate budgets, and fleets of small satellites. Some have called this the “democratization” of space, in the sense that it is now more accessible than it was just a few years ago. To a large extent, this revolution has been fostered on one side by the standardization of the platforms’ mechanical interfaces, and on the other side by the technology developments coming from mobile communications. Standard platform mechanical interfaces have led to standard orbital deployers and new launching capabilities. The technology developed for cell phones has brought more computing resources with less power consumption and volume. Small satellites are used as pure technology demonstrators and for targeted scientific missions, mostly Earth Observation, some for Astronomy, and they are starting to enter the field of communications, as huge satellite constellations are now becoming feasible. In this lecture, the most widely used nano/microsat form factors and their main applications will be presented. Then, the main scientific Earth Observation and Astronomy missions suitable for SmallSats will be discussed, also in the context of the rising constellations of SmallSats for communication. Finally, the nanosat program at the Universitat Politècnica de Catalunya (UPC) will be introduced, and the results of the FSSCAT mission will be presented.
    Networking Session
    Livestream: 16 September 2021 • 09:00 - 10:30 CEST
    11870-700
    Panel Discussion and Keynote Lecture: Laser Weapons and Lasers Used as Weapons Against Personnel
    Livestream: 16 September 2021 • 14:30 - 16:00 CEST
    Welcome and Introduction
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Keynote Lecture:
    Smoke as protection against high energy laser effects
    Ric Schleijpen, TNO (Netherlands)

    Panel Discussion
    Moderator:
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Panelists:
    Ric Schleijpen, TNO (Netherlands)
    Robert J. Grasso, NASA Goddard Space Flight Ctr. (United States)

    Since their inception, lasers have become an omnipresent source on the battlefield, used in applications ranging from rangefinding and designation to remote sensing, countermeasures and weaponry. Hence, even a simple laser can be used to great effect as an anti-personnel weapon, capable of anything from simple visual disruption to complex target destruction. Given this omnipresence, how do we deal with lasers on the battlefield, and more specifically with lasers used as weapons? And what are the practical, technical, logistical, political, and ethical issues associated with them? Please join us for this exciting and potentially contentious discussion.
    11867-1
    Author(s): Ric H. M. A. Schleijpen, Sven Binsbergen, Amir L. Vosteen, Karin de Groot-Trouw, Denise Meuken, Alexander VanEijk, TNO (Netherlands)
    On demand | Presented Live 16 September 2021
    This paper discusses the use of smoke obscurants as countermeasures against high energy lasers (HEL). The potential success of the smoke does not depend only on the performance of the smoke itself: the transmission loss in the smoke is part of a chain of system components, including warning sensors, smoke launchers, etc. The core of the paper deals with experimental work on two research questions: does smoke attenuate an incoming HEL beam, and does the HEL affect the smoke itself? The experimental set-up with the TNO 30 kW HEL and the scale model for the smoke transmission path will be shown. Selected experimental results will be shown and discussed. Finally, we will compare the results to theoretical calculations and analyse the properties of an ideal HEL attenuation smoke.
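    For orientation, theoretical transmission calculations of this kind commonly start from the Beer-Lambert law; a minimal sketch under that assumption, with the extinction coefficient and path length as invented example values:

```python
import math

def smoke_transmission(extinction_per_m: float, path_m: float) -> float:
    """Beer-Lambert transmission through a homogeneous smoke cloud."""
    return math.exp(-extinction_per_m * path_m)

# Example: extinction 2.0 1/m over a 1.5 m path leaves ~5% of the power.
print(smoke_transmission(2.0, 1.5))  # ~0.0498
```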
    AI Applications I
    11870-1
    Author(s): Lars W. Sommer, Arne Schumann, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    The increased availability of unmanned aerial vehicles offers potential for numerous fields of application, but can also pose security and public safety threats. Thus, the demand for automated UAV detection systems to generate early warnings of possible threats is growing. Employing electro-optical imagery as a main modality in such systems allows direct interpretability by human operators and the straightforward applicability of deep learning based methods. Besides UAV detection, classifying the UAV type is an important task to categorize the potential threat. In this work, we propose a three-stage approach to UAV type classification in video data. In the first stage, we apply recent deep learning based detection methods to locate UAVs in each frame. We assess the impact of best practices for object detection models, such as recent backbone architectures and data augmentation techniques, in order to improve the detection accuracy. Next, tracks are generated for each UAV. For this purpose, we evaluate different tracking approaches, i.e., the Deep SORT and Intersection-over-Union trackers. Errors caused by the detection stage, as well as detections misclassified due to the similar appearance of different UAV types under specific perspectives, decrease the classification accuracy. To address these issues, we determine a UAV type confidence score based on the entire track, considering the confidence scores for single frames, the size of the corresponding detections and the maximum detection confidence score. We assess a number of different CNN based classification approaches by varying the backbone architecture and the input size to improve the classification accuracy on single frames. Furthermore, ablation experiments are conducted to analyze the impact of the UAV size on the classification accuracy. We perform our experiments on publicly available and self-recorded data, including several UAV types.
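    A sketch of track-level class-score fusion of the kind described; the authors' exact weighting is not given here, so the confidence- and area-weighted average below is an assumption:

```python
import numpy as np

def track_type_scores(frame_scores, det_confidences, box_areas):
    """Fuse per-frame UAV-type scores over a whole track.

    frame_scores:    (T, C) per-frame class scores for C UAV types
    det_confidences: (T,)   detection confidence per frame
    box_areas:       (T,)   detection box area per frame
    """
    w = np.asarray(det_confidences, float) * np.asarray(box_areas, float)
    w /= w.sum()                                 # normalize frame weights
    fused = (w[:, None] * np.asarray(frame_scores, float)).sum(axis=0)
    return fused / fused.sum()                   # track-level distribution
```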
    11870-4
    Author(s): Wyke Pereboom-Huizinga, Michel van Lier, Maarten Kruithof, Judith Dijk, TNO (Netherlands)
    On demand
    Early threat assessment of vessels is an important surveillance task during naval operations. Whether a vessel is a threat depends on a number of aspects, among them the vessel class, the closest point of approach (CPA), the speed and direction of the vessel, and the presence of possibly threatening items on board such as weapons. Currently, most of these aspects are observed by operators viewing the camera imagery, and whether a vessel is a potential threat depends on the final assessment of the operator. Automated analysis of electro-optical (EO) imagery for aspects of potential threats can support the operator during observation. This can relieve the operator from continuous watchkeeping and provide the tools for a better overview of possible threats in the surroundings during a surveillance task. In this work, we apply different processing algorithms, including detection, tracking and classification, to recorded multi-band EO imagery in a harbor environment with many small vessels. With the results we aim to automatically determine the vessel's CPA, the number of people on board and the presence of possibly threatening items on board the vessel. Hereby we show that our algorithms can support the operator in assessing whether a vessel poses a threat or not.
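    Of the threat aspects listed, the closest point of approach has a simple closed form under a constant-velocity assumption; a minimal sketch (coordinates and units are illustrative, not from the paper):

```python
import numpy as np

def closest_point_of_approach(rel_pos, rel_vel):
    """CPA time (s) and distance (m) for constant relative velocity."""
    r, v = np.asarray(rel_pos, float), np.asarray(rel_vel, float)
    v2 = v @ v
    t_cpa = 0.0 if v2 == 0.0 else max(0.0, -(r @ v) / v2)
    return t_cpa, float(np.linalg.norm(r + t_cpa * v))

# Vessel 1000 m east, 500 m north, closing at 10 and 5 m/s: CPA = 0 m.
print(closest_point_of_approach([1000.0, 500.0], [-10.0, -5.0]))
```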
    AI Applications II
    11870-5
    Author(s): Judith Dijk, Sebastiaan P. van den Broek, Richard J. M. den Hollander, Jan Baan, Johan-Martijn ten Hove, Dirk Oorbeek, TNO (Netherlands)
    On demand
    Surveillance is an important task during naval operations. This task can be performed with a combination of different sensors, including camera systems and radar. To obtain a consistent operational picture of possible threats in the vicinity of a ship, the information from the different sensors needs to be combined into one overview image, in which all information related to one object is assigned to that object. In this paper, we present a new dataset for maritime surveillance applications and show two examples of combining information from different sensors. We have recorded data with several camera systems, the automatic identification system (AIS) and radar in the Rotterdam harbor. From all sensors we can obtain tracking information for the different objects. We present a method to associate the tracks and describe how snippets of the ships in the cameras can be used to enrich the information on the objects. Next to that, we show the combined information from AIS and imagery.
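    Track association between sensors can be illustrated with a standard assignment formulation; this sketch (not the authors' method) uses SciPy's Hungarian solver on Euclidean distances in a common coordinate frame, with an arbitrary gating threshold:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(pos_a, pos_b, gate_m=50.0):
    """Match tracks from two sensors by minimum total distance.

    pos_a: (Na, 2), pos_b: (Nb, 2) track positions in a shared frame.
    Returns (i, j) index pairs whose distance passes the gate.
    """
    pos_a, pos_b = np.atleast_2d(pos_a), np.atleast_2d(pos_b)
    cost = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate_m]
```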
    11870-7
    Author(s): Xavier Dallaire, ImmerVision (Canada); Julie Buquet, ImmerVision Inc. (Canada), Univ. Laval (Canada); Patrice Roulet, Jocelyn Parent, Pierre Konen, ImmerVision (Canada); Jean-François Lalonde, Univ. Laval (Canada); Simon Thibault, Univ. Laval (Canada), ImmerVision (Canada)
    On demand
    The new generation of sUAS (small Unmanned Aircraft Systems) aims to extend the range of scenarios in which sense-and-avoid functionality and autonomous operation can be used. For the navigation cameras these systems rely on, a wide field of view can increase the coverage of the drone's surroundings, allowing an ideal flight path, optimal dynamic route planning and full situational awareness. The first part of this paper will discuss the trade-off space for camera hardware solutions to improve vision performance. Severe constraints on size and weight, a situation common to all sUAS components, compete with low-light capabilities and pixel resolution. The second part will explore the benefits and impacts of specific wide-angle lens designs and of wide-angle image rectification (dewarping) on deep-learning methods. We show that distortion can be used to bring more information from the scene and how this extra information can increase the accuracy of learning-based computer vision algorithms. Finally, we present a study that aims at estimating the link between optical design criteria degradation (MTF) and neural network accuracy in the context of wide-angle lenses, showing that higher MTF is not always linked to better results, thus helping to set better design targets for navigation lenses.
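    The rectification (dewarping) step discussed can be sketched with OpenCV's fisheye camera model; the intrinsics K and distortion coefficients D below are placeholders that would normally come from calibration:

```python
import cv2
import numpy as np

# Placeholder calibration for a wide-angle navigation camera.
K = np.array([[320.0, 0.0, 640.0], [0.0, 320.0, 480.0], [0.0, 0.0, 1.0]])
D = np.array([[0.1], [-0.05], [0.01], [0.0]])   # equidistant model coeffs

def dewarp(frame):
    """Undistort a wide-angle frame before feeding it to a detector."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```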
    11870-8
    Author(s): Jonathan Tucker, Joshua Haley, Brandon Kessler, Trisha Fish, Elbit Systems of America (United States)
    On demand
    Machine Learning (ML) and Artificial Intelligence (AI) have led to an increase in automation potential within applications such as border protection, compound security, and surveillance. Recent academic advances in deep learning aided computer vision have yielded impressive results on object detection and recognition, capabilities necessary to increase automation in defense applications. These advances are often open-sourced, enabling the opportunistic integration of state-of-the-art (SOTA) algorithms into real systems. However, these academic achievements do not translate easily to engineered systems. Academic work often looks at a single capability with metrics such as accuracy or F1 score, without consideration of system-level performance, how these algorithms must integrate, or what level of computational performance is required. An engineered system is developed as a system of algorithms that must work in conjunction with each other under deployment constraints. This paper describes a system, called Rapid Algorithm Design & Deployment for Artificial Intelligence (RADD-AI™), developed to enable the rapid development of systems of algorithms incorporating these advances in a modular fashion using networked Application Programming Interfaces (APIs). The inherent modularity mitigates the assumption of monolithic integration within a single ecosystem that creates vendor lock-in. This monolith assumption does not account for the reality that frameworks are usually targeted toward different types of problems and toward learning vs. inference capabilities. RADD-AI makes no such assumption: if a different framework solves subsets of the system more elegantly, it can be integrated into the larger pipeline. RADD-AI enables the integration of state-of-the-art ML into deployed systems while also supporting the necessary ML engineering tasks, such as transfer learning, to operationalize academic achievements. We detail how this system is used to implement a defense application developed within RADD-AI, utilizing several SOTA models and traditional algorithms within multiple frameworks, bridging the gap from academic achievement to fielded system.
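    RADD-AI itself is proprietary, but the modular, API-chained style it advocates can be sketched generically; the endpoints and JSON fields below are purely hypothetical:

```python
import requests

# Hypothetical service endpoints; each stage may use a different framework.
DETECT_URL = "http://detector:8080/detect"
CLASSIFY_URL = "http://classifier:8081/classify"

def run_pipeline(image_bytes: bytes) -> list:
    """Chain two independent model services over networked APIs."""
    dets = requests.post(DETECT_URL, files={"image": image_bytes}).json()
    results = []
    for det in dets["detections"]:
        label = requests.post(CLASSIFY_URL,
                              json={"image_id": dets["image_id"],
                                    "box": det["box"]}).json()["label"]
        results.append({**det, "label": label})
    return results
```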
    11870-9
    Author(s): Cornelia Nita, Marijke Vandewal, Royal Military Academy (Belgium)
    On demand
    In view of the increase in illicit maritime activities like piracy, sea robbery, trafficking of narcotics, illegal immigration and illegal fishing, enhanced accuracy in surveillance is essential in order to ensure safer, cleaner and more secure maritime and inland waterways. Recently, deep learning technology has received considerable attention for integration into security systems and devices. Convolutional Neural Networks (CNNs) are commonly used in applications of object detection, segmentation and classification. In addition, they are used for text detection and recognition, mainly applied to automatic license plate recognition for highway monitoring, but rarely to maritime situational awareness. In the current study, we analyse the practical feasibility of applying an automatic text detection and recognition algorithm to ship images. We consider a two-stage procedure that localizes the text region and then decodes the prediction into a machine-readable format. In the first stage, the text region in the scene is localized with computer-vision based algorithms and the EAST model, whereas in the second stage the predicted region is decoded by the Tesseract Optical Character Recognition (OCR) engine. Our results demonstrate that the integration of such a feature into a vessel information system will most likely improve the overall situational awareness.
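    The second (decoding) stage can be sketched with Tesseract via pytesseract; the detection stage is omitted for brevity, so text_boxes is assumed to come from a detector such as EAST, and the preprocessing choices are illustrative:

```python
import cv2
import pytesseract

def read_ship_markings(image_bgr, text_boxes):
    """Decode already-localized text regions with Tesseract OCR.

    text_boxes: iterable of (x, y, w, h) regions from the first stage.
    """
    readings = []
    for x, y, w, h in text_boxes:
        roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Binarize (Otsu) to help OCR on weathered hull paint.
        roi = cv2.threshold(roi, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
        readings.append(
            pytesseract.image_to_string(roi, config="--psm 7").strip())
    return readings
```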
    Simulations and Datasets
    11870-10
    Author(s): Christopher J. Willis, BAE Systems (United Kingdom)
    On demand
    Deep learning has revolutionized the performance of many computer vision processes in recent years. In particular, deep convolutional neural networks have demonstrated ground-breaking performance in object classification from imagery. These deep learning techniques typically require sizable volumes of training imagery in order to derive the large number of parameters that characterize their solution. However, in many situations training data for the object types of interest are unavailable. One solution to this problem is to use an initial training volume of imagery of objects which have properties similar to those of the entities of interest and are available with a large number of labelled examples. These can be used for initial network training, and the resulting partially-learned solution is subsequently tuned using a smaller sample of the actual target objects. This type of approach, transfer learning, has shown considerable success in conventional imaging domains. Unfortunately, for Synthetic Aperture Radar imaging sensors, large volumes of labelled training samples of any type are hard to come by. The challenge is exacerbated when variations in imaging geometry and sensor configuration are taken into account. This paper examines the use of simulated SAR imagery in pre-training a deep neural network. The simulated imagery is generated using a straightforward process which can produce sufficient volumes of training exemplars in a modest amount of time. The samples so generated are used to train a deep neural network which is then retrained using a comparatively small volume of MSTAR SAR imagery. The value of such a pre-training process is assessed. The assessment highlights some interesting aspects of the MSTAR SAR image set.
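    The pre-train-then-retrain scheme can be sketched generically in PyTorch; the freezing policy, learning rate and epoch count are illustrative choices, not the paper's settings:

```python
import torch
from torch import nn, optim

def fine_tune(pretrained: nn.Module, real_loader, lr=1e-4, epochs=5):
    """Adapt a network pre-trained on simulated SAR to a small real set."""
    params = list(pretrained.parameters())
    for p in params[:-2]:        # freeze all but the final layer's params
        p.requires_grad = False
    opt = optim.Adam([p for p in params if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for chips, labels in real_loader:  # small volume of real imagery
            opt.zero_grad()
            loss_fn(pretrained(chips), labels).backward()
            opt.step()
    return pretrained
```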
    11870-11
    Author(s): Yann Giry-Fouquet, Thales DMS France SAS (France), Univ. de Technologie de Troyes (France); Alexandre Baussard, Univ. de Technologie Troyes (France); Cyrille Enderli, Tristan Porges, Thales DMS France SAS (France)
    On demand
    Deep learning has achieved excellent results in various applications of computer vision, especially in image classification, segmentation and object detection. However, due to the lack of labeled data, it is not always possible to fully exploit the potential of this approach for SAR (Synthetic Aperture Radar) image analysis. For example, in the specific case of target recognition, target samples are usually not available for all azimuth and incidence angles. Moreover, unlike in computer vision, common data augmentation cannot be applied because of the specific physical mechanisms arising in SAR imaging. To overcome these difficulties, we can use simulators based on physical models. Unfortunately, these models are either too simplified to generate realistic SAR images or require too much computation time (several weeks for a single target, for example). Moreover, even the most accurate model cannot include all physical phenomena, so fine-tuning or domain adaptation approaches must be considered. Another way, considered in this paper, consists of using Generative Adversarial Networks (GANs) to generate synthetic SAR images. To complete the missing azimuth angles in the dataset, we propose to use conditional GANs to generate data for these directions. However, training conditional GANs from a small dataset is a challenging problem, and we report some solutions to overcome it. Evaluations using different scenarios, including unbalanced datasets, show that this data augmentation approach, based on conditional GANs, improves the performance of classifiers. We also provide some comments and feedback by comparing several conditional GAN approaches.
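    The conditioning idea can be sketched with a toy generator whose input concatenates noise with an embedding of the (binned) azimuth angle; real SAR GANs are convolutional, and all sizes here are arbitrary assumptions:

```python
import torch
from torch import nn

class AzimuthConditionedGenerator(nn.Module):
    """Toy conditional generator: noise + azimuth-bin embedding -> chip."""
    def __init__(self, z_dim=64, n_bins=72, out_pixels=64 * 64):
        super().__init__()
        self.embed = nn.Embedding(n_bins, 16)   # azimuth bucketed into bins
        self.net = nn.Sequential(
            nn.Linear(z_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, out_pixels), nn.Tanh())

    def forward(self, z, azimuth_bin):
        return self.net(torch.cat([z, self.embed(azimuth_bin)], dim=1))

# Generate 8 synthetic chips at requested (e.g. missing) azimuth bins.
g = AzimuthConditionedGenerator()
chips = g(torch.randn(8, 64), torch.randint(0, 72, (8,)))
```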
    11870-12
    Author(s): Xu Zhu, Hiroki Mori, Toshiba Corp. (Japan)
    On demand
    Artificial intelligence (AI)-based methods for automatic target detection have become a research hotspot in the field of millimeter-wave security: using artificial intelligence to determine whether the results of millimeter-wave imaging include dangerous items and to communicate the results to security personnel. This not only avoids the leakage of private information, but also reduces the workload of security personnel and improves the efficiency of the security process. Existing deep learning networks require large training datasets to optimize the network parameters. However, there are few datasets in the field of millimeter-wave imaging. In addition, due to local legal restrictions, researchers often do not have access to a large number of dangerous goods samples for millimeter-wave imaging, which greatly limits the performance and applications of automatic classification in millimeter-wave security. In this paper, a method is proposed which uses style transfer techniques to combine a small number of millimeter-wave images with a large number of optical images to generate a library of millimeter-wave-like images. Specifically, the style transfer method combines the style features of a millimeter-wave image with the content features of an optical image to generate a new image. By combining different style images and content images, a large number of new images can be generated. The generated images are then used to train a deep network for classification. The performance of the proposed method is compared with a conventional data augmentation method. The comparison results show that the proposed method effectively improves the accuracy of automatic target classification in millimeter-wave security.
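    The style/content split described is, in the classic Gatys-style formulation, expressed through Gram matrices of CNN feature maps; a sketch of those losses (the feature extractor, layer choice and weighting are assumptions, not the paper's exact method):

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    """Gram matrix of feature maps: the standard style statistic."""
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(gen_feats, mmw_feats, optical_feats, style_w=1e4):
    """Pull style toward millimeter-wave images, content toward optical."""
    style = sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                for g, s in zip(gen_feats, mmw_feats))
    content = F.mse_loss(gen_feats[-1], optical_feats[-1])
    return content + style_w * style
```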
    11870-13
    Author(s): Hanna Hamrell, Jörgen Karlholm, FOI-Swedish Defence Research Agency (Sweden)
    On demand
    Training data is an essential ingredient of supervised learning, yet it is time consuming and expensive to collect and, for some applications, impossible to retrieve. A possible solution is to use synthetic training data. However, the domain shift of synthetic data makes it challenging to obtain good results when it is used as training data for deep learning models. It is therefore of interest to refine synthetic data, e.g. using image-to-image translation, to improve results. The aim of this work is to compare different methods for image-to-image translation of synthetic training data of thermal IR images using GANs. Translation is done both using synthetic thermal IR images alone and including pixelwise depth and/or semantic information. For evaluation, we propose a new measure based on the Fréchet Inception Distance, adapted to work for thermal IR images. We show that by adapting a GAN model to also include corresponding pixelwise depth data for each synthetic IR image, the performance is improved compared to using only IR images.
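    The underlying Fréchet distance is unchanged by the IR adaptation; what the paper adapts is the embedding network. A sketch of the standard computation over pre-extracted feature arrays:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two feature sets (N, D)."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):     # discard numerical imaginary parts
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2.0 * covmean))
```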
    11870-14
    Author(s): Jacob Rodriguez, Justin Mauger, Shibin Parameswaran, Riley Zeller-Townson, Galen Cauble, Naval Information Warfare Ctr. Pacific (United States)
    On demand
    Event cameras utilize novel imaging sensors patterned after visual pathways in the brain that are responsive to low contrast, transient events. Specifically, the pixels of dynamic vision sensors (DVS) react independently and asynchronously to changes in light intensity, creating a stream of time-stamped events encoding the pixels’ (x, y) location in the sensor array and the sign of the brightness change. In contrast with conventional cameras that sample every pixel at a fixed rate, DVS pixels produce output only when the change in intensity has surpassed a set threshold, which leads to reduced power consumption in scenes with relatively little motion. Furthermore, compared to conventional CMOS imaging pixels, DVS pixels have extremely high dynamic range, low latency, and low motion blur. Taken together, these characteristics make event cameras uniquely qualified for persistent surveillance. In particular, we have been investigating their use in port surveillance applications. Such an application of DVS presents the need for automated pattern recognition and object tracking algorithms which can process event data. Due to the fundamentally different nature of the output relative to conventional frame-based cameras, traditional methods of machine learning for computer vision cannot be directly applied. Anticipating this need, this work details data collection and collation efforts to facilitate development of object detection and tracking algorithms in this modality. We have assembled a maritime dataset capturing several moving objects including sail boats, motor boats, large ships, etc.; as well as incidentally captured objects. The data was collected with lenses of various focal lengths and aperture settings to provide data variability and avoid unwanted bias to specific sensor parameters. In addition, the captured data was recorded with the camera in both static and dynamic states. These different states can be used to mimic potential behavior and help understand how this movement can affect the algorithms being developed for automated ship detection and tracking. We will describe the data captured, effects of hardware settings and lenses, as well as how lighting conditions and sensor movement contributed to the quality of the event data recorded. Finally, we will detail future data collection efforts.
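    Since DVS output is an asynchronous event stream rather than frames, downstream algorithms often begin by accumulating events over a time window; a minimal sketch of that preprocessing (the structured-array field names are an assumed convention):

```python
import numpy as np

def events_to_frame(events, height, width, t0, t1):
    """Accumulate DVS events in [t0, t1) into a signed 2D count image.

    events: structured array with fields "t", "x", "y", "p" (polarity 0/1).
    """
    frame = np.zeros((height, width), dtype=np.int32)
    win = events[(events["t"] >= t0) & (events["t"] < t1)]
    signs = np.where(win["p"] > 0, 1, -1)
    np.add.at(frame, (win["y"], win["x"]), signs)  # handles repeated pixels
    return frame
```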
    Tracking and Localisation
    11870-15
    Author(s): Aleksander Zelenskii, Moscow State Univ. of Technology "STANKIN" (Russian Federation); Nikolay Gapon, Moscow State Univ. of Technology (Russian Federation); Viacheslav Voronin, Evgeny A. Semenishchev, Moscow State Univ. of Technology "STANKIN" (Russian Federation); I. Khadidullin, Moscow State Univ. of Technology "STANKIN" (Russian Federation), Don State Technical Univ. (Russian Federation); Yang Cen, Beijing Jiaotong University (China)
    On demand
    Modern mobile robots use techniques for planning an optimal path for their movement, relying on simultaneous localization and mapping (SLAM). A problem with all depth mapping methods is the presence of lost areas: poor lighting, mirrored object surfaces or fine-grained material surfaces make it impossible to measure depth information. As a result, objects appear to overlap, it becomes impossible to distinguish one object from another, or the boundaries of objects (obstacles) grow. This problem can be solved using image reconstruction techniques. This article presents an approach based on a modified algorithm for finding similar blocks using a neural network. The proposed algorithm also uses a sparse representation of quaternions, with a new gradient to compute the priority function by integrating the quaternion structure with a saliency map. Compared to current technologies, the proposed algorithm provides a plausible reconstruction of the depth map from multimodal images, making it a promising tool for navigation in robot applications. Analysis of the processing results shows that the proposed method correctly restores the boundaries of objects in the image and depth map, which is a prerequisite for improving the accuracy of the robot's navigation.
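    As a simple baseline for the reconstruction problem described (filling lost depth regions given a hole mask), classical inpainting can be sketched with OpenCV; the paper's learned block-matching method replaces this, but the interface is the same:

```python
import cv2
import numpy as np

def fill_depth_holes(depth_m, hole_mask):
    """Baseline: fill invalid depth pixels with Telea inpainting."""
    valid = depth_m[hole_mask == 0]
    lo, hi = float(valid.min()), float(valid.max())
    # OpenCV inpainting needs 8-bit input, so scale, fill, then rescale.
    d8 = np.uint8(255.0 * (np.clip(depth_m, lo, hi) - lo)
                  / max(hi - lo, 1e-6))
    filled = cv2.inpaint(d8, hole_mask.astype(np.uint8),
                         5, cv2.INPAINT_TELEA)
    return lo + (hi - lo) * filled.astype(float) / 255.0
```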
    11870-17
    Author(s): Wanlin Xie, Jaime Ide, Daniel Izadi, Sean Banger, Thayne Walker, Ryan Ceresani, Dylan Spagnuolo, Christopher Guagliano, Henry Diaz, Jason Twedt, Lockheed Martin Corp. (United States)
    On demand
    Multi-object tracking (MOT) is a crucial component of situational awareness in military defense applications. With the growing use of unmanned aerial systems (UASs), MOT methods for aerial surveillance are in high demand. Applying MOT to UAS imagery presents specific challenges, such as a moving sensor, changing zoom levels, dynamic backgrounds, illumination changes, obscurations and small objects. In this work, we present a robust object tracking architecture designed to accommodate the noise of real-time situations. Our work is based on the tracking-by-detection paradigm, where an independent object detector is first applied to isolate all potential detections and an object tracking model is then applied to link unique objects between frames. Object trajectories are constructed using a multiple hypothesis tracking (MHT) framework that produces the best hypothesis based on kinematic and visual scorings. We propose a kinematic prediction model, called Deep Extended Kalman Filter (DeepEKF), in which a sequence-to-sequence architecture is used to predict entity trajectories in latent space. DeepEKF utilizes a learned image embedding along with an attention mechanism trained to weight the importance of areas in an image to predict future states. For the visual scoring, we experiment with different similarity measures to calculate distance based on entity appearances, including a convolutional neural network (CNN) encoder pre-trained using Siamese networks. In initial evaluation experiments, we show that our method, combining the scoring structure of the kinematic and visual models within an MHT framework, has improved performance, especially in edge cases where entity motion is unpredictable or the data presents frames with significant gaps.
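    For context, the classical linear Kalman filter that DeepEKF-style models generalize can be sketched in a few lines; the constant-velocity model and noise levels are illustrative, not the paper's configuration:

```python
import numpy as np

class ConstantVelocityKF:
    """Classical linear Kalman filter baseline for kinematic prediction.

    State: [x, y, vx, vy]; only the 2D position is observed.
    """
    def __init__(self, dt=1.0, q=1.0, r=10.0):
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)
        self.Q, self.R = q * np.eye(4), r * np.eye(2)
        self.x, self.P = np.zeros(4), 100.0 * np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                     # predicted position

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```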
    Detection and Classification
    11870-18
    Author(s): Chelsea Mediavilla, Lena Nans, Diego Marez, Shibin Parameswaran, Naval Information Warfare Ctr. Pacific (United States)
    On demand
    Standard object detectors are trained on a wide array of commonplace objects and work out-of-the-box for numerous everyday applications. Training data for these detectors tends to contain objects of interest that appear prominently in the scene, making them easy to identify. Unfortunately, objects seen by camera sensors in real-world scenarios do not always appear large, in focus, or towards the center of an image. In the face of these problems, the performance of many detectors lags behind the thresholds necessary for their successful deployment in uncontrolled environments. Specialized applications require additional training data to be reliable in situ, especially when small objects are likely to appear in the scene. In this paper, we present an object detection dataset consisting of videos that depict helicopter exercises recorded in an unconstrained maritime environment. Special consideration was taken to emphasize small instances of helicopters relative to the field of view, so the dataset provides a more even ratio of small-, medium- and large-sized object appearances for training more robust detectors in this specific domain. We use the COCO evaluation metric to benchmark multiple detectors on our data as well as on the WOSDETC (Drone vs. Bird) dataset, and we compare a variety of augmentation techniques to improve detection accuracy and precision in this setting. These comparisons yield important lessons learned as we adapt standard object detectors to process data with non-iconic views from field-specific applications.
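    Benchmarking with the COCO metric follows a standard pycocotools recipe; the file names below are placeholders for a ground-truth annotation file and a detector's output:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations.json")            # ground-truth boxes
coco_dt = coco_gt.loadRes("detections.json")  # detector output

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()   # AP overall plus small/medium/large breakdowns
```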
    11870-19
    Author(s): Jannick Kuester, Wolfgang Gross, Wolfgang Middelmann, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
    On demand
    The rising availability of hyperspectral data has increased the attention given to anomaly detection for various applications. Anomaly detection aims to find the small number of pixels in the hyperspectral data whose spectral signatures differ significantly from the background. However, for anomalies like camouflage objects in a rural area, the spectral signatures differ only in small features. For this purpose, we use a 1D convolutional autoencoder (1D-CAE), which extracts the most characteristic features of the background spectra to reconstruct the spectral signature by minimizing the error of the loss function. The difference between the original and the reconstructed data can be exploited for anomaly detection. Since the loss function is minimized for the predominant background spectra, areas with anomalies exhibit higher error values. The proposed anomaly detection method's performance is tested on hyperspectral data in the range of 1000 to 2500 nm. The data was recorded with a drone-based Headwall sensor at approximately 80 m above a rural area near Greding, Germany. The anomalies consist mainly of camouflage materials and vehicles. We compare the performance of a 1D convolutional autoencoder trained on a data set without the target anomalies for different models. This is done to quantify the number of anomalies the data set can contain before they inhibit the detection process. Additionally, the detection results are compared to the state-of-the-art Reed-Xiaoli anomaly detector. We present the results by counting the correct detections in relation to the false positives with the receiver operating characteristic and discuss more suitable evaluation approaches for small targets. We show that the 1D-CAE outperforms the Reed-Xiaoli anomaly detector at a false alarm rate of 0.1% by reconstructing the background with a low error and the anomalies with a higher error. The 1D-CAE is thus suitable for camouflage anomaly detection.
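    A minimal 1D convolutional autoencoder of the kind described, with per-pixel reconstruction error as the anomaly score; the layer sizes and band count are assumptions, not the paper's architecture:

```python
import torch
from torch import nn

class SpectralCAE(nn.Module):
    """1D convolutional autoencoder over a pixel's spectral signature."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(16, 8, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 5, stride=2, padding=2,
                               output_padding=1))

    def forward(self, x):            # x: (N, 1, n_bands)
        return self.dec(self.enc(x))

def anomaly_scores(model, spectra):
    """Reconstruction error per pixel; background reconstructs well."""
    with torch.no_grad():
        return ((model(spectra) - spectra) ** 2).mean(dim=(1, 2))
```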
    11870-20
    Author(s): Deena Francis, Technical Univ. of Denmark (Denmark); Milan Laustsen, CrimTrack ApS (Denmark); Hamid Babamoradi, Technical Univ. of Denmark (Denmark); Jesper Mogensen, Danish Emergency Management Agency (Denmark); Eleftheria Dossi, Cranfield Univ. (United Kingdom); Mogens Jakobsen, Tommy Alstrøm, Technical Univ. of Denmark (Denmark)
    On demand
    Colorimetric sensors are widely used as pH indicators, medical diagnostic devices and detection devices. A colorimetric sensor captures the color changes of a chromic chemical (dye), or an array of chromic chemicals, when exposed to a target substance (analyte). Sensing is typically carried out using the difference in dye color before and after exposure. This approach neglects the kinetic response, that is, the temporal evolution of the dye, which potentially contains additional information. We investigate the importance of the kinetic response by collecting a sequence of images over time. We applied end-to-end learning using three different convolutional neural networks (CNNs) and a recurrent network, and compared the performance to logistic regression, k-nearest-neighbor and random forest, where these methods only use the color difference from start to end as a feature vector. We found that the CNNs were able to extract features from the kinetic response profiles that significantly improve the accuracy of the sensor. Thus, we conclude that the kinetic responses indeed improve the accuracy, which paves the way for new and better chemical sensors based on colorimetric responses.
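    The baseline the networks are compared against uses only the start-to-end color difference as a feature vector; a sketch of that featurization (the array layout is an assumed convention):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def endpoint_features(image_sequence):
    """Baseline feature: per-dye color difference, end minus start.

    image_sequence: (T, n_dyes, 3) mean RGB of each dye spot over time.
    The CNNs in the paper instead see the whole kinetic profile.
    """
    seq = np.asarray(image_sequence, dtype=float)
    return (seq[-1] - seq[0]).reshape(-1)      # (n_dyes * 3,) vector

# With X stacking one feature vector per exposure and y the analyte labels:
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```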
    11870-22
    Author(s): Richard J. M. den Hollander, Sabina B. van Rooij, Sebastiaan P. van den Broek, Judith Dijk, TNO (Netherlands)
    On demand
    An important surveillance task during naval military operations is early risk assessment of vessels. The potential risk that a vessel poses depends on the vessel type, and vessel classification is therefore a basic technique in risk assessment. Although automatic identification by AIS is widely available, AIS transponders can be spoofed or disabled to prevent identification. A possible complementary approach is automatic classification based on camera imagery. The dominant approach for visual object classification is the use of deep neural networks (DNNs), which have been shown to give unparalleled performance when sufficiently large annotated training data sets are available. However, within the scenario of naval operations there are several challenges that need to be addressed. First, the number and types of classes should be defined in such a way that they are relevant for risk assessment while allowing sufficiently large training sets per class type. Second, early risk assessment in real-life conditions is vital, and vessel type classification should work on long-range target imagery that has low resolution and is potentially degraded. In this paper, we investigate the performance of DNNs for vessel classification under the aforementioned challenges. We evaluate different class groupings for the MARVEL vessel data set, both from an accuracy perspective and for their relevance to risk assessment. Furthermore, we investigate the impact of real-life conditions on classification by manually downsizing and reducing the contrast of the MARVEL imagery, as well as by evaluating on EO/IR recordings from the Rotterdam harbor collected over several weeks under varying weather conditions.
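    The manual degradation used to mimic long-range imagery can be sketched simply; the scale and contrast factors below are arbitrary examples, not the paper's settings:

```python
import cv2
import numpy as np

def degrade(image, scale=0.25, contrast=0.5):
    """Simulate long-range capture: downsize, upsize, compress contrast."""
    small = cv2.resize(image, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    back = cv2.resize(small, image.shape[1::-1],
                      interpolation=cv2.INTER_LINEAR)
    mean = back.mean()
    return np.clip(mean + contrast * (back - mean), 0, 255).astype(np.uint8)
```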
    Poster Session
    11870-24
    Author(s): Greig Richmond, Arlene Cole-Rhodes, Morgan State Univ. (United States)
    On demand
    In this work, we describe the compression of an image restoration neural network using principal component analysis (PCA). We compress the SRN-Deblur network developed by Tao et al. [1] and evaluate the deblurring performance at various levels of compression, quantitatively and qualitatively. A baseline network is obtained by training the network on the GOPRO training dataset [9]. The performance of the compressed network is then evaluated when deblurring images from the Köhler [8], Kernel Fusion [13] and GOPRO datasets, as well as from a customized evaluation dataset. We note that after a short retraining step, the compressed network behaves as expected, i.e., deblurring performance slowly decreases as the level of compression increases. We show that the SRN-Deblur network can be compressed by up to 40% without significant reduction in deblurring capabilities and without significant reduction of quality in the recovered image.
    Keywords: Blind Deconvolution, Neural Network Compression, Principal Component Analysis
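    The core idea, keeping only leading principal components of a layer's weights, can be sketched with a truncated SVD (the linear-algebra core of PCA); the layer shape and retention fraction are arbitrary, and this is not the paper's exact procedure:

```python
import numpy as np

def pca_compress_layer(W, keep=0.6):
    """Replace an (out, in) weight matrix by a rank-k factorization.

    keep: fraction of components retained; W ~ A @ B with A: (out, k)
    and B: (k, in), cutting parameters when k is small.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(keep * len(s)))
    return U[:, :k] * s[:k], Vt[:k]

W = np.random.randn(512, 1024)
A, B = pca_compress_layer(W, keep=0.4)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error
```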
    11870-25
    Author(s): İsmail Gül, Istanbul Technical Univ. (Turkey), ASELSAN A.S. (Turkey); Isin Erer, Istanbul Technical Univ. (Turkey)
    On demand
    Electronic warfare (EW) can be divided into three major areas: electronic attack, electronic protection and electronic support. The main purpose of an electronic support (ES) system is to intercept radar signals. In modern ESM systems, mainly two types of receivers are used: wide bandwidth receivers, such as digital instantaneous frequency measurement (IFM) receivers, and narrow bandwidth receivers, such as superheterodyne receivers (SHR). Narrow bandwidth ES receivers, even though they have higher sensitivity than wideband receivers, cannot sense the whole frequency spectrum simultaneously, so the spectrum has to be scanned over time by re-tuning the central frequency to different bands. This sensor scheduling optimization problem can be investigated with deterministic or stochastic approaches. In this study, we propose a new method of learning a frequency scanning strategy via robust principal component analysis (RPCA). With this method, it is possible to determine a search strategy even if the parameters of the radar signals have not been gathered. The interception of radar signals by a narrow-band ES receiver is modeled with transformed predictive state representations (TPSR), and subspace identification of the system is done via RPCA. Contrary to low-rank decomposition methods, by using RPCA, situations that are less likely to occur are not excluded from the system definition. Thus, states that rarely occur can be detected at a higher rate instead of being considered noise.
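    A simplified alternating-thresholding sketch of the RPCA decomposition M ≈ L + S, where L captures the low-rank regular behaviour and S the sparse, rarely occurring states; this is not a production principal component pursuit solver, and the parameter defaults are common heuristics rather than the paper's choices:

```python
import numpy as np

def rpca_sketch(M, lam=None, mu=None, iters=200):
    """Split M into low-rank L plus sparse S by alternating thresholding."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(M).mean()
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # Singular-value thresholding for the low-rank part.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt
        # Soft-thresholding for the sparse part (rare events kept, not lost).
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)
    return L, S
```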
    Conference Chair
    TNO Defence, Security and Safety (Netherlands)
    Program Committee
    Defence Science and Technology Lab. (United Kingdom)
    Program Committee
    Fabrizio Berizzi
    European Defence Agency (Belgium)
    Program Committee
    FOI-Swedish Defence Research Agency (Sweden)
    Program Committee
    Michel Honlet
    HENSOLDT Sensors GmbH (Germany)
    Program Committee
    ONERA (France)
    Program Committee
    i4-Flame OÜ (LLC) (Estonia)
    Program Committee
    BAE Systems (United Kingdom)