Anaheim Convention Center
Anaheim, California, United States
9 - 13 April 2017

Sensing, imaging, and photonics technologies for UAS applications

SPIE Defense + Commercial Sensing 2017 UAS

See the latest research in technologies such as LiDAR, infrared imaging, and multispectral and hyperspectral imaging that can be used to enhance air, ground, and underwater UAS.



Attend free industry sessions

Focused on UAS applications, these sessions take place in the exhibition hall. The full list of industry sessions is available on the event website.

LiDAR for Autonomous Vehicles: The Future of 3D Sensing and Perception
Miniaturized and Mobile Spectroscopy and Optical Sensor Applications

Come to the free expo

Review the 2017 companies at the event offering UAS-related solutions; each company has an online exhibition listing.

 • American Infrared Solutions (AIRS)
 • FLIR Systems, Inc.
 • Headwall Photonics, Inc.
 • Imperx
 • Kappa optronics, Inc.
 • Ocean Optics, Inc.
 • Opto-Knowledge Systems, Inc.
 • Princeton Lightwave Inc.
 • Reynard Corp
 • Sierra-Olympic Technologies, Inc.
 • StingRay Optics
 • Telops
 • TrackGen Solutions Inc.

Technical presentations

SPIE Defense + Commercial Sensing features 1,700 technical papers. Below are conferences with content relevant to unmanned autonomous systems, followed by 90+ individual papers that may be of particular interest.

2017 Defense + Security
Unmanned Systems Technology
Infrared Technology and Applications
Infrared Imaging Systems:  Design, Analysis, Modeling and Testing
Detection and Sensing of Mines, Explosive Objects, and Obscured Targets
Chemical Biological Radiological Nuclear and Explosives (CBRNE)
Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR
Laser Radar Technology and Applications
Laser Technology for Defense and Security
Sensors and Systems for Space Applications
Degraded Environments: Sensing, Processing, and Display 2017
Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery
Geospatial Informatics, Motion Imagery, and Network Analytics
Long-range Imaging
Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation
Next Generation Analyst
Computational Intelligence for Cyber Intelligence, Surveillance and Reconnaissance
2017 Commercial + Scientific Sensing and Imaging
Hyperspectral Imaging Sensors:  Innovative Applications and Sensor Standards
Thermosense:  Thermal Infrared Applications
Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping

Browse the 90+ papers below, listed by conference and paper number.

Infrared hyperspectral imaging miniaturized for UAV applications
Paper 10177-15

Author(s):  Michele Hinnrichs, Pacific Advanced Technology, Inc. (United States), et al.
Conference 10177: Infrared Technology and Applications XLIII
Session 3: IR in Air and Space

Using a micro-optics approach to infrared hyperspectral imaging, we have developed a camera small enough to serve as a payload on miniature unmanned aerial vehicles. The technology has been developed into both MWIR and LWIR hyperspectral cameras, with the optical system integrated into the sensor's cold shield.


Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices
Paper 10178-2

Author(s):  McKenna R. Lovejoy, Univ. of Colorado at Colorado Springs (United States), et al.
Conference 10178: Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXVIII
Session 1: Testing

This study highlights results from testing higher-order polynomial NUC methods targeted at SWIR imagers. Using data collected from SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, e.g., one- and two-point NUC algorithms. Machine learning is investigated for dealing with bad pixels. The data are analyzed and the impact of hardware implementation is discussed. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
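As a rough illustration of the idea behind polynomial NUC, the sketch below fits an independent per-pixel polynomial from flat-field calibration frames and applies it to linearize raw counts. This is a generic textbook-style sketch, not the authors' algorithm; the detector model and all numbers are invented for the example.

```python
import numpy as np

def fit_nuc_polynomials(raw_stack, flux_levels, order=3):
    """Fit per-pixel polynomial NUC coefficients from calibration data.

    raw_stack   : (N, H, W) raw frames viewing uniform sources
    flux_levels : (N,) known radiometric level of each source
    Returns (order+1, H, W) coefficients mapping raw counts -> flux.
    Order 1 reduces to the classic two-point (gain/offset) NUC.
    """
    n, h, w = raw_stack.shape
    x = raw_stack.reshape(n, -1)
    coeffs = np.empty((order + 1, h * w))
    for p in range(h * w):                      # independent fit per pixel
        coeffs[:, p] = np.polynomial.polynomial.polyfit(x[:, p], flux_levels, order)
    return coeffs.reshape(order + 1, h, w)

def apply_nuc(raw, coeffs):
    """Evaluate the per-pixel polynomial at each raw count."""
    out = np.zeros(raw.shape)
    for k in range(coeffs.shape[0]):
        out += coeffs[k] * raw**k
    return out

# Synthetic check: a 2x2 "sensor" with mildly nonlinear per-pixel
# response raw = gain*flux + quad*flux**2 (invented parameters).
rng = np.random.default_rng(0)
gain = rng.uniform(0.9, 1.1, (2, 2))
quad = rng.uniform(1e-4, 5e-4, (2, 2))
flux_levels = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
raw_stack = np.stack([gain * f + quad * f**2 for f in flux_levels])
coeffs = fit_nuc_polynomials(raw_stack, flux_levels)
recovered = apply_nuc(gain * 50.0 + quad * 50.0**2, coeffs)
```

After calibration, every pixel should map a uniform 50-unit scene back to (approximately) 50, despite the per-pixel gain and nonlinearity.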


In-flight optical performance measurement of high-resolution airborne imagery
Paper 10178-5

Author(s):  Richard Gueler, L-3 Sonoma EO (United States), et al.
Conference 10178: Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXVIII
Session 1: Testing

In-flight measurement of the Modulation Transfer Function (MTF) is essential to understanding the optical performance of high-resolution airborne imaging systems and verifying that they meet resolution requirements. The use of slant-edge targets to measure the MTF of the entire in-flight imaging chain is investigated, including atmospheric effects and aircraft jitter. In addition, the relative edge response (RER) can be extracted from slant-edge targets to calculate NIIRS. These results are compared to theoretical calculations and laboratory measurements to validate optical performance models and identify areas of concern. By understanding the effects of the imaging chain on MTF, opportunities for improvement can be identified.
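The edge-target principle reduces, in one dimension, to differentiating the edge spread function and Fourier-transforming the result. A minimal sketch, not the paper's processing chain; the Gaussian blur and sampling grid are assumptions made for the example:

```python
import numpy as np
from math import erf

def mtf_from_edge(esf, dx=1.0):
    """Estimate MTF from a 1-D edge spread function (ESF).

    Differentiates the ESF to get the line spread function (LSF),
    then takes the normalized magnitude of its Fourier transform.
    Returns (spatial_frequencies, mtf).
    """
    lsf = np.diff(esf)                    # LSF = d(ESF)/dx
    lsf = lsf / lsf.sum()                 # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    return freqs, mtf

# Synthetic edge blurred by a Gaussian of known sigma: its analytic
# MTF is exp(-2 * pi**2 * sigma**2 * f**2), which the estimate should
# approximately reproduce.
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * np.sqrt(2.0)))) for xi in x])
freqs, mtf = mtf_from_edge(esf)
```

A real slant-edge pipeline adds sub-pixel edge registration and oversampling across rows, which is what makes the airborne measurement practical.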


Linear variable narrow bandpass optical filters in the far infrared
Paper 10181-21

Author(s):  Thomas D. Rahmlow, Omega Optical, Inc. (United States), et al.
Conference 10181: Advanced Optics for Defense Applications: UV through LWIR II
Session 5: Coatings and Filters

The bandpass wavelength of a linear variable filter (LVF) changes along one axis while remaining constant along the orthogonal axis. These filters have a number of applications, including order-sorting filters for spectrometers, replacement of the spectrometer itself, and hyperspectral imaging using a push-broom imaging technique. Reduction of weight and size drives the physical dimensions of the filter down to those of the imager's focal plane. Fabrication results for LVF bandpass and long-pass filters in the far-IR are presented. Filters with a wavelength variance of 0.5 to 0.9 microns per millimeter are presented, with an orthogonal variance of less than 1% across the filter width of 12 to 24 millimeters. The active area of the filters is in the range of 8 to 15 mm by 12 to 24 mm. Off-band rejection exceeds OD 3. A trade-off analysis of spot size versus wavelength slope is presented along with other design considerations.
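The defining property, a pass wavelength that varies linearly with position, can be sketched in a few lines. The slope matches the 0.5-0.9 µm/mm range quoted in the abstract, while the start wavelength and pixel pitch are hypothetical values for illustration:

```python
def lvf_center_wavelength(x_mm, lambda0_um=8.0, slope_um_per_mm=0.7):
    """Center wavelength of a linear variable filter at position x.

    x_mm            : position along the filter's dispersion axis (mm)
    lambda0_um      : pass wavelength at x = 0 (hypothetical value)
    slope_um_per_mm : wavelength gradient; the abstract quotes
                      0.5-0.9 um/mm for the far-IR filters
    """
    return lambda0_um + slope_um_per_mm * x_mm

# In a push-broom imager, each detector column sees one wavelength:
pixel_pitch_mm = 0.015        # hypothetical 15 um pixel pitch
bands = [lvf_center_wavelength(col * pixel_pitch_mm) for col in range(1024)]
```

Mounting the filter directly over the focal plane is what turns a 2-D array into a one-shot spectral line imager.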


Progress on high-performance rapid prototype aluminum mirrors
Paper 10181-26

Author(s):  Kenneth S. Woodard, Corning Incorporated (United States), et al.
Conference 10181: Advanced Optics for Defense Applications: UV through LWIR II
Session 6: Materials and Manufacturing

Near net shape mirror blanks can be produced using some very old processes (investment casting) as well as the relatively new direct metal laser sintering (DMLS) process. These processes have significant advantages for complex lightweighting and cost, but are not inherently suited to producing high-performance mirrors. The DMLS process can provide extremely complex lightweight structures, but the high residual stresses left in the material result in unstable mirror figure retention. Although not reaching the extreme intricacy of DMLS, investment casting can also provide complex lightweight structures at considerably lower cost than DMLS and even conventional wrought mirror blanks, but the less-than-100% density of casting (and also of DMLS) limits finishing quality.


Two node vector acoustics applications for off-board passive identification and localization of individual Red Hind Grouper
Paper 10182-23

Author(s):  Cameron Matthews, Naval Surface Warfare Ctr. Panama City Div. (United States), et al.
Conference 10182: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXII
Session 5: Sonar and Side-scan Technologies II

Littoral regions typically offer prime breeding habitats for fish. Some fish in these regions, in particular Epinephelus guttatus, more commonly the red hind grouper, emit relatively narrowband low-frequency tones to communicate with conspecifics in agonistic and courtship situations. The ability to track such fish for biomass measurement and conservation, of particular interest to regulatory agencies that set catch limits, is considered from the perspective of implementing individual point Acoustic Vector Sensors (AVS) for detection, bearing, and elevation estimates of individual vocalizing red hind grouper. A special two-node case is presented and studied, allowing derivation of range, track, and speed estimates for shoaling or individual fish.
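A single AVS yields a bearing from the ratio of its particle-velocity components; adding a second node lets the two bearings be intersected for range. A geometric sketch of that two-node idea, not the paper's estimator; the sensor positions and source location below are invented:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Locate a source from bearings measured at two vector sensors.

    p1, p2             : (x, y) sensor positions
    bearing1, bearing2 : bearings in radians from the +x axis (an AVS
                         yields these from its particle-velocity
                         components, e.g. atan2(vy, vx))
    Solves p1 + t1*d1 = p2 + t2*d2 for the ray intersection.
    """
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det     # Cramer's rule
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Invented geometry: fish at (30, 40), nodes at (0, 0) and (100, 0).
src = triangulate((0.0, 0.0), math.atan2(40.0, 30.0),
                  (100.0, 0.0), math.atan2(40.0, -70.0))
```

Tracking a moving fish then amounts to repeating this fix over successive vocalizations, from which speed and track fall out.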


Leveraging ROC adjustments for optimizing UUV risk-based search planning
Paper 10182-24

Author(s):  John G. Baylog, Naval Undersea Warfare Ctr. (United States), et al.
Conference 10182: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXII
Session 5: Sonar and Side-scan Technologies II

A Bayesian risk objective function is developed for search planning, using a receiver operating characteristic (ROC) to determine detection probabilities and validation criteria. When ROC operating points are fixed, the objective function exhibits supermodularity only when the criterion does not change, which compromises the scheduling process. In this paper we consider an expanded optimization process whereby ROC operating points are determined simultaneously for all criteria and pass counts. An analysis of this new permuted optimization process is presented. Details of its application within a broader search-planning context are discussed, and numerical results are provided to demonstrate effectiveness.
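The interplay between ROC operating points and Bayes risk can be illustrated with a toy model: given a prior and miss/false-alarm costs, sweep the ROC curve for the risk-minimizing operating point. The ROC shape and costs below are hypothetical, and this single-pass, fixed-criterion sweep is only the baseline that the paper's joint optimization generalizes:

```python
import numpy as np

def optimal_operating_point(pfa_grid, pd_of_pfa, prior, c_miss=10.0, c_fa=1.0):
    """Choose a ROC operating point that minimizes Bayes risk.

    pfa_grid  : candidate false-alarm probabilities on the ROC curve
    pd_of_pfa : callable giving detection probability at each Pfa
    prior     : probability that a target is present in the cell
    Risk = c_miss * prior * (1 - Pd) + c_fa * (1 - prior) * Pfa.
    """
    pd = pd_of_pfa(pfa_grid)
    risk = c_miss * prior * (1.0 - pd) + c_fa * (1.0 - prior) * pfa_grid
    i = int(np.argmin(risk))
    return float(pfa_grid[i]), float(pd[i]), float(risk[i])

# Hypothetical concave power-law ROC, Pd = Pfa**0.2:
pfa = np.linspace(1e-4, 1.0, 10000)
best_pfa, best_pd, best_risk = optimal_operating_point(
    pfa, lambda f: f ** 0.2, prior=0.3)
```

For this ROC and these costs, calculus gives the optimum at Pfa = (6/7)**1.25 ≈ 0.825, which the grid search reproduces.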


Advanced wireless mobile collaborative sensing network for tactical and strategic missions
Paper 10184-26

Author(s):  Hao Xu, Univ. of Nevada, Reno (United States), et al.
Conference 10184: Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security, Defense, and Law Enforcement Applications XVI
Session 6: Surveillance, Nav Systems, and Technologies II

In this paper, an advanced wireless mobile collaborative sensing network is developed. By combining wireless sensor networks, emerging mobile robots, and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots, including UAVs and UGVs, are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, multiple mobile robots can form the formation that provides the most accurate sensing results. The formation of multiple robots can also serve as a multiple-input multiple-output (MIMO) communication system, providing a reliable, high-performance communication network.


Advanced Doppler radar physiological sensing technique for drone detection
Paper 10188-28

Author(s):  Ji Hwan Yoon, Univ. of Nevada, Reno (United States), et al.
Conference 10188: Radar Sensor Technology XXI
Session 6: MicroDoppler

A 24 GHz medium-range human-detecting sensor, which can also detect UAVs, is currently under development for potential rescue and anti-drone applications using the Doppler Radar Physiological Sensing (DRPS) technique. DRPS systems are specifically designed to remotely monitor small movements (greater than a few hundred micrometers) of non-metallic human tissue caused by cardiopulmonary activity and respiration. Once optimized, the unique capabilities of DRPS could be applied to UAV detection. Initial measurements showed that DRPS technology is suitable for detecting moving and stationary humans as well as a largely non-metallic quadcopter. Further data processing will incorporate pattern recognition to detect multiple signatures (motor movement and hovering pattern) of UAVs.


A novel remote RF sensor for search and rescue and through-wall 3D imaging
Paper 10188-36

Author(s):  Hossein Ghaffari Nik, George Mason Univ. (United States), et al.
Conference 10188: Radar Sensor Technology XXI
Session 7: Programs and Systems I

RF reflections can be leveraged to detect movement behind walls and through rubble. A potential application of this technology is the detection of buried or concealed humans and their vital signs in search and rescue missions. We propose a novel rotary single transmitter-receiver pair radar sensor for mobile robots that uses multiple antenna polarities with spatial antenna displacement due to rotation. Our system achieves multi-target detection accuracy and 3D localization similar to those of multi-antenna arrays, at a smaller footprint and without complex and expensive circuitry.


"Does this interface make my sensor look bad?" Basic principles for designing usable, useful interfaces for sensor technology operators
Paper 10190-22

Author(s):  Laura A. McNamara, Sandia National Labs. (United States), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 5: Human-machine Interface and Machine Learning Approaches II

Even as remote sensing technology has advanced in leaps and bounds over the past decade, the remote sensing community lacks interfaces and interaction models that facilitate effective human operation of our sensor platforms. Interfaces that make great sense to electrical engineers and flight test crews can be anxiety-inducing to operational users who lack professional experience in the design and testing of sophisticated remote sensing platforms. This paper reflects on several years’ worth of design and evaluation projects to identify and describe major issues that frustrate sensor operators and to explain their impact on the efficiency and effectiveness of sensor tasking, collections, exploitation and production in high-consequence workflows. Drawing on basic principles from cognitive and perceptual psychology and interaction design, we provide simple, easily learned guidance for minimizing common barriers to system learnability, memorability, and user engagement.


Novel machine learning methods to enhance the detection and classification of objects in multi-spectral multi-resolution synthetic aperture radar images
Paper 10190-26

Author(s):  Flavio Bergamaschi, IBM United Kingdom Ltd. (United Kingdom), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 6: Detection, Tracking and Localization for Persistent Surveillance

Maritime security and surveillance are key to ensuring that maritime activities, such as logistics and fishing, comply with relevant legislation. Monitoring the movement and localization of vessels is a great challenge, not only because it is difficult when vessels are far from land, but also because tracking systems such as the Automatic Identification System (AIS) can be hacked into distributing false information or switched off entirely. Illegal activities in the maritime sector, and the illegal activities they support, cost the global economy several billion dollars each year. Law enforcement agencies therefore need data that have no geographic or weather restrictions, offer resolution high enough to detect ships of various sizes, and, importantly, are easily accessible soon after acquisition. Synthetic Aperture Radar (SAR) imagery is ideal for this purpose: it can be acquired at any time of day, is unaffected by cloud cover (unlike optical imagery), and is available in a range of resolutions. For this reason, satellite SAR imagery has in recent years become an indispensable tool in applications that require the detection and tracking of marine vessels. This paper presents novel machine learning methods to enhance the detection and classification of objects in multi-spectral, multi-resolution SAR images from the European Space Agency's (ESA) Sentinel-1 satellites, focusing on vessels on the surface of the sea. Traditional target detection in SAR image analysis mostly relies on Constant False Alarm Rate (CFAR) detection, which does not always provide the necessary accuracy and is cumbersome to tune across the variety of locations on Earth.
We present a novel technique that extracts feature sets from the SAR data before it is heavily processed into an image, taking advantage of cross-polarization information and multi-resolution data sets, combined with a novel machine learning method, to greatly increase the capability to detect and classify vessels on the surface of the sea.
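For context, the cell-averaging CFAR baseline that the paper improves upon can be sketched in a few lines for the 1-D, square-law case. Training/guard sizes and the synthetic clutter below are illustrative, not taken from the paper:

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR detector over a 1-D power signal.

    For each cell, the noise level is estimated from num_train
    training cells on each side (excluding num_guard guard cells),
    and the threshold is scaled to achieve the requested false-alarm
    rate under an exponential (square-law) noise model.
    Returns indices of detected cells.
    """
    n = 2 * num_train                          # total training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)      # CA-CFAR scale factor
    half = num_train + num_guard
    detections = []
    for i in range(half, len(power) - half):
        lead = power[i - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + half + 1]
        noise = (lead.sum() + lag.sum()) / n
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

# Synthetic test: exponential clutter with one strong return at bin 100.
rng = np.random.default_rng(1)
clutter = rng.exponential(scale=1.0, size=200)
clutter[100] += 100.0
hits = ca_cfar(clutter)
```

The per-location tuning burden the abstract mentions comes from the fact that the exponential clutter model, and hence the alpha factor, rarely matches real sea clutter everywhere.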


Evaluating the integration of operations tasks while optimizing ISR activities
Paper 10190-34

Author(s):  Moises Sudit, Univ. at Buffalo (United States), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 8: Optimization of Utility of ISR Assets

Current decision-making processes separate Intelligence tasks from Operations tasks. This creates a system that is reactive rather than proactive, leaving unrealized gains in the timeliness and quality of responses to situations of interest. In this paper we present a new optimization paradigm that combines the tasking of Intelligence, Surveillance, and Reconnaissance (ISR) assets with the tasks and needs of Operational assets. Some collection assets are dedicated to one function or the other, while a third category that can perform both is also considered. We use a scenario to demonstrate the value of the merger by presenting the impact on a number of Intelligence and Operations measures of performance and effectiveness (MOPs/MOEs). Using this framework, mission readiness and execution assessment for a simulated Humanitarian Assistance/Disaster Relief (HADR) mission is monitored for intelligence gathering, distribution of supplies, and repair of vital transportation lanes during the relief effort. The innovative approach uses a combination of discrete optimization methods to obtain effective heuristic solutions to an NP-hard problem. Furthermore, the method adapts to dynamic objective functions, allowing for changes in the environment or the goals of a mission.


Distributed subterranean exploration and mapping with teams of UAVs
Paper 10190-42

Author(s):  Ryan Sherrill, Air Force Research Lab. (United States), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 10: ISR Airborne Imaging/Sensing

Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.


Robust drone detection for day/night Counter-UAV with static VIS and SWIR cameras
Paper 10190-43

Author(s):  Thomas Müller, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 10: ISR Airborne Imaging/Sensing

Recent progress in unmanned aerial vehicles (UAVs) has led to more and more situations in which Counter-UAV systems are required for early detection of approaching, potentially threatening, or misused drones. In this paper, an efficient and robust algorithm is presented for UAV detection using static VIS or SWIR cameras. Whereas VIS cameras enable UAV detection at longer distances in daylight, surveillance at night is performed in SWIR. First, background estimation with a structurally adaptive change detection process finds movements and other changes. The local density of these changes is then computed and used both for background density learning and to build the foreground model; comparing the two yields the final alarm result. The density model filters out noise effects and learns areas with moving scene content, such as leaves in the wind or cars on a street; this learning happens automatically, simply by processing footage without UAVs. The given results document the performance of the presented approach in VIS and SWIR in different situations.
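A minimal sketch of static-camera change detection via a running background model, in the spirit of (but much simpler than) the adaptive scheme described in the abstract; all parameters and the synthetic scene are invented:

```python
import numpy as np

def detect_changes(frames, alpha=0.05, k=4.0):
    """Simple static-camera change detection via a running background.

    Maintains an exponential moving average background and a per-pixel
    variance estimate, flagging pixels more than k standard deviations
    from the background. Yields a boolean mask per frame after the first.
    """
    frames = iter(frames)
    bg = next(frames).astype(float)
    var = np.full(bg.shape, 4.0)            # initial variance guess
    for f in frames:
        f = f.astype(float)
        diff = f - bg
        mask = diff**2 > (k**2) * var       # flag before updating
        bg = bg + alpha * diff              # adapt the background
        var = var + alpha * (diff**2 - var)
        yield mask

# Synthetic test: a static noisy scene, then a small bright intruder.
rng = np.random.default_rng(0)
scene = 100.0 + rng.normal(0.0, 1.0, (32, 32))
frames = [scene + rng.normal(0.0, 1.0, scene.shape) for _ in range(20)]
frames[-1][10:12, 10:12] += 50.0            # "drone" enters last frame
masks = list(detect_changes(frames))
```

The variance update plays the role of the density learning in the abstract: pixels that fluctuate persistently (wind-blown leaves, traffic) grow a large variance and stop raising alarms.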


Temperature estimation using thermal infrared imagery from a UAV for improved human detection in Africa
Paper 10190-44

Author(s):  Elizabeth Bondi, The Univ. of Southern California (United States), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 10: ISR Airborne Imaging/Sensing


Tree detection in urban region from aerial imagery and DSM based on local maxima points
Paper 10190-45

Author(s):  Ozgur Korkmaz, Middle East Technical Univ. (Turkey), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 10: ISR Airborne Imaging/Sensing

In this study, we propose an automatic approach for tree detection and classification in registered 3-band aerial images and associated digital surface models (DSM). The tree detection results can be used in 3D city modelling and urban planning. Detection becomes harder when trees are in close proximity to each other or to other objects in the scene, such as rooftops. This study presents a method for locating individual trees and estimating crown size based on local maxima from the DSM, accompanied by color and texture information.
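The core local-maxima idea can be sketched directly on a DSM grid: a cell is a treetop candidate if it dominates its neighborhood and exceeds a height floor. A toy sketch, not the paper's method; the window size, height threshold, and synthetic crowns are assumptions:

```python
import numpy as np

def find_treetops(dsm, min_height=3.0, window=3):
    """Find candidate treetops as local maxima of a DSM.

    dsm        : 2-D array of surface heights above ground
    min_height : ignore maxima below this height (not trees)
    window     : odd neighborhood size for the maximum test
    Returns a list of (row, col) local-maxima locations.
    """
    r = window // 2
    rows, cols = dsm.shape
    peaks = []
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            patch = dsm[i - r : i + r + 1, j - r : j + r + 1]
            if dsm[i, j] >= min_height and dsm[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

# Synthetic DSM: flat ground with two Gaussian tree crowns.
yy, xx = np.mgrid[0:40, 0:40]
dsm = (8.0 * np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 6.0)
       + 6.0 * np.exp(-((yy - 30) ** 2 + (xx - 25) ** 2) / 6.0))
tops = find_treetops(dsm)
```

Crown size can then be estimated by growing a region around each peak until the height drops below a fraction of the peak value; color and texture, as in the paper, help reject rooftops that also form local maxima.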


Multiple methods of yaw control for multi-rotor aircraft
Paper 10190-47

Author(s):  Harris Edge, U.S. Army Research Lab. (United States), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 11: ISR Vehicle Steering/Control

Operating multi-rotor aircraft with precision and performing work in the near-Earth environment poses a number of challenges that may benefit from additional options for controlling aircraft position, attitude, and yaw. Lightweight multi-rotor aircraft typically use conservation of angular momentum to control yaw. As payload and multi-rotor size increase, the ratio of rotating motor-and-propeller mass to total platform mass may decrease to the point where changing the momentum of the thrust-generating motors and rotors becomes less effective and less efficient for yaw control. Alternative yaw control methods will be discussed with example implementations.


Particle flow filter-based airborne-SLAM
Paper 10190-50

Author(s):  Erol Duymaz, Turkish Air Force Academy (Turkey), et al.
Conference 10190: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII
Session 12: ISR Image Processing


Bathymetric depth sounder with novel echo signal analysis based on exponential decomposition
Paper 10191-13

Author(s):  Andreas Ullrich, RIEGL Laser Measurement Systems GmbH (Austria), et al.
Conference 10191: Laser Radar Technology and Applications XXII
Session 3: Systems and Applications III

We present a laser range finder specifically designed for bathymetric surveying. The compact and lightweight instrument is capable of measuring through the water surface, making it ideally suited for generating profiles of waterbodies. It sends out laser pulses at a rate of 4 kHz, and the echo signal for each pulse is digitized and recorded over the entire range gate. The waveforms are processed by an algorithm based on exponential decomposition, which uses segments of an exponential function to model the backscatter cross-section of the target objects. This leads to high point accuracy and to automatic target classification.


Measuring laser reflection cross-section of small unmanned aerial vehicles for laser detection, ranging and tracking
Paper 10191-14

Author(s):  Martin Laurenzis, Institut Franco-Allemand de Recherches de Saint-Louis (France), et al.
Conference 10191: Laser Radar Technology and Applications XXII
Session 4: Signatures and Phenomenology

A systematic evaluation of different sensor technologies requires fundamental knowledge about the nature of the target. In this paper, we focus on detection, tracking, and identification with laser sensing technologies, namely laser gated viewing and scanning LiDAR. The application of laser detection and ranging systems is discussed in theory and first experimental results are presented. Further, fundamental physical properties of different UAVs are investigated, with a special focus on laser reflection characteristics. Laser reflection cross-sections (LSCS) are determined, and their impact on system performance, in terms of detection, recognition, and identification (DRI) ranges, is discussed.


Airborne compatible kilowatt class ultra-low SWaP fiber-delivered laser diode pump sources for directed energy
Paper 10192-13

Author(s):  John Goings, Lasertel, Inc. (United States), et al.
Conference 10192: Laser Technology for Defense and Security XIII
Session 3: Laser Diode Development

Directed energy applications have placed a significant emphasis on improving the efficiency and brightness of the laser diodes used as pumps in fiber lasers. Reductions in size and volume are also at a premium. This paper will present a fiber-delivered laser diode assembly that provides kilowatt-level powers with a weight ratio < 0.3 grams/Watt and a volume ratio < 0.2 cm3/Watt. Electrical-to-optical efficiency approaches 60% out of the fiber in a MIL-qualified package. The device can utilize airborne-compatible cooling fluids such as alcohol mixtures and can be scaled to multi-kilowatt output powers.


Situation awareness-based agent transparency for human-autonomy teaming effectiveness
Paper 10194-68

Author(s):  Jessie Y. C. Chen, U.S. Army Research Lab. (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 13: Advanced Sensor Systems for Human-Machine Teaming I: Joint session with conferences 10194 and 10195

We developed the Situation awareness-based Agent Transparency (SAT) model to support operator situation awareness of the mission environment involving the agent; the model includes the agent's current actions and plans (Level 1), its reasoning process (Level 2), and its projection of future outcomes (Level 3). Human-in-the-loop simulation experiments (RoboLeader, Autonomous Squad Member, and IMPACT) have been conducted to illustrate the utility of the model for human-autonomy team interface designs. Across studies, the results consistently showed that human operators' task performance improved as the agents became more transparent, and operators perceived transparent agents as more trustworthy.


Curious Partner: an autonomous system that proactively dialogues with human teammates
Paper 10194-69

Author(s):  J. Willard Curtis, Air Force Research Lab. (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 13: Advanced Sensor Systems for Human-Machine Teaming I: Joint session with conferences 10194 and 10195


IMPACT machine learning efforts
Paper 10194-72

Author(s):  Douglas S. Lange, SPAWAR Systems Ctr. Pacific (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 14: Advanced Sensor Systems for Human-Machine Teaming II: Joint session with conferences 10194 and 10195.

Amplifying human ability for controlling complex environments featuring autonomous units can be aided by learned models of human and system performance. In developing a command and control system that allows a small number of people to control a large number of autonomous teams, we employ an autonomics framework to manage the networks that represent mission plans and the networks that are composed of human controllers and their autonomous assistants. Machine learning allows us to build models of human and system performance useful for monitoring plans under intermittent communications and managing human attention and task loads.


Acting as a scalable team in unstructured environments
Paper 10194-73

Author(s):  Thomas Apker, U.S. Naval Research Lab. (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 14: Advanced Sensor Systems for Human-Machine Teaming II: Joint session with conferences 10194 and 10195.


Decentralized asset management for collaborative sensing
Paper 10194-74

Author(s):  Raj Malhotra, Air Force Research Lab. (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 14: Advanced Sensor Systems for Human-Machine Teaming II: Joint session with conferences 10194 and 10195.

Increased interest in leveraging numerous Small Unmanned Aerial Systems (SUAS) for collaborative sensing has motivated the development of decentralized approaches that can scale and are robust to realistic operating conditions. We introduce a decentralized approach, based upon information theory and distributed fusion, that enables us to scale up to large numbers of collaborating SUAS platforms. Our simulation results further demonstrate that our approach outperforms the more static management strategies employed by human operators and achieves results similar to a centralized optimization approach while being more scalable and robust. Finally, we describe the limitations of our approach and future directions for our research.


Tier-scalable reconnaissance: the future in autonomous C4ISR systems has arrived
Paper 10194-76

Author(s):  Wolfgang Fink, The Univ. of Arizona (United States), et al.
Conference 10194: Micro- and Nanotechnology Sensors, Systems, and Applications IX
Session 15: Autonomous C4ISR Systems of the Future: Joint session with conferences 10194 and 10205

Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous or inaccessible operational areas. Such future missions will require increasing degrees of operational autonomy: (1) automatic characterization of operational areas from different vantages; (2) automatic sensor deployment and data gathering; (3) automatic feature extraction, including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic deployment of robotic agents. This talk touches on several aspects of autonomous C4ISR systems, including multi-tiered mission architectures, robotic platform development, and autonomous decision making based on sensor-data fusion, anomaly detection, and target prioritization.


Assessment of RCTA research
Paper 10195-1

Author(s):  Craig Lennon, U.S. Army Research Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 1: Robotics CTA

The Army Research Laboratory’s Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates with which soldiers can work. This requires the integration of fundamental and applied research in robotic perception, intelligence, manipulation, mobility, and human-robot interaction. Research that was assessed during 2015 and 2016 included technologies for learning applied to manipulation tasks, energy efficient planning, and the integration of perception and intelligence onto new platforms. We present details of these assessments and their results.


Using deep learning to bridge the gap between perception and intelligence
Paper 10195-2

Author(s):  Arne J. Suppe, Carnegie Mellon Univ. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 1: Robotics CTA


Gait design and optimization for efficient running of a direct-drive quadrupedal robot
Paper 10195-3

Author(s):  Jonathan E. Clark, Florida State Univ. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 1: Robotics CTA

Over the years, legged robots have utilized a number of strategies for developing running gaits. Even within the family of diagonally-symmetric trotting gaits there are many possible strategies for developing fast, stable, and efficient running. While individual robots' gaits have been optimized for various criteria, little work has been done to systematically compare fundamentally different strategies or to identify which features of the gait design result in optimal running. In this study we examine Minitaur, a direct-drive quadrupedal robot. The hollow-core motors and 5-bar linkage peculiar to its design allow the generation of high torques at very high speeds. One weakness of this design, however, is the lack of passive energy-storage elements in the legs, so the resulting gaits tend to suffer in energetic efficiency. Furthermore, the best hand-tuned gaits developed thus far for Minitaur reach running speeds of only 1.5 m/s, while studies with the biologically inspired reduced-order dynamic SLIP model predict that a robot of this size should be able to run at up to 2.5 m/s. While fast and stable gaits have been developed with single-legged hoppers, in this study we focus on how well these approaches carry over to the whole-body dynamics of a quadruped. In particular, we compare trajectory optimization with a speed-weighted cost function to the SLIP-based Adaptive Energy Removal (AER) strategy developed for maximizing stability over rough terrain, and we examine the role of leg posture, stroke length, compliance, and frequency in the resulting running speed and efficiency.
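The reduced-order SLIP model mentioned above can be illustrated with a minimal stance-phase integration of a spring-loaded inverted pendulum. This is a generic sketch, not the paper's implementation; all parameter values are arbitrary placeholders.

```python
import math

def slip_stance(m=5.0, k=2000.0, r0=0.30, g=9.81,
                rd0=-1.0, th0=-0.3, thd0=5.0, dt=1e-5):
    """Integrate SLIP stance dynamics in polar coordinates about the
    foot (leg length r, leg angle th from vertical) with explicit
    Euler until the spring returns to rest length (liftoff)."""
    r, rd, th, thd = r0, rd0, th0, thd0
    t = 0.0
    while True:
        # Radial: centrifugal term + gravity + linear leg spring.
        rdd = r * thd ** 2 - g * math.cos(th) + (k / m) * (r0 - r)
        # Angular: gravity torque about the foot + Coriolis coupling.
        thdd = (g * math.sin(th) - 2.0 * rd * thd) / r
        r += rd * dt
        rd += rdd * dt
        th += thd * dt
        thd += thdd * dt
        t += dt
        if r >= r0:  # spring back at rest length -> liftoff
            return r, rd, th, thd, t

def energy(m, k, r0, g, r, rd, th, thd):
    """Total mechanical energy: kinetic + gravity + spring."""
    kin = 0.5 * m * (rd ** 2 + (r * thd) ** 2)
    pot = m * g * r * math.cos(th) + 0.5 * k * (r0 - r) ** 2
    return kin + pot
```

Since the model is conservative, the energy at liftoff should match the touchdown energy up to integration error, which makes the sketch easy to sanity-check.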


Ground-based self-righting using inertial appendage methods
Paper 10195-4

Author(s):  James Dotterweich, Engility Corp. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 1: Robotics CTA

The ability to recover from tip-over events is critical for robots operating autonomously in the real world. This work extends a previously developed self-righting framework to include dynamic righting solutions using appendages. It uses the zero moment point concept to generate momentum in the appendage as well as the transfer of that momentum from the appendage to the body. This can be done both for inducing desired tip-over and for controlling impact energy. Finally, the proposed methods are validated on a physical robot, and the improvement to its ability to right itself is quantified as compared with quasi-static solutions.
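As a rough illustration of the zero moment point concept referenced above (not the authors' implementation), a point-mass ZMP check against a hypothetical support interval might look like:

```python
def zmp_x(x_com, z_com, xdd_com, g=9.81):
    """ZMP along x for a point-mass model at (near-)constant CoM
    height, ignoring angular momentum about the CoM."""
    return x_com - (z_com / g) * xdd_com

def will_tip(x_com, z_com, xdd_com, support=(-0.15, 0.15)):
    """Tip-over is indicated when the ZMP leaves the support interval
    (a hypothetical footprint, not a real robot's)."""
    z = zmp_x(x_com, z_com, xdd_com)
    return not (support[0] <= z <= support[1])
```

An appendage swing that accelerates the center of mass hard enough drives the ZMP outside the support polygon, which is how a deliberate tip-over for self-righting can be induced.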


Experimental verification of distance and energy efficient motion planning on a skid steered platform
Paper 10195-5

Author(s):  James Pace, Florida State Univ. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 2: Mobility and Navigation


Autonomous UAV search planning with possibilistic inputs
Paper 10195-7

Author(s):  Paul Elmore, U.S. Naval Research Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 2: Mobility and Navigation

We consider methods for incorporating human-supplied information into the probabilistic inputs used by an autonomous system. In particular, we focus on path planning for autonomous systems: deciding where to move in order to quickly complete a task, given human sources of information. We have developed a simulation platform on which to test the effectiveness of methods for informing a priori information in a search-and-rescue scenario. In particular, we use possibility theory to represent the subjective information and apply possibilistic conditioning to the probability distribution.
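To illustrate the idea of conditioning a prior probability distribution on a possibilistic human report, here is a minimal sketch using a simple rescale-and-renormalize rule; this is one of several conditioning operators in the literature, and the paper's actual operator may differ.

```python
def possibilistic_condition(p, pi):
    """Rescale a prior probability distribution p by a possibility
    distribution pi (one value in [0, 1] per cell) and renormalize.
    Cells the report deems impossible (pi = 0) get zero probability."""
    w = [pi_i * p_i for p_i, pi_i in zip(p, pi)]
    s = sum(w)
    if s == 0.0:
        raise ValueError("report fully contradicts the prior")
    return [x / s for x in w]
```

For example, a report "the target is not in cell 4, and cell 3 is only somewhat possible" over a uniform four-cell prior becomes `pi = [1, 1, 0.5, 0]`, shifting search effort toward the first two cells.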


Online location recognition for drift-free trajectory estimation and efficient autonomous navigation
Paper 10195-8

Author(s):  Deepak Khosla, HRL Labs., LLC (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 2: Mobility and Navigation

We present a novel online vision-based location recognition (OLR) algorithm for mobile robots, e.g., UAVs. The OLR is based on fast and efficient interest-point detection and feature-based keypoint matching. It incrementally constructs a database of visited locations and robustly recognizes revisited locations, even in complex and cluttered scenes. The OLR capability is quantitatively evaluated using a mobile robot setup in a multi-room office building environment. We further present and validate two applications of the OLR algorithm: (1) Drift-free trajectory estimation, (2) Efficient autonomous navigation. Our simulations and real-world experiments demonstrate the accuracy and efficiency of the proposed algorithm and applications.


Development of a small satellite primarily inertial autonomous self-correcting attitude determination and control system
Paper 10195-9

Author(s):  Mark McDonald, North Dakota State Univ. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 2: Mobility and Navigation

This paper discusses the software for a satellite inertial autonomous self-correcting attitude determination and control system (ADCS), along with the mathematical models used and the challenges that have arisen. The ADCS must continuously and automatically update its control profile after deployment, compensating for a potentially changing center of mass and removing the requirement to test the system in the orbital environment to develop the initial profile. A rough attitude estimate is provided by the accelerometers and solar cells and refined using an outward-facing star sensor. A fallback mode offering limited functionality, low computational load, and environmental exploration has also been developed.


Rapid abstract perception to enable tactical unmanned system operations
Paper 10195-10

Author(s):  Stephen P. Buerger, Sandia National Labs. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 3: Perception

As unmanned systems (UMS) proliferate for security and defense applications, autonomous control system capabilities that enable them to perform tactical operations are of increasing interest. We deconstruct the tactical autonomy problem, identify the key technical challenges, and place them into context with the state of the art. We present work in two key areas: we summarize our work to date in tactical reasoning, and we present initial results from a new research program focused on abstract perception in tactical environments.


A testbed for evaluating LIDAR as a sense and avoid sensor for autonomous aerial systems
Paper 10195-12

Author(s):  Wade W. Brown, Semaphore Scientific, Inc. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 3: Perception

LIDAR is seen by many as at least a partial solution for Sense and Avoid (SAA) sensing for autonomous vehicles, whether on land, in the air, or in space. Given the anticipated explosion in the number of UAS flying in the National Airspace System (NAS), work to evaluate LIDAR's efficacy as an SAA sensor needs to be performed. We discuss a simulation built as a testbed for evaluating LIDAR for SAA.


A perception pipeline for expeditionary autonomous ground vehicles
Paper 10195-14

Author(s):  Josh Zapf, SPAWAR Systems Ctr. Pacific (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 6: ONR 30 Ground Vehicle Autonomy I

Expeditionary environments create special challenges for perception systems in autonomous ground vehicles. To address these challenges, a perception pipeline has been developed that fuses data from multiple sensors (color, thermal, LIDAR) with different sensing modalities and spatial resolutions. The paper begins with in-depth discussion of the multi-sensor calibration procedure. It then follows the flow of data through the perception pipeline, detailing the process by which the sensor data is combined in the world model representation. Topics of interest include stereo filtering, stereo and LIDAR ground segmentation, pixel classification, 3D occupancy grid aggregation, and cost map generation.


Domain adaptation for semantic segmentation in offroad driving scenarios
Paper 10195-15

Author(s):  Jeremie A. Papon, Jet Propulsion Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 6: ONR 30 Ground Vehicle Autonomy I

Recent research in perception for autonomous driving has focused on cities and their unique visual challenges, resulting in the compilation of large annotated datasets for training data-hungry deep neural networks. While this has proven effective in the city-driving domain, the significant cost of creating annotated datasets makes it prohibitive for other applications. Nevertheless, the inherent lack of structure in unimproved and off-road driving scenarios makes these settings natural applications for deep networks. We address this gap by adapting existing datasets created for segmentation of urban driving data to off-road applications.


Wheel placement reasoning in complex terrain
Paper 10195-17

Author(s):  Jimmy S. Gill, Neya Systems LLC (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 6: ONR 30 Ground Vehicle Autonomy I

By employing selective reasoning for wheel placement on uneven terrain, planning systems generate safer trajectories that reduce overall mission risk and expand an autonomous vehicle's mission portfolio by enabling it to navigate previously untraversable terrain. We show successful wheel placement reasoning in a diverse set of environments, including unimproved roads and expeditionary scenarios. We quantify the performance improvement by measuring path and control deviation from the human-selected best route, ride safety measures (vertical acceleration, vibration, and chassis power absorption), and chassis roll and pitch relative to established rollover thresholds.


Augmenting autonomous vehicle sensor processing with prior data
Paper 10195-18

Author(s):  Elliot Johnson, Southwest Research Institute (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 6: ONR 30 Ground Vehicle Autonomy I

Sensor augmentation with a priori data effectively expands the sensing range and adds new sensing modalities. High resolution elevation maps are registered to live estimates of the ground height to provide robust absolute position estimates and to avoid steep slopes which may be unobservable by the sensors. Live sensor data is fused into persistent maps that are continuously relaxed and updated with new information. The map provides a robust location estimate and can be fused with the current live costmap to fill in gaps beyond the sensing horizon, allowing the navigation system to effectively re-route vehicles over large distances.


Mission planning, execution and modeling for teams of unmanned vehicles
Paper 10195-19

Author(s):  Jean-Pierre de la Croix, Jet Propulsion Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 7: ONR 30 Ground Vehicle Autonomy II

Mission Planning, Execution and Modeling (MPEM) is a user-friendly graphical framework for mission design and execution. It extends a subset of Business Process Model and Notation (BPMN) 2.0 for robotic applications. The hierarchical abstractions fundamental to BPMN allow a mission to be naturally decomposed into interdependent parallel sequences of BPMN elements. MPEM adapts these elements in a role-based framework that uses collaborative control modalities as atomic building blocks. Designed missions can consider situational data, external stimuli, and direct user interaction, and are directly executable using a resource manager and a ROS-based execution engine.


Adaptive formation control for route-following ground vehicles
Paper 10195-20

Author(s):  Greg N. Droge, SPAWAR Systems Ctr. Pacific (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 7: ONR 30 Ground Vehicle Autonomy II

For problems such as route inspection or snow removal, vehicles must coordinate spatially and temporally to ensure clearance of the route and avoid inter-vehicle collisions. The spatial relationship of vehicle paths will require adaptation to navigate around previously unknown obstacles and adjust for varying width of the route. Temporal constraints are important to ensure vehicles do not collide. The presented work decouples the spatial and temporal constraints to allow for rapid, coordinated planning. This is accomplished by combining receding horizon planning with a modified platooning approach to cooperatively adapt to the environment while satisfying the constraints.


Designing an operator control unit for cooperative autonomous unmanned systems
Paper 10195-21

Author(s):  Paul Candela, SPAWAR (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 7: ONR 30 Ground Vehicle Autonomy II

Interfacing with and supervising a team of heterogeneous robotic systems across varied, dynamic mission sets places significant demands on the design of an operator control unit (OCU). Access to a well-defined system architecture for external interfacing gives the OCU control over different levels of the system, from low-level tasks like teleoperation to high-level coordination of multiple vehicles. A robust dynamic device-discovery mechanism keeps the array of capabilities and payloads up to date. Lastly, a flexible front end conveys situational awareness by visualizing the system and informing the user of pertinent information without cluttering the display.


A systematic approach to autonomous unmanned system experimentation
Paper 10195-22

Author(s):  Ryan Halterman, SPAWAR Systems Ctr. Pacific (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 7: ONR 30 Ground Vehicle Autonomy II


CARACaS multi-agent maritime autonomy for unmanned surface vehicles in the Swarm II harbor patrol demonstration
Paper 10195-24

Author(s):  Michael Wolf, Jet Propulsion Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 8: Self-organizing, Collaborative Unmanned Robotic Team: Joint session with conferences 10195 and 10205

This paper describes new autonomy technology that enabled a team of unmanned surface vehicles (USVs) to execute cooperative behaviors in the “USV Swarm II” harbor patrol demonstration, using the NASA Jet Propulsion Laboratory’s CARACaS autonomy architecture. In USV Swarm II, CARACaS demonstrated higher levels of autonomy and more complex cooperation than previous on-water exercises, using full-sized vehicles and real-world sensing and communication. Significantly, CARACaS not only executed tasks such as Patrol, Track, Inspect, and Trial safely and efficiently but also recognized what tasks needed to be accomplished based on a dynamic world model, eliminating the need for an operator in the loop.


Joint communications architecture for unmanned systems (JCAUS)
Paper 10195-26

Author(s):  Shad M. Reese, TWS, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 9: Communication Systems for Small Unmanned Vehicles

To streamline radio logistics and accelerate the pace of technology injection into DoD unmanned-system communities, the AT&L Joint Ground Robotics Enterprise (JGRE) has developed a new-generation communications architecture called the Joint Communications Architecture for Unmanned Systems (JCAUS). A family of systems will be developed to address capabilities such as frequency allocation, spectrum supportability, interoperability, waveform policy, information assurance, and environmental requirements. The purpose of this architecture is to enforce standards and modularity, yielding the benefits of reduced total ownership cost, accelerated transition, improved interoperability, and increased innovation. This paper provides the background and technology highlights of this effort.


Cybersecurity for unmanned systems
Paper 10195-28

Author(s):  John Yen, SPAWAR Systems Ctr. Pacific (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 9: Communication Systems for Small Unmanned Vehicles

Unmanned systems present unique challenges to cybersecurity developers: the need to secure these systems and protect classified information; very low Size, Weight, and Power (SWaP) constraints; and long, costly approval processes for deployment. Modularizing the communications architecture to take advantage of rapid technological advancements often results in a Cross Domain Solution (CDS) requirement. This paper describes work supporting unmanned-systems cybersecurity: investigating currently approved cryptographic and CDS tactical products to assess their suitability for unmanned-system operations, investigating potential new technologies that have not yet reached the operational stage, and proposing a way ahead.


Seamless cryptography and key management for secure and agile UAS communication
Paper 10195-29

Author(s):  Roger Khazan, MIT Lincoln Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 9: Communication Systems for Small Unmanned Vehicles

LOCKMA (Lincoln Open Cryptographic Key Management Architecture) is a software component designed to significantly simplify the task of adding cryptographic protections and underlying key management to software applications and embedded devices, such as unmanned vehicles and sensors. In this paper, we describe several UAS-inspired use cases and show how LOCKMA can naturally support them to ensure the security of UAS communications.


Intelligent shared control of a small UGV
Paper 10195-32

Author(s):  Jared Giesbrecht, Defence Research and Development Canada, Suffield (Canada), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 10: HRI

This paper overviews the development of a shared autonomy system for small unmanned ground vehicles operating in indoor environments, focused on driving assistance technologies to reduce the burden of performing low-level tasks when operating in difficult areas. The system also provides a safety layer to prevent the robot from becoming disabled due to operator error or environmental hazards. Examples include behaviours for obstacle avoidance, stair climbing, and retreat from communications loss. The software was integrated on a QNA Talon IV robot and tested by military operators in a relevant environment.


Unobtrusive assistance of obstacle avoidance to tele-operation of ground vehicles
Paper 10195-33

Author(s):  Mingfeng Zhang, MDA Corp. (Canada), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 10: HRI

This paper presents a new obstacle avoidance algorithm that provides unobtrusive assistance for tele-operation of unmanned ground vehicles. By projecting an operator's steering commands into short-term trajectories of the tele-operated vehicle through an inverse kinematic model, the algorithm determines whether the commands are safe in the presence of obstacles and automatically adjusts unsafe commands, in an unobtrusive manner, to keep the vehicle clear of proximate obstacles. The algorithm has been implemented on a military-grade security robot and has been tested and assessed by professional operators of security robots in realistic environments.


Assessing autonomy vulnerabilities in military vehicles
Paper 10195-34

Author(s):  Craig Lennon, U.S. Army Research Lab. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 11: Special Topics

The Autonomous Ground Resupply (AGR) Program is a U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC) effort to reduce the number of soldiers required for ground resupply. One objective of this program involves providing vehicles with the capability to operate unmanned in a variety of circumstances. Prior to fielding, this system is undergoing an assessment to identify possible vulnerabilities. We use this system to illustrate a general approach to the assessment of the autonomous decision making, describing a process which could be used to identify vulnerabilities in the autonomy of military systems without describing any actual vulnerabilities discovered in the AGR system.


The 25th Annual Intelligent Ground Vehicle Competition (IGVC): Building engineering students into roboticists
Paper 10195-35

Author(s):  Andrew Kosinski, U.S. Army Tank Automotive Research, Development and Engineering Ctr. (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 11: Special Topics

The IGVC is a college-level autonomous unmanned ground vehicle (UGV) competition that encompasses a wide variety of engineering professions: mechanical, electrical, and computer engineering, and computer science. It requires engineering students from these varied professions to collaborate to develop a truly integrated engineering product: a fully autonomous UGV. Students must overcome a wide variety of technical challenges in control theory, power requirements and distribution, cognition, machine vision, vehicle electronics, sensors, systems integration, vehicle steering, fault tolerance and redundancy, noise filtering, PCB design, engineering analysis, design, fabrication, field testing, lane following, obstacle avoidance, vehicle simulation and virtual evaluation, GPS/waypoint navigation, safety design, and more.


Fast reinforcement learning based distributed optimal flocking control and network co-design for uncertain networked multi-UAV system
Paper 10195-38

Author(s):  Hao Xu, Univ. of Nevada, Reno (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 11: Special Topics

Military applications require networked multi-UAV systems to perform practically, optimally, and reliably under changing mission requirements, but the lack of effective control and communication algorithms is significantly impeding the development of such systems. In this paper, the distributed optimal flocking control and network co-design problem is investigated for networked multi-UAV systems with uncertain, harsh environments and unknown dynamics. Adopting neuro-dynamic programming and fast reinforcement learning techniques, the proposed scheme can learn the optimal co-design under an uncertain, harsh environment while relaxing the requirement for known multi-UAV dynamics. Both theoretical analysis and hardware-in-the-loop simulation results demonstrate the effectiveness of the proposed scheme.


Model-free adaptive controller for autonomous aerial transportation of suspended loads with unknown characteristics
Paper 10195-39

Author(s):  Luis Rodolfo Garcia Carrillo, Univ. of Nevada, Reno (United States), et al.
Conference 10195: Unmanned Systems Technology XIX
Session 11: Special Topics

We present a novel model-free adaptive wavenet PID-based controller that enables a UAS to transport suspended loads of unknown characteristics. The designed controller enables the UAS to perform a trajectory tracking task based solely on knowledge of the UAS position. We propose a novel structure that identifies the inverse error dynamics using a radial-basis neural network with Mexican-hat daughter wavelets as the activation function. A real-time load transportation mission, consisting of a UAS carrying a cable-suspended load, validates the effectiveness of the trajectory tracking control strategy even when the mathematical models of the UAS and load dynamics are unknown.


UAV path planning in absence of GPS
Paper 10195-43

Author(s):  Hassan El-Sallabi, Qatar Air Force (Qatar), et al.
Conference 10195: Unmanned Systems Technology XIX
Session PTue: Posters-Tuesday

Unmanned aerial vehicles (UAVs) are a growing technology with huge potential in both military and civilian applications. UAVs rely on GPS signals for navigation and path planning: a pilot can remotely handle guidance, navigation, and control during flight, or the vehicle can fly to specified destination coordinates along an optimized path developed from pre-programmed waypoints. However, there are many situations in which the GPS signal is unavailable, whether due to interference or to too few satellites being visible in the area of interest; in urban environments, for example, GPS signals may be blocked by high-rise buildings. High-grade inertial measurement units are not an option due to their cost, so the UAV needs affordable aids for localization. Urban environments, however, are typically dense with cellular signals from widely distributed towers, with CDMA and LTE available in addition to the long-established GSM technology. In this work, we present how to use cellular signals from different towers and sectors to provide localization information to the UAV, combined with an environment canopy database (heights of buildings, trees, etc.) used in the path planning algorithm. We show how to build three-dimensional radio-frequency (RF) maps for the different technologies, and how these RF maps can be used by the UAV's guidance, navigation, and control system to position itself and select trajectories in the three-dimensional space of an urban environment.
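The RF-map positioning idea can be sketched as a nearest-fingerprint lookup over a precomputed map. The data layout below (grid point to expected received signal strength per tower) is a hypothetical simplification, not the authors' system.

```python
import math

def locate(measured, rf_map):
    """Return the map grid point whose RF fingerprint best matches
    the measured RSS vector.

    measured: dict of tower_id -> measured RSS (dBm)
    rf_map:   dict of (x, y, z) -> dict of tower_id -> expected RSS
    """
    def mismatch(fingerprint):
        # Compare only towers present in both the map and the scan.
        common = set(fingerprint) & set(measured)
        return math.sqrt(sum((fingerprint[t] - measured[t]) ** 2
                             for t in common))
    return min(rf_map, key=lambda pos: mismatch(rf_map[pos]))
```

A real system would interpolate between grid points and fuse successive fixes over time; this sketch only shows the core lookup.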


Three dimensional scene construction from a two dimensional image
Paper 10199-9

Author(s):  Franz Parkins, The Univ. of Memphis (United States), et al.
Conference 10199: Geospatial Informatics, Motion Imagery, and Network Analytics VII
Session 2: Photogrammetry and Uncertainty Propagation

We propose a method of constructing a three-dimensional scene from a two-dimensional image for the purpose of developing and augmenting world models for autonomous navigation. This is achieved by adapting the Rother-Sapiro general framework, which partitions 3D space into voxels and uses maximum likelihood estimation to infer the pose of a single object. We extend the framework to the multiple objects that comprise a scene by using object recognition and segmentation. The constraints imposed by autonomous navigation require an embeddable implementation; to that end, we deploy our parallelized solution on the NVIDIA Jetson TX1.


Correlation-agnostic fusion for improved uncertainty estimation in multi-view geo-location from UAVs
Paper 10199-10

Author(s):  Clark N. Taylor, Air Force Research Lab. (United States), et al.
Conference 10199: Geospatial Informatics, Motion Imagery, and Network Analytics VII
Session 2: Photogrammetry and Uncertainty Propagation

When geo-locating ground objects from a UAV, multiple views of the same object can lead to improved geo-location accuracy. Of equal import to the location estimate, however, is the uncertainty estimate associated with that location. Standard methods for estimating uncertainty from multiple views generally assume that each view represents an independent measurement of the geo-location. Unfortunately, this assumption is often violated due to correlation between the location estimates. In this paper, we apply correlation-agnostic fusion techniques to the multi-view geo-location and analyze their effects on geo-location and predicted uncertainty accuracy.
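One widely used correlation-agnostic fusion rule is covariance intersection, which stays consistent without knowing the cross-correlation between estimates. A minimal sketch follows; the paper may evaluate this or other techniques.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse two estimates (x1, P1) and (x2, P2) with unknown
    cross-correlation. omega in [0, 1] weights the information
    (inverse-covariance) contributions; in practice it is often
    chosen to minimize the trace or determinant of the fused P."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1i + (1 - omega) * P2i)
    x = P @ (omega * P1i @ x1 + (1 - omega) * P2i @ x2)
    return x, P
```

Unlike the independent-measurement update, the fused covariance here never shrinks below what either input alone justifies, which is exactly the conservative behavior wanted when views of the same object are correlated.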


Quantifying sea ice with unmanned aerial vehicles
Paper 10199-11

Author(s):  Johanna Hansen, The Univ. of Texas at San Antonio (United States), et al.
Conference 10199: Geospatial Informatics, Motion Imagery, and Network Analytics VII
Session 2: Photogrammetry and Uncertainty Propagation

This paper presents a framework for understanding sea ice observed from low altitude with a high-resolution camera mounted on an Unmanned Aerial Vehicle (UAV). The approach described is able to automatically build a map from a large set of sea ice images as well as describe the types and concentration of ice in the scene. This capability is crucial for scientists trying to understand polar environments and for ships navigating in ice-laden waters. We demonstrate advances made in determining image registration for near featureless scenes containing only ice. We also compare our ice classification techniques to previous work and demonstrate improvement in performance without the need for human guidance.


Radar quality effects on CPA calculation for non-cooperative A/C for unmanned detect and avoid systems
Paper 10200-31

Author(s):  Charles A. Rea, Naval Air Systems Command (United States), et al.
Conference 10200: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Session 6: Information Fusion Methodologies and Applications III

The paper will discuss the effects of various radar qualities on the accuracy and stability of the closest point of approach calculation for pilots of unmanned air vehicles encountering a non-cooperative aircraft.
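For constant-velocity tracks, the closest point of approach calculation under discussion reduces to a short formula; a generic sketch, independent of any particular radar model:

```python
def cpa(p_own, v_own, p_tgt, v_tgt):
    """Time and distance of closest point of approach for two
    constant-velocity tracks; works in any number of dimensions."""
    dp = [a - b for a, b in zip(p_tgt, p_own)]   # relative position
    dv = [a - b for a, b in zip(v_tgt, v_own)]   # relative velocity
    dv2 = sum(v * v for v in dv)
    # Minimize |dp + dv*t|; clamp to t >= 0 (ignore past approaches).
    t = 0.0 if dv2 == 0.0 else max(
        0.0, -sum(p * v for p, v in zip(dp, dv)) / dv2)
    d = sum((p + v * t) ** 2 for p, v in zip(dp, dv)) ** 0.5
    return t, d
```

Radar quality enters through the noisy estimates of the non-cooperative aircraft's position and velocity; the accuracy and stability questions the paper studies concern how that noise propagates into the computed time and distance.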


The effect on covariance consistency from self-intoxication
Paper 10200-32

Author(s):  Charles A. Rea, Naval Air Systems Command (United States), et al.
Conference 10200: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Session 6: Information Fusion Methodologies and Applications III

This paper investigates the effects of self-intoxication on covariance consistency. Two cases are considered: angle-only tracks associated to radar detections, and radar tracks associated to angle-only detections. Lastly, the authors discuss methods for maintaining a coherent tactical picture while maximizing data use and minimizing self-intoxication among participants.


Association metrics conditioned on associating same source data or different source data
Paper 10200-33

Author(s):  Charles A. Rea, Naval Air Systems Command (United States), et al.
Conference 10200: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Session 6: Information Fusion Methodologies and Applications III

The unconditioned probabilities of missed and incorrect association presented by Silbert and Agate fail to capture the true association performance when the update rates of the sources differ. This paper addresses that problem by providing conditional probabilities of missed and incorrect association. We propose calculating the conditional probabilities of missed association and of incorrect association to give greater visibility into the performance of a track-to-track association (T2TA) algorithm. In particular, these conditional probabilities quantify association performance when associating tracks from the same source and when associating tracks from different sources.
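The conditioning idea can be illustrated by tallying missed associations separately for same-source and cross-source track pairs. The bookkeeping below is a hypothetical toy, not the paper's metric definitions.

```python
def conditional_missed_rates(pairs):
    """pairs: iterable of (same_source, associated) flags for track
    pairs known to share a common target. Returns missed-association
    rates conditioned on whether the pair came from the same source."""
    counts = {True: [0, 0], False: [0, 0]}  # same-source? -> [missed, total]
    for same, associated in pairs:
        counts[same][1] += 1
        if not associated:
            counts[same][0] += 1
    return {('same' if k else 'cross'): (m / n if n else None)
            for k, (m, n) in counts.items()}
```

Reporting the two rates separately reveals, for instance, an algorithm that associates well within one source but poorly across sources, which a single unconditioned rate would average away.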


Contact detection and analysis system for image-based classification of vessels in the Swarm II Harbor Patrol demonstration
Paper 10200-44

Author(s):  Michael T. Wolf, Jet Propulsion Lab. (United States), et al.
Conference 10200: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Session 8: Signal and Image Processing, and Information Fusion Applications II

This paper describes new methods and associated integrated software as part of the Contact Detection and Analysis System (CDAS) to provide unmanned surface vehicles (USVs) with the capability to detect and identify unknown contacts from captured images. With the “SwampHammer” four-camera sensing platform, the system produces stereo detections and, when requested (i.e., cued), optical object detections and identifications. The system’s data products were used to inform high-level decision-making in the “USV Swarm II” harbor patrol demonstration, in which a swarm of autonomous USVs patrolled an area, identified unknown contacts, and executed autonomous behaviors according to CDAS’ identification results.


Multi-sensor field trials for detection and tracking of multiple small unmanned aerial vehicles flying at low altitude
Paper 10200-50

Author(s):  Martin Laurenzis, Institut Franco-Allemand de Recherches de Saint-Louis (France), et al.
Conference 10200: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Session 9: Signal and Image Processing, and Information Fusion Applications III

The detection and tracking of small UAVs flying at low altitude, and the detection of multiple UAVs at the same time, is a challenge for state-of-the-art detection technologies. In this context, we discuss results obtained in field trials using a heterogeneous sensor network consisting of acoustic antennas, small FMCW radar systems, and optical sensors. While the acoustic and radar sensors were applied to monitor a wide azimuthal area (360°) and to simultaneously track multiple UAVs, optical sensors with a very narrow field of view were used for sequential identification.


Airborne synthetic aperture radar online image display with georectification and geocoding
Paper 10201-2

Author(s):  Manikandan Samykannu, Defence Research and Development Organisation (India), et al.
Conference 10201: Algorithms for Synthetic Aperture Radar Imagery XXIV
Session 1: Phenomenology and Imaging

In civil and defense applications, airborne Synthetic Aperture Radar (SAR) faces a major problem in image georectification and geocoding when it is applied to large-scale scenes and the platform undergoes large, nonlinear motion. Conventional SAR image formation algorithms assume straight-line flight at constant speed with fixed antenna beam pointing, which leads to large geocoding errors in real time. The error sources include deviation of the sensor flight path, both translational and rotational, caused by atmospheric turbulence, high-altitude winds, or other effects. Translational motion error gives rise to unequal spatial sampling of the acquired SAR raw data and target-to-SAR range errors, which correspond to phase errors in the SAR signal phase history. Rotational motion error contributes to antenna beam pointing error. Across-track and altitude velocity errors affect the rate of change of slant range and introduce azimuthal position errors. Geocoding of the SAR image therefore becomes difficult and is performed offline in most airborne SAR systems. The objective is to display the airborne SAR image in real time, georectified and geocoded with reduced positional error, whether the platform is manned or unmanned. The proposed technique combines sub-aperture SAR processing, adaptive velocity calculation, pulse overlapping, Doppler centroiding, and image mosaicking with respect to aircraft heading, radar geometry, and aircraft positional information to display the SAR image on screen in real time with reduced geocoding errors.
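The sensitivity that drives this problem can be illustrated with the standard two-way phase-error relation (an illustrative calculation, not the paper's algorithm): an uncompensated slant-range deviation Δr produces a phase error of 4πΔr/λ in the phase history.

```python
import math

# Two-way phase error from an uncompensated slant-range deviation.
# Illustrative only; wavelength and deviation values are assumptions.
def phase_error_rad(delta_r_m, wavelength_m):
    return 4.0 * math.pi * delta_r_m / wavelength_m

# At X-band (lambda ~ 3 cm), even a 1 mm uncorrected track deviation
# produces a noticeable phase error: ~0.42 rad, about 24 degrees.
err = phase_error_rad(1e-3, 0.03)
```

This is why millimetre-level motion errors from turbulence must be estimated and compensated before the image can be focused and geocoded.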


Combining high-speed support vector machines with convolutional neural network feature encoding for real-time target recognition in high-definition video for ISR missions
Paper 10202-7

Author(s):  Christine Kroll, Airbus Defence and Space (Germany), et al.
Conference 10202: Automatic Target Recognition XXVII
Session 2: Learning Methods in ATR

For Intelligence, Surveillance and Reconnaissance (ISR) missions of unmanned air systems, typical electro-optical payloads provide high-definition video data that must be exploited in real time with respect to relevant ground targets. For this purpose we propose an automatic target recognition (ATR) method combining the latest advances in deep Convolutional Neural Networks (CNNs) with a proprietary high-speed frequency-domain Support Vector Machine (SVM), allowing for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU). Results on relevant high-definition airborne video sequences demonstrate the real-time processing capability while delivering a substantial classification performance increase over legacy target recognition approaches.
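A hedged sketch of the two-stage idea only: a stand-in "feature encoder" followed by a linear SVM decision function. The real system uses deep CNN features and a frequency-domain SVM on GPGPU; everything below (the encoder, weights, and threshold) is a toy stand-in.

```python
# Feature encoding + linear SVM classification, in miniature.

def encode(patch):
    """Stand-in encoder: simple statistics instead of real CNN features."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    return [mean, var, max(patch), min(patch)]

def svm_score(features, weights, bias):
    """Linear SVM decision value w . x + b; positive means target class."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# Toy weights, not a trained model
weights, bias = [0.5, 1.0, 0.1, -0.1], 0.0
score = svm_score(encode([0.9, 0.8, 1.0, 0.95]), weights, bias)
label = 'target' if score > 0 else 'background'
```

The appeal of the split is that the expensive encoder runs once per frame region, while the final decision is a cheap dot product that parallelizes trivially on a GPU.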


Underwater visual odometry for passive surveillance and navigation
Paper 10202-16

Author(s):  Firooz A. Sadjadi, Lockheed Martin Corp. (United States), et al.
Conference 10202: Automatic Target Recognition XXVII
Session 4: Advanced Processing Methods for ATR II

Passive navigation is a critical issue in underwater surveillance. Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. This paper summarizes the results of studies in underwater odometry using a video camera to estimate the velocity of an unmanned underwater vehicle (UUV). In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach, the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
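The scale-recovery step behind (i)-(iii) can be sketched as follows, assuming a nadir-pointing camera and illustrative parameter values: the ground sample distance is height divided by the focal length in pixels, so a frame-to-frame pixel translation converts directly to metric velocity.

```python
# Velocity from frame-to-frame registration -- an illustrative sketch,
# not the authors' pipeline.  All parameter values are hypothetical.

def velocity_from_registration(dx_px, dy_px, height_m, focal_px, dt_s):
    """Convert a pixel translation between frames into metric velocity."""
    gsd_m = height_m / focal_px            # metres on the seabed per pixel
    return dx_px * gsd_m / dt_s, dy_px * gsd_m / dt_s

# 12 px shift per frame at 10 m altitude, 1000 px focal length, 30 fps:
vx, vy = velocity_from_registration(12.0, 0.0, 10.0, 1000.0, 1.0 / 30.0)
# vx = 3.6 m/s along-track
```

In the paper's formulation the translation (dx, dy) comes from the estimated frame-to-frame transform matrix, and the height above the seabed supplies the metric scale that pure image registration cannot provide.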


UAV imagery analysis: challenges and opportunities
Paper 10204-5

Author(s):  Barbara G. Grant, Grant Drone Solutions, LLC (United States), et al.
Conference 10204: Long-Range Imaging II
Session 2: Applications

As UAV imaging continues to expand, so too do the opportunities for improvements in data analysis. These opportunities, in turn, present their own challenges including the need for real time radiometric and spectral calibration; the continued development of quality metrics facilitating exploitation of strategic and tactical imagery; and the need to correct for platform-induced artifacts in sensor data. This presentation will address these and related issues.


Supporting Army modular open system architecture, utilizing open standards (FACE and CS), extending legacy NAVSEA product line architecture, and engineering through model driven development
Paper 10205-8

Author(s):  Ronald W. Townsen, General Dynamics Mission Systems (United States), et al.
Conference 10205: Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2017
Session 2: Open Architecture Systems II

The Army (PM UAS), with software support from General Dynamics Mission Systems (GDMS), is developing a product-line approach for unmanned vehicle control systems governed by the Future Airborne Capability Environment (FACE™) standard, a Modular Open System Architecture. It is engineered through Model Driven Development (based on the OMG standards UML, SysML, and UPDM, with DoDAF tools) and leverages the Naval Sea Systems Command (NAVSEA) Shipboard Combat Systems Product Line Architecture. The approach employs the modularity of the UAS Control Segment (UCS) design to define and integrate modular processing sections. GDMS brings to the Army its experience developing the first two major modules in the NAVSEA product line, the target-tracking and base-infrastructure Component Frameworks (CF), in support of the DoD's "Better Buying Power" initiative.


The virtual autonomous navigation environment: a high-fidelity modeling and simulation tool for the design and development of unmanned ground vehicles
Paper 10206-19

Author(s):  Zachary T. Prevost, U.S. Army Engineer Research and Development Ctr. (United States), et al.
Conference 10206: Disruptive Technologies in Sensors and Sensor Systems
Session 7: Sensor M&S

Modeling and Simulation (M&S) play a critical role in the design and development of Unmanned Ground Vehicles (UGVs). Many existing M&S tools for UGVs fail to recreate the complex situations the real world has to offer. The U.S. Army Engineer Research and Development Center has developed the Virtual Autonomous Navigation Environment (VANE), a high-fidelity, fully physics-driven M&S tool that addresses the shortcomings of empirical tools. The open architecture of the VANE allows users to insert their own algorithms and models, including a variety of sensor models. In addition, the VANE contains a multi-body dynamics engine for simulating vehicle dynamics.


M&S supporting unmanned autonomous systems (UAxS) concept development and experimentation
Paper 10206-20

Author(s):  James Sidoran, Air Force Research Lab. (United States), et al.
Conference 10206: Disruptive Technologies in Sensors and Sensor Systems
Session 7: Sensor M&S

A formal mathematical calculus called the Mission-Aware Framework (MAF) has been developed. Combined with other analysis methods, it can be used to model risks holistically for cyber-physical systems in contested cyber operational environments.


Visualizing UAS-collected imagery using augmented reality
Paper 10207-11

Author(s):  Damon M. Conover, U.S. Army Research Lab. (United States), et al.
Conference 10207: Next-Generation Analyst V
Session 3: Human and Information Interaction

One of the areas where augmented reality will have an impact is in the visualization of geo-specific 3D data. 3D data has traditionally been viewed on a 2D screen, which has limited its utility. Augmented reality head-mounted displays make it possible to view 3D data overlaid on the real world. We will describe: 1) how UAS-collected imagery is used to create 3D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3D object at the same time.


Hemispherical focal plane arrays for wide field-of-view imaging
Paper 10209-14

Author(s):  Kyle Renshaw, CREOL, The College of Optics and Photonics, Univ. of Central Florida (United States), et al.
Conference 10209: Image Sensing Technologies: Materials, Devices, Systems, and Applications IV
Session 4: Novel Image Sensing Technologies: Devices I

There are enormous optical advantages to using a curved image sensor in place of a conventional flat focal plane array (FPA), because optical systems intrinsically focus onto a curved focal surface. Here we introduce techniques for fabricating hemispherical focal plane arrays that enable the development of compact, wide-FOV imaging systems. We have developed monolithically integrable flexible interconnects that can be combined with planar silicon FPAs. This process provides fully interconnected, flexible FPAs that conform to a hemispherical surface after release from the rigid wafer form factor.


Autonomous electromechanical system for gas leak odor detection
Paper 10209-33

Author(s):  Javier Andrey Moreno Guzmán, Univ. Tecnológica de Puebla (Mexico), et al.
Conference 10209: Image Sensing Technologies: Materials, Devices, Systems, and Applications IV
Session PWed: Posters-Wednesday

An autonomous system equipped with an ethanol gas sensor is controlled by a microcontroller running an algorithm designed to follow an odor trace according to the concentration measured at each reading point; the sensor arrangement is used as proposed, without compensating for brownouts. This paper presents the results of operating the localization system in different workspaces, focusing primarily on the acquisition of sensor data with a 10-bit analog-to-digital conversion, whose resolution of about 4.8 mV per count compares favorably with the roughly 19 mV resolution of standard commercial solutions. Experimental results and an analytic explanation are presented below.
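The quoted resolution figures follow from the ADC word length, assuming a 5 V reference (the reference voltage is an assumption here, not stated in the abstract): one least-significant bit spans Vref / 2^bits.

```python
# ADC least-significant-bit size in millivolts.
def adc_lsb_mv(vref_v=5.0, bits=10):
    return 1000.0 * vref_v / 2 ** bits

lsb_10bit = adc_lsb_mv(bits=10)  # ~4.88 mV, matching the ~4.8 mV quoted
lsb_8bit = adc_lsb_mv(bits=8)    # ~19.5 mV, matching the ~19 mV "commercial" figure
```

So the "commercial" 19 mV resolution corresponds to an 8-bit conversion of the same 5 V range.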


Image processing based bio-sensing electric system for cancer cell detection
Paper 10209-41

Author(s):  Jenipher D. Gonzalez-Aponte, Univ. del Turabo (United States), et al.
Conference 10209: Image Sensing Technologies: Materials, Devices, Systems, and Applications IV
Session PWed: Posters-Wednesday

Would it be beneficial to have a system that provides preliminary results of whether a patient has cancer? While several techniques have been developed for the detection of cancer, pathological analysis has been the gold standard. However, it may take a significant amount of time to provide definite results, which can sometimes be subjective to the pathologist who performed the analysis. In some cases, the results come back negative after an enormous amount of resources has been spent on the test. In other cases, the result may be positive only after the patient has waited many weeks without starting treatment, leaving the patient with a worse prognosis than if treatment had started earlier. The use of image processing, together with fluorescence probes, can provide the diagnosis to the physician in a few hours or less, enabling early treatment and an improved prognosis. The main purpose of this project is to design and develop a system that identifies cancer cells in an image and quantifies the pixels of the RGB spectrum around the cells using image processing algorithms.


Embry-Riddle Aeronautical University multispectral sensor laboratory: A model for distributed research and education
Paper 10210-25

Author(s):  Sonya A. H. McMullen, Embry-Riddle Aeronautical Univ. (United States), et al.
Conference 10210: Next-Generation Spectroscopic Technologies X
Session 5: Smartphones, Data Fusion and Raman

The miniaturization of unmanned systems, spacecraft, computing, and sensor technologies is opening new opportunities in remote sensing and multi-sensor data fusion for a variety of applications. Embry-Riddle Aeronautical University has developed an advanced sensor and data fusion laboratory to research sensor capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. The lab is unique in combining a traditional campus laboratory with a "virtual" modeling, testing, and teaching capability that reaches beyond the physical confines of the facility, allowing students and faculty located around the globe to model and test sensors and scenarios, process multisensor data sets, and analyze results.


The selectable hyperspectral airborne remote sensing kit (SHARK) used in UAS for precision agriculture
Paper 10213-3

Author(s):  Rick E. Holasek, Corning Inc. (United States), et al.
Conference 10213: Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2017
Session 1: Hyperspectral Sensing and Imaging Sensors I

Corning Incorporated has been developing and manufacturing HSI sensor systems and components for over a decade and a half. Corning has designed and developed unique HSI spectrographs with an unprecedented combination of high performance, low cost, and low size, weight, and power (SWaP). This paper discusses the use of Corning's patented monolithic Offner spectrograph design, the microHSI™, to build a highly compact visNIR HSI turn-key airborne remote sensing payload. This Selectable Hyperspectral Airborne Remote sensing Kit (SHARK) has industry-leading SWaP, enabling its deployment on payload-limited platforms such as small unmanned aerial vehicles (UAVs), including low-cost multi-rotor copters.


A brief review and application of the lamp-plaque method for the calibration of compact hyperspectral imagers
Paper 10213-5

Author(s):  David W. Allen, National Institute of Standards and Technology (United States), et al.
Conference 10213: Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2017
Session 1: Hyperspectral Sensing and Imaging Sensors I

Hyperspectral imagers (also known as imaging spectrometers) are becoming more commonly used for a range of applications. Newer compact designs are facilitating use in environments ranging from the laboratory benchtop to unmanned aerial vehicles (UAVs). In many cases there is a need to compare results from different hyperspectral imagers at different places and times, which requires relating the measurements to an absolute scale. Measurement units of radiance can be related to the signal output of the imager through the spectral radiance responsivity. Of the several methods that can be used to determine the spectral radiance responsivity, the lamp-plaque method provides a simple, easy-to-understand setup. While simple in design, a number of details must be considered in order to reduce significant bias and error. This paper examines the practice and sources of error, and illustrates the application using several commercial off-the-shelf hyperspectral imagers. The exercise yields a calibration with estimated uncertainties specific to each imaging system. Additionally, imager performance can be gauged by examining products derived from the same method.
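A minimal single-band sketch of the lamp-plaque chain (all numeric values below are illustrative assumptions, not NIST figures): lamp irradiance falls off as the inverse square of distance, a Lambertian plaque converts irradiance E to radiance L = Eρ/π, and the radiance responsivity is the imager signal divided by that known radiance.

```python
import math

# Lamp-plaque responsivity chain, in miniature.  The lamp's calibrated
# irradiance at 0.5 m, the working distance, and the plaque reflectance
# are all assumed values for illustration.
def plaque_radiance(irradiance_at_50cm, distance_m, reflectance):
    e = irradiance_at_50cm * (0.5 / distance_m) ** 2  # inverse-square law
    return e * reflectance / math.pi                  # Lambertian plaque

def radiance_responsivity(signal_counts, radiance):
    return signal_counts / radiance                   # counts per unit radiance

rad = plaque_radiance(irradiance_at_50cm=0.2, distance_m=1.0, reflectance=0.99)
resp = radiance_responsivity(5000.0, rad)
```

In practice this is done per wavelength band, and the details the paper examines (lamp alignment, stray light, plaque non-Lambertian behavior) enter as corrections and uncertainty terms on each factor.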


Application of deep learning to UAS based real-time hyperspectral imaging for precision agriculture
Paper 10213-8

Author(s):  ZhiQiang Chen, Univ. of Missouri-Kansas City (United States), et al.
Conference 10213: Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2017
Session 2: Hyperspectral Sensing and Imaging Sensors II

In this paper, we propose a real-time hyperspectral imaging and learning system for agricultural field condition monitoring. The system is built around 'snapshot' hyperspectral cameras, which have only recently become commercially available for micro- to small-UAV applications and provide real-time hyperspectral imagery. Considering the potential scene complexity resulting from very high spatial resolution, a deep learning framework is proposed. The deep learning engine is implemented on NVIDIA's Jetson TX1 platform and is potentially suitable for integration into a light-weight unmanned aerial system. Preliminary validation using real field images will be shown and recommendations will be presented.


Radiometric calibration of an ultra-compact microbolometer thermal imaging module
Paper 10214-44

Author(s):  Joseph A. Shaw, Montana State Univ. (United States), et al.
Conference 10214: Thermosense: Thermal Infrared Applications XXXIX
Session 12: Detectors, Imaging Systems and Calibration I

We report on radiometric calibration of a commercial ultra-compact, low-cost thermal imaging module with methods that were developed to achieve scientific-grade results from low-cost, uncooled IR imagers.


Nanophotonic interferometric immunosensor for label-free and real-time monitoring of chemical contaminants in marine environment
Paper 10215-2

Author(s):  Blanca Chocarro Ruiz, Institut Català de Nanociència i Nanotecnologia (ICN2) (Spain), et al.
Conference 10215: Advanced Environmental, Chemical, and Biological Sensing Technologies XIV
Session 1: Biosensing Systems

In the frame of the BRAAVOO (Biosensors, Reporters and Algal Autonomous Vessels for Ocean Operation) project, we are developing a nanoimmunosensor module for the on-site analysis of sea pollutants based on silicon technology, with a view to implementing a complete lab-on-a-chip platform. The project, funded by the European Union (Seventh Framework Programme), is dedicated to the development of biosensors for real-time monitoring of chemical contaminants of anthropogenic origin in the marine environment and their integration into ships and buoys. One of these pollutants is Irgarol 1051. We are developing a bimodal waveguide interferometer (BiMW) nano-immunosensor for real-time and label-free detection of Irgarol 1051.


A custom multi-modal sensor suite and data analysis pipeline for aerial field phenotyping
Paper 10218-3

Author(s):  Paul Bartlett, Near Earth Autonomy, Inc. (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 1: Field-based Phenotyping with Ground-based and Aerial Sensor Platforms

Our group has developed a custom, multi-modal sensor suite and data analysis pipeline to phenotype crops in the field using unpiloted aircraft systems (UAS). This approach to high-throughput field phenotyping is part of a research initiative intending to markedly accelerate the breeding process for refined energy sorghum varieties. The outcome of the work is a set of commercially available phenotyping technologies, including sensor suites, fully integrated UAS, and data analysis software. Concerted effort is also underway to transition these technologies to farm management users. Streamlined, lower cost sensor packages and intuitive software interfaces will facilitate the transition to these markets.


Automatic mission planning algorithms for aerial collection of imaging-specific tasks
Paper 10218-6

Author(s):  Paul Sponagle, Rochester Institute of Technology (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 2: Control Systems and Artificial Intelligence in Agricultural UAV Applications

The rapid advancement and availability of unmanned aircraft systems has led to many novel exploitation tasks utilizing the unique aerial image data they capture. Algorithms that support structure-from-motion tasks while minimizing occlusions are under development. Autonomous, periodic overflight of calibration panels permits more efficient data collection under varying cloud conditions. Bidirectional reflectance distribution function measurements can be collected without disturbing soil or vegetation in a sensitive area of interest. These novel algorithms will give imaging scientists additional tools to meet future imaging tasks.


Melon yield prediction using small unmanned aerial vehicles
Paper 10218-7

Author(s):  Tiebiao Zhao, Univ. of California, Merced (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 2: Control Systems and Artificial Intelligence in Agricultural UAV Applications

Thanks to the development of camera technologies and small unmanned aerial systems (sUAS), it is now possible to collect aerial images of a field with more flexible revisit, higher resolution, and much lower cost. Furthermore, the performance of object detection based on deeply trained convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, using high-resolution aerial images to count melons in the field and predict the yield. Two CNN-based object detection frameworks, Faster R-CNN and the Single Shot MultiBox Detector (SSD), were applied to melon classification. Our results showed that sUAS plus CNNs were able to generate accurate melon yield predictions in the late harvest season.


Radiometric calibration approach for UAV-based remote sensing
Paper 10218-10

Author(s):  Yeyin Shi, Texas A&M AgriLife Research and Extension Ctr. (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 3: Practical Issues for Commercialization of UAVs in Agriculture


UAV remote sensing for phenotyping drought tolerance in peanut
Paper 10218-11

Author(s):  Maria Balota, Virginia Polytechnic Institute and State Univ. (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 3: Practical Issues for Commercialization of UAVs in Agriculture


Evaluating crop stress using thermal images at multiple spatial resolutions
Paper 10218-12

Author(s):  Gregory Rouze, Texas A&M AgriLife Research and Extension Ctr. (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 3: Practical Issues for Commercialization of UAVs in Agriculture


Estimating plant population with UAV-derived vegetation indices
Paper 10218-15

Author(s):  Joseph Oakes, Virginia Polytechnic Institute and State Univ. (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 4: Aerial and Ground-based Sensing of Critical Agricultural Phenotypes and Conditions


3-D reconstruction optimization using imagery captured by unmanned aerial vehicles
Paper 10218-17

Author(s):  Abby L. Bassie, Geosystems Research Institute (United States), et al.
Conference 10218: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II
Session 4: Aerial and Ground-based Sensing of Critical Agricultural Phenotypes and Conditions

As UAVs become indispensable tools in precision agriculture, it is vitally important that researchers understand how to optimize the performance of UAVs and their associated camera payloads for 3-D reconstruction of surveyed areas. In this study, imagery captured by a Nikon RGB camera mounted on a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes. 3-D point clouds of the field were generated from the images at each altitude, and the accuracies of linear measurements made on reference objects within each point cloud were compared.
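The altitude dependence at the heart of this comparison is captured by the ground sample distance (GSD); a sketch with illustrative camera parameters (not those of the actual Nikon payload):

```python
# Nadir-view ground sample distance vs. flight altitude.  Focal length
# and pixel pitch below are hypothetical example values.
def gsd_cm(altitude_m, focal_mm=24.0, pixel_pitch_um=4.0):
    """GSD in cm/pixel: altitude * pixel pitch / focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

for alt_m in (30, 60, 90, 120):
    print(f"{alt_m} m -> {gsd_cm(alt_m):.2f} cm/px")
```

GSD grows linearly with altitude, so doubling the flight height doubles the smallest resolvable ground feature, directly limiting the accuracy of linear measurements in the resulting point cloud.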


Real-time smoke detection using a moving camera
Paper 10223-2

Author(s):  Ahmet E. Çetin, Bilkent Univ. (Turkey), et al.
Conference 10223: Real-Time Image and Video Processing 2017
Session 1: Real-time Algorithms and Systems

Early detection of a wildfire is extremely important so that it can be extinguished as quickly as possible, before any significant damage occurs [1]. Wildfires that cause major problems usually start in remote areas and can develop into catastrophic events. Drones can fly over uninhabited remote areas on a predetermined GPS-based route and spot wildfires at an early stage. Smoke is clearly visible from long distances in wildfires and forest fires, and ordinary visible-range cameras can detect it from long distances. However, current computer-vision-based smoke detection methods assume that the camera is stationary [1]. In this paper we describe a real-time smoke detection algorithm that can be deployed on mobile platforms. We use color information and the SIFT algorithm to detect smoke regions and match them across consecutive images. By tracking and analyzing the motion of a potential smoke region, we can determine whether it is smoke. The algorithm can run on an Android platform in real time at 4 to 5 fps.


Dual field combination for unmanned video surveillance systems
Paper 10223-11

Author(s):  Louise Sarrabezolles, Institut Franco-Allemand de Recherches de Saint-Louis (France), et al.
Conference 10223: Real-Time Image and Video Processing 2017
Session 3: Real-time Video Processing

A new automatic embedded vision system is proposed for long-duration outdoor surveillance. Inspired by human early vision, it combines information from two cameras: a fixed peripheral camera with low resolution and a mobile foveal sensor with high resolution. Combining a detection process, feature extractors, and a classifier adapted to low-power embedded systems, and factorizing computational primitives across its different functions, drastically reduces the computational cost and therefore the energy consumption of the system. Furthermore, close cooperation and information fusion between the two vision modes improve the quality of the detection and recognition processes.


Important Author Dates

Author Notification (Rescheduled)
13 December 2016

Manuscripts Due
13 March 2017



