Proceedings Volume 10640

Unmanned Systems Technology XX

Robert E. Karlsen, Douglas W. Gage, Charles M. Shoemaker, et al.

Volume Details

Date Published: 23 July 2018
Contents: 7 Sessions, 23 Papers, 17 Presentations
Conference: SPIE Defense + Security 2018
Volume Number: 10640

Table of Contents

  • Front Matter: Volume 10640
  • Perception
  • Special Topics
  • Robotics CTA
  • Navigation
  • Collaborative Robotic Teams: Joint Session with conferences 10640 and 10651
  • Poster Session
Front Matter: Volume 10640
Front Matter: Volume 10640
This PDF file contains the front matter associated with SPIE Proceedings Volume 10640, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Perception
Robust homing with stereovision
Visual homing is a bioinspired approach to robot navigation that can be fast and requires few assumptions. However, visual homing in cluttered and unstructured outdoor environments poses several challenges for methods that were developed primarily for indoor environments. One issue is that a current image captured during homing may be tilted with respect to the home image. A second is that moving through a cluttered scene during homing may allow obstacles to come between the home location and the current location. In this paper, we introduce a robust method that improves a previously developed Homing with Stereo Vision (HSV) method for visual homing. HSV adds stereo information to the image information typically used in homing, resulting in improved performance. The Robust Homing with Stereo Vision (RHSV) algorithm modifies HSV to deal with current images taken at arbitrary pitch and roll values and to handle homing and navigation through occluding obstacles. Results from several trials comparing HSV and RHSV are presented, and the future direction of this work is outlined.
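As an illustration of the kind of tilt correction the abstract describes (not the authors' actual RHSV algorithm), the sketch below de-rotates stereo feature points by an estimated pitch and roll before computing a simple homing vector; the feature arrays, angle estimates, and row-by-row matching are assumptions made only for this example.

    import numpy as np

    def derotate(points_xyz, roll, pitch):
        """Rotate 3D feature points (N x 3, camera frame) into a gravity-aligned
        frame using estimated roll and pitch in radians (sign conventions depend
        on the IMU/camera frame definitions)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
        return points_xyz @ (Ry @ Rx).T

    def homing_vector(home_pts, current_pts, roll, pitch):
        """Average displacement of matched stereo landmarks after the current
        view is levelled; points are assumed matched row-by-row."""
        levelled = derotate(current_pts, roll, pitch)
        return (home_pts - levelled).mean(axis=0)   # rough drive-direction estimate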
Automated, near real-time inspection of commercial sUAS imagery using deep learning
Chris Kawatsu, Ben Purman, Aaron Zhao, et al.
Commercial small Unmanned Aerial Systems (sUAS) have become popular for real-time inspection tasks due to their cost-effectiveness at covering large areas quickly. They can produce vast amounts of high-resolution image data with little user involvement. However, manual review of this information cannot keep pace with data collection rates. For time-sensitive applications, automated tools are required to locate objects of interest, and these tools must operate at very low false alarm rates to avoid overwhelming the user. We approach real-time inspection as a semi-automated problem in which a single user can provide limited feedback to guide object detection algorithms.
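A minimal sketch of the semi-automated idea described above, assuming a generic detector that returns scored detections: user confirmations and rejections nudge the alarm threshold so the review load stays manageable. The detector interface and adjustment step size are illustrative, not the authors' implementation.

    def filter_detections(detections, threshold):
        """Keep only detections whose confidence clears the current threshold."""
        return [d for d in detections if d["score"] >= threshold]

    def update_threshold(threshold, user_feedback, step=0.02):
        """Raise the threshold after rejected alerts (false alarms);
        lower it when the user reports a missed object."""
        for verdict in user_feedback:          # each entry is "reject" or "missed"
            if verdict == "reject":
                threshold = min(0.99, threshold + step)
            elif verdict == "missed":
                threshold = max(0.05, threshold - step)
        return threshold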
Automated data interpretation, tasking, and coordination of UAS imaging (Conference Presentation)
Sandra M. Klute, Evan M. Lally, Christopher Dusold, et al.
As hardware platforms mature and evolve to contain higher compute capacity, Small Unmanned Aerial Systems (sUAS) are increasingly capable of operating as fully-integrated, cooperative inspection systems. A variety of lightweight sensing payloads are emerging for efficient multi-modal data collection. Deep learning algorithms applied to this sensor data significantly reduce the burden on system operators and enable the fusion of data from multiple sources for enhanced decision making. The Air Force Civil Engineer Center (AFCEC) and TORC Robotics are developing a Rapid Airfield Damage Assessment System (RADAS) that uses simultaneous data streams from multiple sUAS and ground sensors for computer-aided condition assessment and planning of airfield repair. Operators, aided by intelligent algorithms, remotely monitor incoming data and use software tools to identify a Minimum Airfield Operating Surface (MAOS). Recent developments by AFCEC and TORC use deep learning algorithms to eliminate the bottleneck of human-in-the-loop interpretation of multiple simultaneous data sources. These advances provide a supervised autonomous workflow for: (a) identification of damage from multiple incoming sUAS video streams, (b) automated tasking of decisions based on that data, and (c) adjustment of decisions based on additional incoming information. Preliminary results demonstrate a significant reduction in airfield assessment time, increased assessment accuracy, and the removal of humans from danger during the inspection process. This work is part of the RADAS program funded by the Air Force Civil Engineer Center (AFCEC).
Special Topics
A translation architecture for the Joint Architecture for Unmanned Systems (JAUS)
JAUS is an open architecture designed to support interoperability between unmanned vehicles, payloads and controllers.1 However, it competes against a plethora of other open architecture technologies and standards. In many cases, there is much to be gained by merging multiple open architecture components within a single system. One such case is with the Navy’s Common Control System (CCS) architecture utilized by the Multi-robot Operator Control Unit version 4 (MOCU4). CCS has primarily focused on Group 3-5 aircraft whereas MOCU4 is focused on ground and maritime unmanned vehicles. To utilize both of these architectures within a single system, a translation architecture called the SAE JAUS Vehicle Interface Service (VIS) has been designed and implemented by the MOCU4 team at the Space and Naval Warfare Systems (SPAWAR) Center Pacific (SSC Pacific). This paper will explore the design considerations and decisions of this VIS, as well as provide details of its implementation. It will also describe briefly how the VIS has been developed and utilized for the following projects: Navy Common Control System (CCS) integration with Large Training Vehicle (LTV), Control Station Human Machine Interface (CaSHMI), and the Universal Tactical Controller (UTC) for the Common Robotic System - Individual (CRS(I)).
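As a rough illustration of what a translation layer like the VIS does (the actual SAE JAUS message set and the MOCU4/CCS interfaces are not reproduced here), the sketch below converts a hypothetical controller-side velocity command into a hypothetical JAUS-style message dictionary, and a pose report back into the internal format; all field names are invented for the example.

    def to_jaus_set_velocity(internal_cmd):
        """Translate an internal {'vx','vy','yaw_rate'} command into a
        JAUS-style message dict (field names are illustrative only)."""
        return {
            "message": "SetVelocityCommand",
            "propulsive_linear_effort_x": internal_cmd["vx"],
            "propulsive_linear_effort_y": internal_cmd["vy"],
            "propulsive_rotational_effort_z": internal_cmd["yaw_rate"],
        }

    def from_jaus_report_pose(jaus_msg):
        """Translate a JAUS-style pose report back into the internal format."""
        return {"lat": jaus_msg["latitude"],
                "lon": jaus_msg["longitude"],
                "heading": jaus_msg["yaw"]}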
MAD-VR: machine learning, analysis, and design in virtual reality
As the modern battlespace continues to evolve, reliance on relatively few, dominant weapon systems is rapidly becoming infeasible. New weapon systems must be able to communicate and coordinate with other actors in the mission arena to achieve warfighter objectives. This is creating an explosion of complexity and data/information processing burden that hampers the warfighter’s ability to effectively operate. This emerging complexity is in turn driving the need for sophisticated autonomous and semi-autonomous systems, as well as adaptive real-time filters and decision-assist mechanisms.

Machine learning, Analysis, and Design in Virtual Reality (MAD-VR) is a tool to facilitate the design, proof-of-concept, and initial testing of algorithms for autonomy, information fusion, and machine learning. Designed as a next-generation front-end for high-speed simulations, it specifically addresses the need for a high-level, system-of-systems environment within which to evaluate the battlespace impact of these critical algorithms. Engineers and technicians will be able to observe the execution of new control systems, sensor data, and decision support systems in a high-fidelity simulation incorporating a diverse breadth of weapons and platforms, all working together to achieve mission success. Additionally, the warfighter will be able to use this tool directly to help with mission planning and optimization, as a means of examining outcomes in a Monte Carlo style of analysis.
DRESH: DRone EnSnaring mesH
David R. Erickson, Matthew Serge, Douglas Forrest
This paper describes findings from the development of novel anti-drone obstacles for stopping, fouling, and trapping Class I Mini and Micro unmanned aerial vehicles by targeting their motors. This work, part of the Defeat Autonomous Systems (DAS) program, investigates counters to unmanned vehicle technology. Preliminary results, which include trapped drones and fouled motors, suggest this is a promising new capability for denying areas to drone incursions. Results indicate there exists a sweet spot in mechanical properties that justifies further investigation into obstacle dynamics, modeling and simulation, materials, and notional system concepts to deliver a novel defensive obstacle capability against drones, expanding from the basic obstacle design.
blindBike: an assistive bike navigation system for low-vision persons
Lynne Grewei, William Overell, Christopher Lagali
blindBike is a system that uses multiple sensors, including a smartphone camera, gyroscope, GPS, and a cadence sensor, to assist people with low vision in riding and navigating a bicycle. We propose that with the assistance of blindBike it may be possible for those with low vision to be mobile at a new level. However, blindBike can also be assistive to those with normal vision. Through the use of today's smartphones, the blindBike app can affordably assist with navigation and road following. This work focuses on the road following and intersection assistance modules.
A game of timing with detection uncertainty
Mobility and terrain are two sides of the same coin: I cannot speak to my mobility unless I describe the terrain's ability to thwart my maneuver. Game theory describes the interactions of rational players who behave strategically. In previous work we described the interactions between a mobility player, who is trying to maximize the chance that he makes it from point A to point B with one chance to refuel, and a terrain player, who is trying to minimize that probability by placing an obstacle somewhere along the path from A to B. In this paper, we add the twist that the mobility player cannot use their resource until they detect the terrain player. This relates to the literature on games of incomplete information and can be thought of as a more realistic model of this interaction. We generalize the game of timing studied in the previous paper to include the possibility that one of the players has an imperfect ability to detect his adversary.
Robotics CTA
Robotics collaborative technology alliance (RCTA) program overview
The RCTA program is an alliance of ARL and a consortium of academic and industry partners. The program was awarded in 2010 and is expected to conclude by early 2019. The program conducts leading-edge research in basic and applied ground robotics technologies, with the overarching goal of moving from tele-operated robots to autonomous robot teammates. The research addresses the US Army's manned-unmanned teaming (MUMT) requirement, particularly for the dismounted team; however, it is also applicable to other MUMT scenarios, including larger platforms and systems. The program's four research focus areas are Perception, Intelligence, Human-Robot Interaction, and Dexterous Manipulation and Unique Mobility. In addition to these four research areas, the alliance regularly integrates research into demonstrable technologies and conducts technology assessments. The four research areas address key technology gaps in achieving autonomous robotic capabilities such as high-speed perception and mobility in rough terrain, situation awareness in unstructured environments, collaborative human-robot mission planning and execution, multimodal human-robot dialogue, and dexterous manipulation in cluttered environments. This paper provides an update on the RCTA program structure and the current research topics.
An experiment to evaluate robotic grasping of occluded objects
Arnon Hurwitz, Marshal Childers, Andrew Dornbush, et al.
In December of 2017, members of the Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) conducted an experiment to evaluate the progress of research on robotic grasping of occluded objects. This experiment used the Robotic Manipulator (RoMan) platform, equipped with an Asus Xtion, to identify an object on a table cluttered with other objects and to grasp and pick up the target object. The identification and grasping were conducted with varying input factor assignments following a formal design of experiments; these factors comprised different sizes of target, varied target orientation, variation in the number and positions of objects occluding the target object from view, and different levels of lighting. The grasping was successful in 18 out of 23 runs (78% success rate) and was conducted within constraints placed on the position and orientation of the RoMan with respect to the table of target objects. The statistical approach of a 'deterministic' design and the use of odds-ratio analysis were applied to the task at hand.
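For readers unfamiliar with odds-ratio analysis, the short example below shows the basic computation on a hypothetical 2 x 2 table (grasp success versus failure under two lighting levels); the counts are invented and are not the experiment's data.

    def odds_ratio(a, b, c, d):
        """a, b: successes/failures under condition 1; c, d: under condition 2."""
        return (a / b) / (c / d)

    # Hypothetical counts only, not results from the RoMan experiment.
    print(odds_ratio(a=10, b=2, c=8, d=3))   # ~1.88: higher odds of success under condition 1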
Modeling and traversal of pliable materials for tracked robot navigation
Camilo Ordonez, Ryan Alicea, Brandon Rothrock, et al.
In order to fully exploit robot motion capabilities in complex environments, robots need to reason about obstacles in a non-binary fashion. In this paper, we focus on the modeling and characterization of pliable materials such as tall vegetation. These materials are of interest because they are pervasive in the real world, requiring the robotic vehicle to determine when to traverse or avoid them. This paper develops and experimentally verifies a template model for vegetation stems. In addition, it presents a methodology to generate predictions of the associated energetic cost incurred by a tracked mobile robot when traversing a vegetation patch of variable density.
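A heavily simplified sketch of the kind of energetic-cost estimate the abstract describes, assuming each stem behaves like a linear spring deflected a fixed distance by the track; the stiffness, deflection, and density values are placeholders, not the paper's template model.

    def patch_traversal_energy(stem_density, swept_area, stiffness, deflection):
        """Energy (J) to push through a vegetation patch:
        stems encountered = density (stems/m^2) * area swept by the track (m^2),
        work per stem ~ 0.5 * k * x^2 for a linear-spring stem model."""
        stems = stem_density * swept_area
        work_per_stem = 0.5 * stiffness * deflection ** 2
        return stems * work_per_stem

    # Placeholder numbers for illustration only.
    print(patch_traversal_energy(stem_density=40.0, swept_area=2.5,
                                 stiffness=150.0, deflection=0.1))   # 75 J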
When does a human replan? Exploring intent-based replanning in multi-objective path planning
Meher T. Shaikh, Michael A. Goodrich
In goal-based tasks such as navigating a robot from location A to location B in a dynamic environment, human intent can mean choosing a specific trade-off between multiple competing objectives. For example, intent can mean finding a path that balances "Go quickly" against "Go stealthily". Given human expectations about how a path balances such trade-offs, the path should match the human's intent throughout its entire execution, even if the environment changes. If the path drifts from the human's intent because the environment changes, then a new robot path needs to be planned, referred to as path replanning.

We discuss three system-initiated triggers (prompts) for path replanning. The objective is to create an interactive replanning system that yields paths that consistently match human intent. The triggers are to replan (a) at regular time intervals, (b) when the current robot path deviates from the user's intent, and (c) when a better path can be obtained from a different homotopy class. Further, we consider one user-generated replanning trigger that allows the user to stop the robot at any time and put it onto a new route. These four trigger variants seek to answer two fundamental questions: When is a replanned path acceptable to a human, and how should a planner involve a human in replanning?
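To make the three system-initiated triggers and the user-generated trigger concrete, here is a small decision sketch; the cost model, deviation metric, and thresholds are stand-ins rather than the authors' formulation.

    def should_replan(elapsed, period, intent_cost, current_cost,
                      best_other_homotopy_cost, user_requested,
                      deviation_tol=0.15):
        """Return the first replanning trigger that fires, or None.
        Costs are scalar path costs under the user's intent weighting."""
        if user_requested:                                    # user-generated trigger
            return "user"
        if elapsed >= period:                                 # (a) periodic trigger
            return "periodic"
        if current_cost > (1 + deviation_tol) * intent_cost:  # (b) drift from intent
            return "intent_deviation"
        if best_other_homotopy_cost < current_cost:           # (c) better homotopy class
            return "better_homotopy"
        return None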
Parallel approach to motion planning in uncertain environments
Mario Y. Harper, Camilo Ordonez, Emmanuel G. Collins, et al.
Real-world motion planning often suffers from the need to replan during execution of the trajectory. Replanning can be triggered when the robot fails to properly track the trajectory or when new sensory information invalidates the planned trajectory. Particularly in the case of many occluded obstacles or in unstructured terrain, replanning is a frequent occurrence. Developing methods that allow robots to replan efficiently provides greater operation time and can help ensure mission success. This paper presents a novel approach that updates the heuristic weights of a sampling-based A* planning algorithm. The approach utilizes parallel instances of the planner to quickly search through multiple heuristic weights within the allotted replanning time. These weights are then employed when replanning is triggered to speed up computation. The concept is tested on the simulated quadrupedal robot LLAMA with realistic constraints imposed on computation time.
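The sketch below illustrates the general idea of searching several heuristic weights in parallel and keeping the cheapest resulting path; it uses a plain grid A* and Python threads for brevity, not the paper's sampling-based planner, the actual replanning time budget, or the LLAMA constraints.

    import heapq
    from concurrent.futures import ThreadPoolExecutor

    def weighted_astar(blocked, size, start, goal, w):
        """Grid A* with heuristic inflation weight w; returns (cost, path) or None."""
        def h(p):  # Manhattan-distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(w * h(start), 0, start, [start])]
        seen = {}
        while open_set:
            f, g, node, path = heapq.heappop(open_set)
            if node == goal:
                return g, path
            if node in seen and seen[node] <= g:
                continue
            seen[node] = g
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                    heapq.heappush(open_set, (g + 1 + w * h(nxt), g + 1, nxt, path + [nxt]))
        return None

    def parallel_replan(blocked, size, start, goal, weights=(1.0, 1.5, 2.5, 4.0)):
        """Run one weighted A* instance per heuristic weight; keep the best path found."""
        with ThreadPoolExecutor(max_workers=len(weights)) as pool:
            results = list(pool.map(
                lambda w: weighted_astar(blocked, size, start, goal, w), weights))
        return min((r for r in results if r), default=None)

    print(parallel_replan({(1, 1), (2, 1)}, size=5, start=(0, 0), goal=(4, 4)))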
Navigation
Brain emotional learning-based intelligent path planning and coordination control of networked unmanned autonomous systems (Conference Presentation)
In this paper, intelligent path planning and coordination control of Networked Unmanned Autonomous Systems (NUAS) in dynamic environments is presented. A biologically inspired approach based on a computational model of emotional learning in the mammalian limbic system is employed. The methodology, known as Brain Emotional Learning (BEL), is implemented in this application for the first time. The multi-objective properties and learning capabilities that BEL adds to the path planning and coordination co-design of NUAS are very useful, especially when dealing with challenges caused by dynamic environments with moving obstacles. Furthermore, the proposed method is promising for real-time applications due to its low computational complexity. Numerical results of the BEL-based path planner and intelligent controller for NUAS demonstrate the effectiveness of the proposed approach. The main contribution of this paper is to utilize the computational model of emotional learning in the mammalian brain, i.e., BEL, to develop a novel path planning and intelligent control method for practical real-time NUAS. To the best of the authors' knowledge, this is the first time that BEL has been implemented for intelligent path planning and coordination control of NUAS. The learning capabilities added by the proposed approach enhance the overall path planning strategy, which is especially useful when dealing with dynamic and uncertain environments with unpredictable and unknown moving obstacles.
Image-aided inertial navigation for an Octocopter
S. Baheerathan, O. K. Hagen
A typical unmanned aerial system combines an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS) for navigation. When the GNSS signal is unavailable, the INS errors grow over time and eventually become unacceptable as a navigation solution. Here we investigate an image-aided inertial navigation system to cope with GNSS failure. The system is based on tightly integrating inertial sensor data with the positions of image feature points that correspond to landmarks over an image sequence. The aim of this experiment is to study the challenges and the performance of the image-aided inertial navigation system in realistic flight with an Octocopter. The system demonstrated the ability to cope with GNSS failure by drastically reducing the position drift compared to that of the free-inertial solution.
UAV vision-based localization techniques using high-altitude images and barometric altimeter
Position information of unmanned aerial vehicles (UAVs) and objects is important for inspections conducted with UAVs. The accuracy with which changes in the object being inspected are detected depends on the accuracy of the past object data being compared; therefore, accurate position recording is important. A global positioning system (GPS) is commonly used as a tool for estimating position, but its accuracy is sometimes insufficient. Therefore, other methods have been proposed, such as visual simultaneous localization and mapping (visual SLAM), which uses monocular camera data to reconstruct a 3D model of a scene and simultaneously estimate the trajectory of the camera using only photos or videos.

In visual SLAM, the UAV position is estimated on the basis of stereo vision (localization), and 3D points are mapped on the basis of the estimated UAV position (mapping). Processing alternates sequentially between localization and mapping. Finally, all the UAV positions are estimated and an integrated 3D map is created. Each iteration of the sequential processing introduces estimation error, and in the next iteration the previously estimated position is used as the base position regardless of this error. As a result, error accumulates until the UAV returns to a location it has passed before. Our research aims to mitigate this problem. We propose two new methods.

(1) Accumulated error caused by local matching with sequential low-altitude images (i.e. close-up photos) is corrected with global-matching between low- and high-altitude images. To perform global-matching that is robust against error, we implemented a method wherein the expected matching areas are narrowed down on the basis of UAV position and barometric altimeter measurements.

(2) Under the assumption that absolute coordinates include axis-rotation error, we propose an error-reduction method that minimizes the difference in the UAV's altitude between the visual SLAM results and the sensor (barometer and thermometer) results.

The proposed methods reduced accumulated error by using high-altitude images and sensors. Our methods improve the accuracy of UAV- and object-position estimation.
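As a rough sketch of method (1) above, restricting global matching to a window predicted from the UAV's estimated position and barometric altitude, the code below crops a search window from a high-altitude reference image; the nadir-camera geometry, ground sample distance, and array shapes are assumptions for the example, not the authors' implementation.

    import numpy as np

    def predicted_window(ref_image, uav_xy_m, altitude_m, ref_gsd_m,
                         fov_deg=60.0, margin=1.5):
        """Crop the region of a high-altitude reference image where the
        low-altitude view is expected, assuming a nadir-pointing camera.
        uav_xy_m: estimated UAV ground position in the reference image frame (m).
        ref_gsd_m: ground sample distance of the reference image (m/pixel)."""
        # Ground footprint half-width of the low-altitude view, enlarged by a
        # margin to absorb position and barometric-altitude error.
        half_width_m = margin * altitude_m * np.tan(np.radians(fov_deg) / 2.0)
        cx, cy = int(uav_xy_m[0] / ref_gsd_m), int(uav_xy_m[1] / ref_gsd_m)
        half_px = int(half_width_m / ref_gsd_m)
        h, w = ref_image.shape[:2]
        y0, y1 = max(0, cy - half_px), min(h, cy + half_px)
        x0, x1 = max(0, cx - half_px), min(w, cx + half_px)
        return ref_image[y0:y1, x0:x1], (x0, y0)   # window and its pixel offset

    # Feature matching against the low-altitude frame would then be run only
    # inside the returned window rather than over the whole reference image.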
Collaborative Robotic Teams: Joint Session with conferences 10640 and 10651
Removing the bottleneck: utilizing autonomy to manage multiple UAS sensors from inside a cockpit
Thomas J. Alicia, Grant S. Taylor, Terry S. Turpin, et al.
The U.S. Army Aviation Development Directorate, in collaboration with United Technologies Research Center and the University of California, Santa Barbara, has developed a system for controlling multiple unmanned aerial systems (UAS) from a manned helicopter cockpit. Similar manned-unmanned teaming (MUM-T) capabilities have been successfully fielded in the AH-64E attack helicopter, with the Copilot/Gunner (CPG) managing one UAS; however, managing multiple UAS in the same manner would result in a cognitive processing bottleneck for the CPG. Removing this bottleneck requires implementation of autonomous behaviors and human-centered design principles to avoid detracting from the CPG's primary mission. This research evaluates these concepts with respect to multi-UAS MUM-T performance. Sixteen U.S. Army aviators with MUM-T experience participated in the experiment. The first phase assessed the performance of a CPG managing multiple UAS simultaneously in a fixed-base MUM-T simulator featuring touchscreen displays, simulated aided target recognition, and task-level delegation of control (DelCon). The second phase iteratively improved the DelCon capability and added an Attention Allocation Aid (AAA) in the form of real-time gaze tracking feedback. The research demonstrated that a single crewmember can manage at least three UAS assets while executing complex multi-UAS MUM-T tactical missions. The DelCon capability allowed participants to perform a subset of mission tasks more efficiently. Furthermore, subjective ratings from the participants indicated a willingness to accept the AAA and DelCon systems. Overall, this research demonstrates the potential of utilizing automation and human-centered design principles to overcome cognitive bottlenecks and achieve greater system efficiency.
Real-time inspection of 3D features using sUAS with low-cost sensor suites
Ben Purman, Chris Kawatsu, Aaron Zhao, et al.
Recently, commercial Small Unmanned Aerial Systems (sUAS) have become very popular for real-time inspection tasks due to their cost-effectiveness at covering large areas quickly. Within these tasks, many objects of interest are best characterized by their 3D geometry. This is particularly true when considering the false alarm rates associated with automated analysis of features with irregular appearance but well-characterized geometry. However, sUAS and low-cost sensors present challenges for these tasks due to limitations in payload and sensor quality. We examine the effectiveness of multi-view stereo and commercial LIDAR in the domain of rapid airfield damage assessment.

On-board cameras that can capture high-resolution video and still images are ubiquitous in the commercial sUAS market. Multi-view stereo provides an approach that extracts dense 3D points for a scene, creating an appealing solution that requires no additional hardware for most sUAS. Recent advances in multi-view stereo approach real-time processing rates, and provide excellent stability. Also, commercial LIDAR prices have dropped dramatically in recent years, along with the size, weight, and power (SWaP) of these devices. However, typical applications of airborne LIDAR require a highly tuned motion measurement suite. We examine the processing and performance trade-offs associated with each of these approaches in the context of rapid airfield damage assessment. We examine flight time, minimum object size, processing time, SWaP, and detection and false alarm rates over a test area of approximately 35,000 square meters.
Benchmarking a LIDAR obstacle perception system for aircraft autonomy
Adam Stambler, Hugh Cover, Kyle Strabala
The limits of an unmanned aerial vehicle's (UAV) obstacle detection system place fundamental limits on the UAV's ability to fly safely. Any certification of aviation-grade autonomy will require benchmarking of the obstacle perception sub-system and its effect on UAV performance. Consequently, as Near Earth Autonomy has built a state-of-the-art lidar-based obstacle perception system, it has also been developing benchmarks and performing flight tests to understand how the theoretical capabilities of its perception suite translate into operational limits on airframes that use it.

This paper analyses these obstacle perception guarantees through the lens of flight testing Near Earth Autonomy's m4 perception suite. The m4 perception suite uses a scanning, nodding lidar to enable safe autonomous takeoff, flight, and landing. It was tested on a UH-1 helicopter as part of the Office of Naval Research's (ONR) Autonomous Aerial Cargo/Utility System (AACUS) program. The m4 perception suite enables safe high-altitude cruise flight by perceiving all large obstacles within an 800 meter range of the helicopter. As the helicopter nears the ground, the perceptual guarantees required for cruise flight speeds are violated by the smallest and most difficult obstacles: wires. The perception suite still enables safe flight by using specialized algorithms to detect wires up to 400 meters away while travelling at 30 m/s. Through over 80 flights at 8 different locations, we test the obstacle perception assumptions, observe how the assumptions change, and examine how m4's capabilities impact fully autonomous helicopter performance.
Cooperative cognitive electronic warfare UAV game modeling for frequency hopping radar
Mark Rahmes, Dave Chester, Rich Clouse, et al.
In modern warfare concepts, the use of wireless communications and network-centric topologies with unmanned aerial vehicles (UAVs) creates an opportunity to combine the familiar concepts of wireless beamforming in opportunistic random arrays and swarm UAVs. Similar in concept to the collaborative beamforming used in ground-based randomly distributed array systems, our novel approach improves wireless beamforming performance by leveraging cooperative location and function knowledge. This enables the capabilities of individual UAVs to be enhanced, using swarming and cooperative beamforming techniques, for more-effective support of complex radar jamming and deception missions. In addition, a dedicated System Oversight function can be used to optimize the number of beamforming UAVs required to jam a given target and manage deception assets.
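To make the collaborative beamforming idea concrete, the short sketch below computes the array factor of a set of randomly placed UAV transmitters co-phased toward a chosen azimuth; the positions, frequency, and flat 2-D geometry are illustrative assumptions, not the paper's system.

    import numpy as np

    def array_factor(positions_m, steer_az_rad, freq_hz, az_grid_rad):
        """Array factor magnitude of isotropic elements at given 2-D positions,
        co-phased toward steer_az_rad (far-field, narrowband assumption)."""
        k = 2 * np.pi * freq_hz / 3e8                      # wavenumber
        u0 = np.array([np.cos(steer_az_rad), np.sin(steer_az_rad)])
        af = []
        for az in az_grid_rad:
            u = np.array([np.cos(az), np.sin(az)])
            phases = k * positions_m @ (u - u0)            # residual phase per element
            af.append(abs(np.exp(1j * phases).sum()))
        return np.array(af)

    rng = np.random.default_rng(0)
    pos = rng.uniform(-25, 25, size=(8, 2))                # 8 UAVs in a 50 m square
    az = np.linspace(-np.pi, np.pi, 721)
    steer = az[450]                                        # steer toward a grid angle
    af = array_factor(pos, steer_az_rad=steer, freq_hz=2.4e9, az_grid_rad=az)
    print(np.isclose(az[af.argmax()], steer))              # True: coherent gain toward the steered azimuth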
Poster Session
Automatic voice control system for UAV-based accessories
Filip Rezac, Jakub Safarik, Erik Gresak, et al.
This article deals with a system for voice control of UAV (unmanned aerial vehicle) accessories using a mobile device and an advanced communication platform. The paper provides an overview of recent projects in the field of voice-controlled drones and explains the applied approach to automatic speech recognition using hidden Markov models. The authors also describe the conversion of speech commands into instructions for UAV control and the steps needed for practical testing and optimization of the whole system. The achieved results and conclusions are given in the final chapter of the article, in which the authors share the experience gained during the experimental development.
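The speech recognizer itself (HMM-based) is not reproduced here; the sketch below only illustrates the command-conversion step the abstract mentions, mapping recognized phrases to hypothetical accessory instructions.

    # Hypothetical phrase-to-instruction table; real command sets depend on the accessory.
    COMMANDS = {
        "camera on": {"device": "camera", "action": "power", "value": 1},
        "camera off": {"device": "camera", "action": "power", "value": 0},
        "take photo": {"device": "camera", "action": "capture"},
        "drop payload": {"device": "release", "action": "open"},
    }

    def phrase_to_instruction(recognized_text):
        """Normalize recognizer output and look up the matching instruction."""
        key = " ".join(recognized_text.lower().split())
        return COMMANDS.get(key)   # None means 'command not understood'

    print(phrase_to_instruction("Take  Photo"))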
Enabling intelligence with temporal world models
Philip R. Osteen, Jason L. Owens, Robert St. Amant, et al.
A system for storing knowledge along with the ability to query it, which together we call a world model, provides a central source of relevant information to the various processes that constitute an autonomous agent. We outline the development of a world model that supports ontologies to describe the world, and provides interfaces for processes to populate and query the system, including queries over time. The world model is implemented as a tuple store with structural subsumption query support. We evaluate its effectiveness on an embodied system, and show that it can answer spatiotemporal queries about objects observed in the environment.
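A minimal sketch of a timestamped tuple store of the kind described above, supporting queries for facts valid at a given time; the schema and API are illustrative and much simpler than the paper's ontology-backed system.

    from dataclasses import dataclass, field

    @dataclass
    class TemporalTupleStore:
        """Stores (subject, predicate, object) facts with validity intervals."""
        facts: list = field(default_factory=list)

        def assert_fact(self, s, p, o, t_start, t_end=float("inf")):
            self.facts.append((s, p, o, t_start, t_end))

        def query(self, s=None, p=None, o=None, at_time=None):
            """Return facts matching the pattern (None = wildcard),
            optionally restricted to those valid at 'at_time'."""
            out = []
            for fs, fp, fo, t0, t1 in self.facts:
                if s is not None and fs != s:
                    continue
                if p is not None and fp != p:
                    continue
                if o is not None and fo != o:
                    continue
                if at_time is not None and not (t0 <= at_time <= t1):
                    continue
                out.append((fs, fp, fo, t0, t1))
            return out

    wm = TemporalTupleStore()
    wm.assert_fact("cone_1", "located_at", (3.2, 1.1), t_start=10.0, t_end=42.0)
    print(wm.query(p="located_at", at_time=20.0))   # cone_1 was observed there at t=20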
Stopped random walks and control of uncertain systems
In the design of control systems affected by uncertain parameters, a primary goal is to ensure that a controller designed based on nominal parameter values will perform satisfactorily in the presence of uncertainties. Adaptive randomized algorithms have been proposed in the literature to overcome the issues of conservatism and computational complexity, which grows exponentially with the dimension of the uncertainty. In this paper, we demonstrate that such adaptive randomized algorithms are inherently associated with stopped random walks. We develop a unified theory of stopped random walks that has the potential to yield better decision and control strategies for uncertain systems.
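For readers unfamiliar with the terminology, a standard formulation of this connection (a hedged illustration, not the paper's specific construction): if each random sample drawn by an adaptive randomized algorithm contributes an increment X_i and the algorithm's termination rule defines a stopping time N, then
\[
S_n = \sum_{i=1}^{n} X_i, \qquad
N = \inf\{\, n \ge 1 : \text{the termination criterion holds after } n \text{ samples} \,\},
\]
where the \(X_i\) are i.i.d. (for instance, Bernoulli indicators of whether a sampled uncertainty satisfies the performance specification). The stopped random walk is \(S_N\), and when \(\mathbb{E}[N] < \infty\) and \(\mathbb{E}|X_1| < \infty\), Wald's identity gives
\[
\mathbb{E}[S_N] = \mathbb{E}[N]\,\mathbb{E}[X_1],
\]
which links the expected number of samples the algorithm draws to the expected count of satisfying samples.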
Confidence regions with applications to sensing and control
High volumes of data are becoming increasingly common in modern sensing and control systems. In this paper, we propose new techniques for constructing confidence regions based on concentration inequalities. Such confidence regions can be used to represent large volumes of high-dimensional data. Moreover, they can be used to analyze the performance of systems under uncertainty, especially the estimation of the average overshoot, rise time, and settling time, which are critical specifications of a control system.
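As a simple example of building a confidence region from a concentration inequality (illustrative only; the paper's constructions may differ), Hoeffding's inequality for i.i.d. samples bounded in \([a, b]\) gives
\[
\Pr\!\left(\left|\bar{X}_n - \mu\right| \ge \varepsilon\right) \le 2\exp\!\left(-\frac{2n\varepsilon^2}{(b-a)^2}\right),
\]
so choosing \(\varepsilon = (b-a)\sqrt{\tfrac{1}{2n}\ln\tfrac{2}{\delta}}\) makes \([\bar{X}_n - \varepsilon,\ \bar{X}_n + \varepsilon]\) a confidence region for \(\mu\) with coverage at least \(1-\delta\). For \(d\)-dimensional data the same bound can be applied coordinate-wise with \(\delta/d\) per coordinate (a union bound), yielding an axis-aligned box; quantities such as average overshoot, rise time, and settling time estimated from repeated simulations can be bracketed in the same way.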