Fusion of inertial, optical flow, and airspeed measurements for UAV navigation in GPS-denied environments
Author(s):
Andrey Soloviev;
Adam J. Rutkowski
This paper describes a data fusion approach developed for the navigation of autonomous unmanned aerial vehicles
(UAVs) in applications where Global Positioning System (GPS) signals are denied. Example scenarios
include navigation under interference and jamming and urban navigation missions. The system architecture is
biologically inspired and exploits measurements that flying insects use for self-localization. The
data fusion algorithm implements a Kalman filter mechanization that fuses INS data (position, velocity, and attitude),
optical flow data from a monocular downward-looking vision system (scaled body-frame vehicle velocity components),
and compass measurements (azimuth angle). The Kalman filter measurement observables are formulated in a complementary
form, i.e., as differences between optical flow/compass measurements and INS states projected into the
measurement domain. The filter estimates inertial error states and the error in flight height. We present the navigation
solution architecture and demonstrate its feasibility using simulations and experiments with actual data. We also compare our
results to a data fusion algorithm that fuses airspeed and optical flow measurements.
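As an illustrative aside (not the authors' code), the complementary measurement formulation described above can be sketched in a few lines: the observable handed to the Kalman filter is the difference between what the camera reports and what the INS predicts the camera should report. Frame conventions and variable names here are assumptions.

```python
import numpy as np

def optical_flow_residual(v_nav_ins, C_nav_to_body, flow_scaled, height_est):
    """Complementary-form observable: optical-flow measurement (scaled
    body-frame velocity from a downward-looking camera) minus the INS
    velocity projected into the same measurement domain.
    """
    # Project the INS state into the measurement domain: rotate the
    # nav-frame velocity into the body frame, then apply the 1/height
    # scaling inherent to monocular optical flow over flat ground.
    v_body_ins = C_nav_to_body @ v_nav_ins
    predicted_flow = v_body_ins / height_est
    # The filter processes this difference, so it estimates error states
    # (INS errors and flight-height error) rather than full states.
    return flow_scaled - predicted_flow
```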
Integrated long-range UAV/UGV collaborative target tracking
Author(s):
Mark B. Moseley;
Benjamin P. Grocholsky;
Carol Cheung;
Sanjiv Singh
Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain
sensing and increase opportunities for improving line of sight communications. While numerous
military missions would benefit from coordinated UAV-UGV operations, foundational capabilities
that integrate stove-piped tactical systems and share available sensor data are required and not yet
available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially
SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative
capabilities for surveillance, targeting, and improved communications based on PackBot UGV and
Raven UAV platforms. We integrate newly available technologies into computational, vision, and
communications payloads and develop sensing algorithms to support vision-based target tracking.
We first simulated, and then deployed on real tactical platforms, an implementation of Decentralized
Data Fusion, a novel technique for fusing track estimates from the PackBot and Raven platforms for a
moving target in an open environment. In addition, integration of AeroVironment's Digital
Data Link onto both the air and ground platforms has extended the communications range at which the
PackBot can be operated and increased video and data throughput. The system is brought
together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides
simultaneous waypoint navigation and traditional teleoperation. We also present several recent
capability accomplishments toward PackBot-Raven coordinated operations, including single OCU
display design and operation, early target track results, and Digital Data Link integration efforts, as
well as our near-term capability goals.
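Decentralized Data Fusion is commonly implemented in the information (inverse-covariance) form, where fusing two Gaussian track estimates reduces to adding their information contributions. The sketch below illustrates that general idea only; it is not the project's implementation, and a full DDF node would also remove common information via a channel filter to avoid double counting.

```python
import numpy as np

def fuse_tracks(x_a, P_a, x_b, P_b):
    """Fuse two Gaussian track estimates (e.g., one from the PackBot and
    one from the Raven) in information form, assuming independent errors.
    Y = P^-1 is the information matrix; y = P^-1 x the information vector.
    """
    Y_a, Y_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
    y_a, y_b = Y_a @ x_a, Y_b @ x_b
    Y_f = Y_a + Y_b              # information is additive
    P_f = np.linalg.inv(Y_f)     # fused covariance (smaller than either)
    x_f = P_f @ (y_a + y_b)      # fused target state
    return x_f, P_f
```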
PRISTA UAVs: from troop companion to troop replacement
Author(s):
Jon Maynell
The role of ground-based troops continues to expand as the nature of military missions and the focus of warfare
change. Localized conflicts waged in restricted areas have replaced large mechanized conflicts in open battle spaces.
Extremely close range surveillance activities, often performed inside caves, buildings, or other structures, facilitate
strikes that must be made with surgical precision. This type of proximate reconnaissance is a critical part of the new
battlefront. To increase operational effectiveness and minimize collateral damage and loss of life among troops and noncombatants,
there is a continuing effort within the military to improve PRISTA (proximate reconnaissance, intelligence,
surveillance, and target acquisition) capabilities. As the vast majority of inherently hazardous PRISTA missions are
presently carried out by humans, there is an urgent need for a robotic tool which will enhance capabilities and reduce
risk of injury. Ideally, this tool could eventually mature to perform the PRISTA missions on its own, replacing human
troops in the field. The best candidate for this assignment within the field of currently available robotic assets would be
a hover-capable UAV (unmanned aerial vehicle). There are no other workable near term options for a robotic asset that
can go almost everywhere that humans can go, and do almost everything that humans can do.
Determining the position of runways from UAV video
Author(s):
Richard Warren;
Amber Fischer
Most UAVs use a GPS-based auto-landing system. However, GPS systems can fail, either through natural interference or
deliberate jamming. To safely operate in the national airspace, UAVs must include a backup auto-landing system. At 21st
Century Systems, Inc., we are developing a vision-based landing system capable of replacing GPS when GPS fails.
Existing structure-from-motion techniques operate on two frames of video. These techniques find a collection of salient
features in each frame. They correctly match the features between the two frames and then use epipolar geometry to
calculate distances to each feature. Unfortunately, these techniques are too computationally complex to meet our real-time
requirements.
Instead, we have developed two closed-form solutions that provide real-time calculation of the runway's relative
position from a single frame of video. Our first approach calculates the distance and orientation based on rectangular
features whose size and position are known. Precision runways have many standardized rectangular markings, providing
the opportunity to create multiple rectangular templates. In our approach, we use advanced image processing to identify
the feature points of these templates, and then calculate the distance to each template, combining the results across
multiple templates to reduce the effects of noise. The second approach incorporates additional pose information directly
from the UAV's internal compass and IMU. This both reduces the effect of noise from our image processing, and allows
us to calculate the UAV's pose relative to the runway from an arbitrary set of features. We are no longer limited to
rectangle shaped templates.
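The closed-form solutions themselves are not reproduced in the abstract. As a hedged illustration of the underlying idea, recovering camera pose from a runway marking of known size can be cast as a planar perspective-n-point problem; the corner coordinates, pixel locations, and camera matrix below are placeholder assumptions.

```python
import numpy as np
import cv2

# A rectangular runway marking of known size (meters) lying in the
# Z = 0 ground plane; dimensions are illustrative only.
w, l = 1.8, 30.0
object_pts = np.array([[0, 0, 0], [w, 0, 0], [w, l, 0], [0, l, 0]],
                      dtype=np.float64)

# Corner pixels located by the image-processing front end (placeholder
# values) and an assumed pinhole camera matrix.
image_pts = np.array([[412, 512], [430, 514], [452, 310], [435, 308]],
                     dtype=np.float64)
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    # tvec is the template origin in camera coordinates, so its norm is
    # the slant distance from the camera to that runway marking.
    print("distance to template: %.1f m" % np.linalg.norm(tvec))
```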
Complexity of robotic sensor networks
Author(s):
Adam Mustapha;
Harpreet Singh;
Arati M. Dixit;
Kassem Saab;
Grant R. Gerhart
With the increasing need for unmanned ground vehicles in combat applications, the collaboration and
coordination of these vehicles have become important design considerations. Both collaboration and
coordination require a large number of sensors, and these sensors form a network. The complexity of such
a network is an important consideration at the design stage. The objective of this paper is to give a new definition of
complexity that can be used for the design and implementation of sensor networks. Algorithms for predicting
the complexity of a sensor network are proposed, and an implementation of the proposed algorithms is given.
Developing a UAV-based rapid mapping system for emergency response
Author(s):
Kyoungah Choi;
Impyeong Lee;
Juseok Hong;
Taewan Oh;
Sung Woong Shin
As disasters and accidents due to various natural or man-made causes increase, the demand for rapid responses to
emergency situations has also been growing. These emergency responses need to be not only more intelligent
but also more customized to the individual site to manage the emergency situation more efficiently. More effective
counter-measures in such situations will be established only if more accurate and prompt spatial information about the
changing areas due to the emergency are available. This information can be rapidly extracted from the airborne sensory
data acquired by a UAV-based rapid mapping system. This paper introduces a Korean national project to develop a
UAV-based rapid mapping system. The overall budget is about 6 million US dollars and the period is about four years.
The goal of this project is to develop a light and flexible system at low cost to perform rapid mapping for emergency
responses. The system consists of two main parts, an aerial part and a ground part. The aerial part includes a small UAV
platform equipped with sensors (GPS/IMU/camera/laser scanner) and supporting modules for sensor
integration, data transmission to the ground, data storage, time synchronization, and sensor stabilization. The ground
part includes three sub-systems with appropriate software, which are a control/receiving/archiving subsystem, a data georeferencing
subsystem, and a spatial information generation subsystem. As the project is at its middle stage, we
present a brief introduction to the overall project and the design of the aerial system together with its verification results.
The DARPA LANdroids program
Author(s):
Mark McClure;
Daniel R. Corbett;
Douglas W. Gage
The goal of the DARPA LANdroids program is to enhance tactical communications in urban environments by
developing inexpensive pocket-sized intelligent autonomous robotic radio relay nodes. LANdroids will move to
establish and maintain mesh networks that support voice and data traffic between dismounted warfighters and higher
command. Through autonomous movement and intelligent control algorithms, LANdroids will mitigate the serious
communications problems inherent in urban settings, e.g., relaying signals into shadows and making small adjustments
to reduce multi-path effects. This paper provides an overview of the LANdroids program and describes the
progress made during Phase I, including the results of the early 2009 end-of-phase testing program.
Biologically inspired collision avoidance system for unmanned vehicles
Author(s):
Fernando E. Ortiz;
Brett Graham;
Kyle Spagnoli;
Eric J. Kelmelis
In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop
a Field Programmable Gate Array (FPGA)-based embedded computer inspired by the brains of small vertebrates (fish).
The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators.
The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses
that convey spatial information, including vision, audition, touch, and lateral-line (water current sensing in fish).
Unfortunately, computational complexity makes these models too slow for use in real-time applications. These
simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target
platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers
based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power,
low power consumption and small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to
implement massively-parallel computational architectures, which can be leveraged to closely emulate biological
systems. By combining UD's brain-modeling algorithms with the power of FPGAs, this computer enables autonomous
navigation in complex environments, and further types of onboard neural processing in future applications.
Implementation of a piezoelectrically actuated self-contained quadruped robot
Author(s):
Thanhtam Ho;
Sangyoon Lee
In this paper we present the development of a mesoscale self-contained quadruped mobile robot that employs two
piezoelectric actuators for bounding-gait locomotion, i.e., the two rear legs share the same movement, as do the two front
legs. The actuator, named LIPCA (LIghtweight Piezoceramic Composite curved Actuator), is a piezocomposite
actuator in which a PZT layer is sandwiched between composite layers of carbon/epoxy and glass/epoxy to
amplify the displacement. A biomimetic concept is applied to the design of the robot in a simplified way, such that each
leg of the robot has only one degree of freedom. Considering that LIPCA requires a high input voltage and possesses
capacitive characteristics, a small power supply circuit using PICO chips was designed to make the robot self-contained.
The prototype, which weighs 125 grams and is 120 mm long, can locomote with the
bounding gait. Experiments showed that the robot can locomote at about 50 mm/sec with the circuit on board, with an
operation time of about 5 minutes, which can be considered meaningful progress toward the goal of building an
autonomous legged robot driven by piezoelectric actuators.
Advancing manufacturing research through competitions
Author(s):
Stephen Balakirsky;
Raj Madhavan
Competitions provide a technique for building interest and collaboration in targeted research areas. This paper will
present a new competition that aims to increase collaboration amongst Universities, automation end-users, and
automation manufacturers through a virtual competition. The virtual nature of the competition allows for reduced
infrastructure requirements while maintaining realism in both the robotic equipment deployed and the scenarios. Details
of the virtual environment as well as the competition's objectives, rules, and scoring metrics will be presented.
Robust formation control of multi-robot systems subject to interconnection time-delays using minimum dynamic communication
Author(s):
Junjie Zhang;
Suhada Jayasuriya
In this research, a Minimum Spanning Tree-based local communication algorithm is first employed to reduce data
propagation; advantageously, it always preserves network connectivity and is favorable to practical implementation.
Second, the desired rigid formation pattern is achieved by utilizing the graph Laplacian and feedback control
theory. Emphasis is then placed on the influence of time delays on the acquired formation when interconnection
delays occur in certain information flow channels while robots communicate with spatially separated
neighbors. A robust stabilization scheme is discussed that largely improves or even fully recovers the degraded formation
pattern. Simulations verify the effectiveness of the proposed approaches to formation control.
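For readers unfamiliar with graph-Laplacian formation feedback, the following minimal single-integrator sketch (names and gains are assumptions, not the paper's controller) shows how a Laplacian built from the communication graph drives robots toward a rigid pattern.

```python
import numpy as np

def formation_step(x, offsets, A, gain=1.0, dt=0.05):
    """One step of Laplacian formation feedback for single-integrator
    robots.

    x       : (N, 2) current robot positions
    offsets : (N, 2) desired positions in the formation pattern
    A       : (N, N) adjacency matrix of the (e.g., MST-based) graph
    """
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    # u = -L(x - offsets): consensus on the formation error drives every
    # robot's error to a common value, which yields the rigid pattern
    # (up to a translation) as long as the graph stays connected.
    u = -gain * L @ (x - offsets)
    return x + dt * u
```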
Preliminary results in force-guided assembly for teams of heterogeneous robots
Author(s):
Juan Rojas;
R. A. Peters II
The missions to the Moon and to Mars currently being planned by NASA require the advanced deployment of
robots to prepare sites for human life support prior to the arrival of astronauts. Part of the robot's work will be
the assembly of modular structures such as solar arrays, radiators, antennas, propellant tanks, and habitation
modules. The construction will require teams of robots to work cooperatively and with a certain degree of
independence. Such systems are complex and require human intervention in the form of teleoperation to attend to
unexpected contingencies. Latency in communications, however, will require that robots perform autonomous
tasks during this time window. This paper proposes an approach to maximize the likelihood of success for
teams of heterogeneous robots as they autonomously perform assembly tasks using force feedback to guide the
process. An evaluation of the challenges related to the cooperation of two heterogeneous robots in joining two
parts into a stable, rigid configuration in a loosely structured environment is conducted. The control basis is
one such approach: it recasts a control problem by concurrently running a series of controllers to encode complex
robot behavior. Each controller represents a control law that parses the underlying continuous control space
and provides asymptotic stability, even under local perturbations. The control basis approach allows several
controllers to be active concurrently through the null space control technique. Preliminary experimental results
are presented that demonstrate the effectiveness of the control basis to address the challenges of assembly tasks
by teams of heterogeneous robots.
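The defining operation of the control basis approach named above is null-space composition: a lower-priority controller acts only in the null space of a higher-priority task, so it cannot disturb it. A generic sketch (variable names are illustrative, not the authors' code):

```python
import numpy as np

def compose(u_primary, J_primary, u_secondary):
    """Run a secondary controller 'subject to' a primary one by
    projecting the secondary command into the null space of the primary
    task Jacobian, e.g., for a redundant manipulator's joint velocities.
    """
    J_pinv = np.linalg.pinv(J_primary)
    # Null-space projector: filters out any component of the secondary
    # command that would affect the primary task.
    N = np.eye(J_primary.shape[1]) - J_pinv @ J_primary
    return u_primary + N @ u_secondary
```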
Field experiments using SPEAR: a speech control system for UGVs
Author(s):
Siddharth R. Chhatpar;
Chris Blanco;
Jeffrey Czerniak;
Orin Hoffman;
Amit Juneja;
Tarun Pruthi;
Dongqing Liu;
Robert Karlsen;
Jonathan Brown
This paper reports on a Field Experiment carried out by the Human Research and Engineering Directorate at Ft. Benning
to evaluate the efficacy of using speech to control an Unmanned Ground Vehicle (UGV) concurrently with a hand-controller.
The SPEAR system, developed by Think-A-Move, provides speech control of UGVs. The system picks up
user-speech in the ear canal with an in-ear microphone. This property allows it to work efficiently in high-noise
environments, where traditional speech systems, employing external microphones, fail. It has been integrated with an
iRobot PackBot 510 with EOD kit. The integrated system allows the hand-controller to be supplemented with speech for
concurrent control. At Ft. Benning, the integrated system was tested by soldiers from the Officer Candidate School. The
Experiment had dual focus: 1) Quantitative measurement of the time taken to complete each station and the cognitive
load on users; 2) Qualitative evaluation of ease-of-use and ergonomics through soldier-feedback. Also of significant
benefit to Think-A-Move was soldier-feedback on the speech-command vocabulary employed: What spoken commands
are intuitive, and how the commands should be executed, e.g., limited-motion vs. unlimited-motion commands. Overall
results from the Experiment are reported in the paper.
Flat panel 3D display for unmanned ground vehicles
Author(s):
J. Larry Pezzaniti;
Richard Edmondson;
Justin Vaden;
Brian Hyatt;
David B. Chenault;
Joseph L. Tchon;
Tracy J. Barnidge;
Brad Pettijohn
A flat panel stereoscopic display has been developed and tested for application in unmanned ground systems.
The flat panel display has a footprint that is only slightly thicker than the same size LCD display and has been
installed in the lid of a TALON OCU. The approach uses stacked LCD displays and produces live stereo
video with passive polarized glasses but no spatial or temporal multiplexing. The analog display, which is
available in sizes from 6.4" diagonal to 17" diagonal, produces 640 × 480 stereo imagery. A comparison of
soldiers' performance using 3D vs. 2D live stereo video is given, along with a description of the display and a
discussion of the testing.
Improved situational awareness and mission performance for explosive ordnance disposal robots
Author(s):
Kent Massey;
Jared Sapp;
Eddy Tsui
The operator's situational awareness greatly affects mission performance for remote operations of Explosive Ordnance
Disposal (EOD) robots. Testing by Army EOD sergeants has shown that a Head-Aimed Remote Viewer (HARV) can significantly
increase mission performance in several key tasks, such as identifying secondary Improvised Explosive Devices
(IEDs) and maneuvering in tight quarters. A HARV system improves the operator's situational awareness by providing an
intuitive, "look around" vision interface that DARPA research [4] has shown provides a 400% improvement in the operator's
spatial understanding of the remote environment. This paper describes the results of functional testing conducted by US
Army civilian engineers and EOD sergeants at Picatinny Arsenal, in support of W15QKN-06-C-0190.
FOCU:S - future operator control unit: soldier
Author(s):
Barry J. O'Brien;
Cem Karan;
Stuart H. Young
The U.S. Army Research Laboratory's (ARL) Computational and Information Sciences Directorate (CISD) has long
been involved in autonomous asset control, specifically as it relates to small robots. Over the past year, CISD has been
making strides in the implementation of three areas of small robot autonomy, namely platform autonomy, Soldier-robot
interface, and tactical behaviors. It is CISD's belief that these three areas must be considered as a whole in order to
provide Soldiers with useful capabilities.
In addressing the Soldier-robot interface aspect, CISD has begun development on a unique dismounted controller called
the Future Operator Control Unit: Soldier (FOCU:S) that is based on an Apple iPod Touch. The iPod Touch's small
form factor, unique touch-screen input device, and the presence of general purpose computing applications such as a web
browser combine to give this device the potential to be a disruptive technology.
Setting CISD's implementation apart from other similar iPod or iPhone-based devices is the ARL software that allows
multiple robotic platforms to be controlled from a single OCU. The FOCU:S uses the same Agile Computing
Infrastructure (ACI) that all other assets in the ARL robotic control system use, enabling automated asset discovery on
any type of network. Further, a custom ad hoc routing implementation allows the FOCU:S to communicate with the
ARL ad hoc communications system and enables it to extend the range of the network.
This paper will briefly describe the current robotic control architecture employed by ARL and provide short descriptions
of existing capabilities. Further, the paper will discuss FOCU:S-specific software developed for the iPod Touch,
including capabilities enabled by the device's unique hardware.
Adaptable formations utilizing heterogeneous unmanned systems
Author(s):
Laura E. Barnes;
Richard Garcia;
MaryAnne Fields;
Kimon Valavanis
This paper addresses the problem of controlling and coordinating heterogeneous unmanned systems required to move as
a group while maintaining formation. We propose a strategy to coordinate groups of unmanned ground vehicles (UGVs)
with one or more unmanned aerial vehicles (UAVs). UAVs can be utilized in one of two ways: (1) as alpha robots to
guide the UGVs; and (2) as beta robots to surround the UGVs and adapt accordingly. In the first approach, the UAV
guides a swarm of UGVs controlling their overall formation. In the second approach, the UGVs guide the UAVs
controlling their formation. The unmanned systems are brought into a formation utilizing artificial potential fields
generated from normal and sigmoid functions. These functions control the overall swarm geometry. Nonlinear limiting
functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables forcing the
swarm to behave according to set constraints. The formations derived are subsets of elliptical curves but can be generalized
to any curvilinear shape. Both approaches are demonstrated in simulation and experimentally. To demonstrate the
second approach in simulation, a swarm of forty UAVs is utilized in a convoy protection mission. As a convoy of UGVs
travels, UAVs dynamically and intelligently adapt their formation in order to protect the convoy of vehicles as it moves.
Experimental results are presented to demonstrate the approach using a fully autonomous group of three UGVs and a
single UAV helicopter for coordination.
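As a rough illustration of the potential-field construction described above (the specific potential, gains, and limiter form are assumptions, not the paper's functions), an agent can be drawn onto an elliptical formation boundary with a saturated gradient command:

```python
import numpy as np

def swarm_velocity(p, center, a, b, k=1.0, v_max=2.0):
    """Velocity command attracting one agent to an elliptical formation
    boundary with semi-axes a, b. The tanh acts as a sigmoid-style
    nonlinear limiter so commands saturate at v_max.
    """
    d = p - center
    # Ellipse-equation residual: > 0 outside, < 0 inside, 0 on boundary.
    residual = (d[0] / a) ** 2 + (d[1] / b) ** 2 - 1.0
    grad = np.array([2.0 * d[0] / a ** 2, 2.0 * d[1] / b ** 2])
    raw = -k * residual * grad           # descend the boundary potential
    return v_max * np.tanh(raw / v_max)  # smooth saturation
```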
Road surveillance using a team of small UAVs
Author(s):
Derek Kingston
Monitoring roads is an important task for missions such as base security. This paper describes a road monitoring
system composed of a team of small UAVs equipped with gimballed cameras. Given a detailed description of the
target road, algorithms for waypoint approximation, sensor steering, and UAV spacing are developed to allow
N UAVs to survey stretches of road for activity. By automating the route generation and road tracking, human
operators are freed to study the sensor returns from the vehicles to detect anomalous behavior. The spacing
algorithm is robust to retasking and insertion/deletion of UAVs. Hardware results are presented that demonstrate
the applicability of the solution.
Discrete event command and control for networked teams with multiple missions
Author(s):
Frank L. Lewis;
Greg Robert Hudas;
Chee Khiang Pang;
Matthew B. Middleton;
Christopher McMurrough
During mission execution in military applications, the Battle Command and Battle Space Awareness capabilities of
TRADOC Pamphlet 525-66 prescribe expectations that networked teams will perform reliably under changing
mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and
Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making
process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield
conditions. A mathematically justified networked computing environment is provided called the Discrete Event Control
(DEC) Framework. DEC has the ability to provide the logical connectivity among all team participants including
mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are
developed and demonstrated on a simulation study and an implementation on a distributed wireless sensor network. The
results show that the tasks of multiple missions are correctly sequenced in real-time, and that shared resources are
suitably assigned to competing tasks under dynamically changing conditions without conflicts and bottlenecks.
Sequential learning for robot vision terrain classification
Author(s):
Gary Witus;
Robert Karlsen;
Shawn Hunt
Terrains have widely varying visual appearance depending on the type of foliage, season, current weather conditions,
recent precipitation, time of day, relative direction of lighting, presence of man-made structures and artifacts,
landscaping, etc. It is difficult, if not impossible, to specify in advance the appearance of the different terrains that will
be encountered while operating a robot in urban or rural environments. Yet people, having accumulated wide-ranging
experience, have little trouble recognizing familiar terrain types and learning to recognize new, previously unfamiliar,
terrains. Robots typically accumulate experience in "chunks" and do not have the luxury of years of training. This paper
presents recent results in sequential learning methods applied to robot terrain recognition. In this paper we explore
different sequential learning problem formulations and alternative machine learning algorithms. The investigations are
based on the same data set. We report on the initial development of an incremental fuzzy c-means clustering algorithm
capable of learning new information. We report on an approach to convert regression tree modeling, normally a batch
learning method, to batch-incremental learning. We investigate issues in formulating the sequential learning problem
and the performance of these algorithms. We also compare performance to four incremental learning classifiers. All
investigations were conducted using the same set of image features, extracted from on-board video from a small robot
traversing different terrains.
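To make the flavor of incremental terrain learning concrete, here is a deliberately simplified hard-assignment sketch: update the nearest cluster center online and spawn a new cluster when a feature vector looks unlike anything seen before. The paper's algorithm is a fuzzy c-means variant; this sketch only conveys the "learn new terrain without retraining from scratch" idea.

```python
import numpy as np

class IncrementalClusterer:
    def __init__(self, new_cluster_dist=2.0, lr=0.05):
        self.centers = []                     # one center per terrain type
        self.new_cluster_dist = new_cluster_dist
        self.lr = lr                          # online learning rate

    def update(self, x):
        """Assign feature vector x to a cluster, adapting the model."""
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - c) for c in self.centers]
        i = int(np.argmin(dists))
        if dists[i] > self.new_cluster_dist:  # novel terrain: new cluster
            self.centers.append(x.copy())
            return len(self.centers) - 1
        self.centers[i] += self.lr * (x - self.centers[i])
        return i
```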
Neural network control of nonholonomic robot formations using limited communication with reliability assessment
Author(s):
Travis Dierks;
S. Jagannathan
Architectures for the control of mobile robot formations are often described by three levels of abstraction: an
intelligence layer for task planning, a network layer for relaying commands and information throughout the formation,
and finally, at the lowest level of abstraction is a robot model layer where each robot is locally controlled to be
consistent with the current formation task. In this work, the network and robot model layers are considered, and an
output feedback control law for leader-follower based formation control is developed using neural networks (NN) and
limited communication. A NN is introduced to approximate the dynamics of the follower as well as its leader using
online weight tuning while a novel NN observer is designed to estimate the linear and angular velocities of both the
follower robots and its leader. Thus, each robot can achieve its control objective with limited knowledge of its leader's
states and dynamics while simultaneously reducing the communication load required in the network layer. It is shown
using Lyapunov theory that the errors for the entire formation are uniformly ultimately bounded while relaxing the
separation principle. Numerical results are provided to verify the theoretical conjectures, and the reliability of the
scheme is evaluated by introducing processing and communication delays, as well as communication failures.
Toward cognitive robotics
Author(s):
John E. Laird
Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including
communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the
recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to
develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training
environments and interactive computer games. For development and testing in robotic virtual environments, Soar
interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to
Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should
significantly improve Soar's abilities for the control of robots. These extensions include episodic memory, semantic memory,
reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling
of prior events and situations as well as facts about the world. Reinforcement learning provides the ability of the system
to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic
and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems
and the need for cognitive robotics to support dynamic instruction and taskability.
Stereo-vision-based terrain mapping for off-road autonomous navigation
Author(s):
Arturo L. Rankin;
Andres Huertas;
Larry H. Matthies
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and
representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping
algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two
primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and
traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas
traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where
the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered
environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored
both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive
obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact
terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go
regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence
value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we
have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain
mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some
challenges to building terrain maps with stereo range data.
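The cell contents listed above suggest a natural data structure. The following sketch (field names, weighting, and the merge rule are assumptions, not JPL's representation) shows one way a single-frame cell could be fused into the world map with temporal filtering:

```python
from dataclasses import dataclass

@dataclass
class TerrainCell:
    elevation: float     # meters
    terrain_class: int   # e.g., soil / vegetation / water label
    roughness: float
    cost: float          # traversability cost
    confidence: float    # 0..1
    no_go: bool = False  # set when a binary obstacle detector fires

def merge(world: TerrainCell, frame: TerrainCell) -> TerrainCell:
    """Confidence-weighted fusion of a single-frame cell into the world
    map; a no-go label is sticky once asserted."""
    w = world.confidence + frame.confidence
    blend = lambda a, b: (world.confidence * a + frame.confidence * b) / w
    cls = frame.terrain_class if frame.confidence > world.confidence \
        else world.terrain_class
    return TerrainCell(
        elevation=blend(world.elevation, frame.elevation),
        terrain_class=cls,
        roughness=blend(world.roughness, frame.roughness),
        cost=blend(world.cost, frame.cost),
        confidence=min(1.0, w),
        no_go=world.no_go or frame.no_go,
    )
```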
Using a laser range finder mounted on a MicroVision robot to estimate environmental parameters
Author(s):
Duc Fehr;
Nikos Papanikolopoulos
In this article we present a new robot (MicroVision) designed at the University of Minnesota
(UMN) Center for Distributed Robotics. Its design is reminiscent of previous robots built at the UMN,
such as the COTS Scouts or the eROSIs: it is composed of a body with two wheels and a tail, just like the two
aforementioned robots. However, the MicroVision has more powerful processing and sensing capabilities, and we
utilize these to compute areas in the surrounding environment using a convex hull approach. We are trying
to estimate the projected area of an object onto the ground. This is done by the computation of convex hulls
that are based on the data received from the MicroVision's laser range finder. Although localization of the robot
is an important feature in being able to compute these convex hulls, localization and mapping techniques are
only used as a tool and are not an end in this work. The main idea of this work is to demonstrate the ability
of the laser carrying MicroVision robot to move around an object in order to get a scan from each side. From
these scans, the convex hull of the shape is deduced and its projected area onto the ground is estimated.
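The projected-area computation itself is compact. A minimal sketch, assuming the scans have already been registered into a common world frame by the localization step:

```python
import numpy as np
from scipy.spatial import ConvexHull

def projected_area(scan_points_xyz):
    """Area of the object's projection onto the ground, from laser
    points gathered while circling the object."""
    ground = np.asarray(scan_points_xyz)[:, :2]  # drop height: project down
    hull = ConvexHull(ground)
    # For 2-D input, scipy reports the enclosed area in `volume`
    # (`area` is the hull perimeter).
    return hull.volume
```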
Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation
Author(s):
Yoichi Okubo;
Cang Ye;
Johann Borenstein
This paper presents a characterization study of the Hokuyo URG-04LX scanning laser rangefinder (LRF). The Hokuyo
LRF is similar in function to the Sick LRF, which has been the de-facto standard range sensor for mobile robot obstacle
avoidance and mapping applications for the last decade. Problems with the Sick LRF are its relatively large size, weight,
and power consumption, which limit its use to relatively large mobile robots. The Hokuyo LRF is substantially
smaller, lighter, and consumes less power, and is therefore more suitable for small mobile robots. The question is
whether it performs just as well as the Sick LRF in typical mobile robot applications.
In 2002, two of the authors of the present paper published a characterization study of the Sick LRF. For the present
paper we used the exact same test apparatus and test procedures as we did in the 2002 paper, but this time to characterize
the Hokuyo LRF. As a result, we are in the unique position of being able to provide not only a detailed characterization
study of the Hokuyo LRF, but also to compare the Hokuyo LRF with the Sick LRF under identical test conditions.
Among the tested characteristics are sensitivity to a variety of target surface properties and incidence angles, which may
potentially affect the sensing performance. We also discuss the performance of the Hokuyo LRF with regard to the
mixed pixels problem associated with LRFs. Lastly, the present paper provides a calibration model for improving the
accuracy of the Hokuyo LRF.
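The calibration model itself is given in the paper; purely to illustrate the general form such a model can take, a least-squares linear correction mapping raw readings to surveyed ranges might look as follows (all numbers are fabricated placeholders, not Hokuyo data):

```python
import numpy as np

# Hypothetical raw LRF readings paired with surveyed ground-truth ranges.
raw = np.array([0.52, 1.03, 2.05, 3.08, 4.10])   # meters (made up)
true = np.array([0.50, 1.00, 2.00, 3.00, 4.00])  # meters (made up)

# Fit true ~ a * raw + b and use it as the range correction.
a, b = np.polyfit(raw, true, 1)
calibrate = lambda r: a * r + b
print("corrected 2.05 m reading -> %.3f m" % calibrate(2.05))
```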
Tessellated structure from motion for midrange perception and tactical planning
Author(s):
Minbo Shim;
Samson Yilma
A typical structure from motion (SFM) technique is to construct 3-D structures from the observation of the motions of
salient features tracked over time. Although the sparse feature-based SFM provides additional solutions to robotic
platforms as a tool to augment navigation performance, the technique often fails to produce dense 3-D structures due to
the sparseness that is introduced during the feature selection and matching processes. For midrange sensing and tactical
planning, it is important to have a dense map that is able to provide not only 3-D coordinates of features, but also
clustered terrain information around the features for a better thematic representation of the scene. To overcome the
shortfalls inherent in sparse feature-based SFM, we propose an approach that uses Voronoi decomposition with an
equidistance-based triangulation applied to each segmented and classified region. The set of circumcenters
of the circum-hyperspheres used in the triangulation is formed from the feature points extracted by the SFM
processing. We also apply flat-surface detection to find traversable surfaces on which a robotic vehicle can maneuver
safely.
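A small sketch of the tessellation step, using the Delaunay triangulation (the geometric dual of the Voronoi decomposition: each triangle's circumcenter is a Voronoi vertex). Here the features are assumed projected to a 2-D ground plane; the paper's construction is more elaborate.

```python
import numpy as np
from scipy.spatial import Delaunay

def tessellate(feature_pts_2d):
    """Triangulate sparse SFM feature points so each triangle can carry
    the terrain class of its region, densifying the sparse 3-D map."""
    tri = Delaunay(np.asarray(feature_pts_2d))
    return tri.simplices  # (M, 3) vertex indices of each triangle
```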
Detecting and tracking humans using a man-portable robot
Author(s):
David Baran;
Nick Fung;
Sean Ho;
James Sherman
Large gains in the automation of human detection and tracking techniques have been made over the past several years.
Several of these techniques have been implemented on larger robotic platforms, in order to increase the situational
awareness provided by the platform. Further integration onto a smaller robotic platform that already has obstacle
detection and avoidance capabilities would allow these algorithms to be utilized in scenarios that are not plausible for
larger platforms, such as entering a building and surveying a room for human occupation with limited operator
intervention.
However, transitioning these algorithms to a man-portable robot imparts several unique constraints, including limited
power availability, size and weight restrictions, and limited processor ability. Many imaging sensors, processing
hardware, and algorithms fail to adequately address one or more of these constraints.
In this paper, we describe the design of a payload suitable for our chosen man-portable robot, the iRobot PackBot. While
the described payload was built for a PackBot, it was carefully designed to be platform agnostic, so that it can be
used on any man-portable robot. Implementations of several existing motion and face detection algorithms that have
been chosen for testing on this payload are also discussed in some detail.
A stereo camera system for autonomous maritime navigation (AMN) vehicles
Author(s):
Weihong Zhang;
Ping Zhuang;
Les Elkins;
Rick Simon;
David Gore;
Jeff Cogar;
Kevin Hildebrand;
Steve Crawford;
Joe Fuller
Spatial Integrated System (SIS), Rockville, Maryland, in collaboration with NSWC Combatant Craft Division
(NSWCCD), is applying 3D imaging technology, artificial intelligence, sensor fusion, behaviors-based control,
and system integration to a prototype 40 foot, high performance Research and Development Unmanned Surface
Vehicle (USV). This paper focuses on the development of the stereo camera system for USV navigation, which
currently consists of two high-resolution cameras and will incorporate an array of cameras in the near future.
The objectives of the camera system are to reconstruct 3D objects and detect them on the sea surface.
The paper reviews two critical technological components, namely camera calibration and stereo matching. In
stereo matching, a comprehensive study is presented to compare the algorithmic performance resulting from
various information sources (intensity, RGB values, Gaussian gradients, and Gaussian Laplacians), windowing
schemes (single windows, and multiple windows with the same or different centers), and correlation metrics (convolution,
absolute difference, and histogram). To enhance system performance, a sub-pixel edge detection technique
has been introduced to address the precision requirement, and a noise-removal post-processing step has been added to
eliminate noisy points from the reconstructed 3D point clouds. Finally, experimental results are reported to
demonstrate the performance of the stereo camera system.
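For concreteness, one of the correlation metrics compared above, the sum of absolute differences over a single square window, reduces to a few lines for a winner-take-all disparity at one pixel (a textbook sketch, not the system's matcher; images are assumed rectified grayscale arrays):

```python
import numpy as np

def sad_disparity(left, right, row, col, max_d=64, half=5):
    """Winner-take-all disparity at (row, col) by minimizing the sum of
    absolute differences over a (2*half+1)^2 window."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_d):
        c = col - d                      # epipolar search along the row
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d  # depth = focal_length * baseline / disparity
```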
Detection of moving targets from a moving ground platform
Author(s):
Thomas B. Sebastian;
Christopher M. Wynnyk;
Peter H. Tu;
Sabrina B. Barnes
Semi-autonomous operation of intelligent vehicles may require that such platforms maintain a basic situational
awareness with respect to people, other vehicles and their intent. These vehicles should be able to operate safely
among people and other vehicles, and be able to perceive threats and respond accordingly. A key requirement is
the ability to detect people and vehicles from a moving platform. We have developed one such algorithm using
video cameras mounted on the vehicle. Our person detection algorithms model the shape and appearance of
the person instead of modeling the background. This algorithm uses histograms of oriented gradients (HOG),
which model shape and appearance using image edge histograms. These HOG descriptors are computed on an
exhaustive set of image windows, which are then classified as person/non-person using a support vector machine
classifier. The image windows are computed using camera calibration, which provides approximate size of people
with respect to their location in the imagery. The algorithm is flexible and has been trained for different domains
such as urban, rural and wooded scenes. We have designed a sensor platform that can be mounted on a moving
vehicle to collect video data of pedestrians. Using manually annotated ground-truth data we have evaluated
the person detection algorithm in terms of true positive and false positive rates. This paper provides a detailed
overview of the algorithm, describes the experiments conducted and reports on algorithmic performance.
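The pipeline described, HOG windows scored by a support vector machine, matches the Dalal-Triggs detector that OpenCV ships with a pretrained pedestrian model, so a minimal stand-in looks like this (the file paths are placeholders; the paper's detector is trained on its own domains):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")  # placeholder path
# Without calibration, windows are scanned exhaustively over scales;
# calibration would instead constrain window size by image location.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", frame)
```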
Increasing agility in unmanned ground vehicles using variable internal mass and inertial properties
Author(s):
Chenghui Nie;
Simo Cusi Van Dooren;
Jainam Shah;
Matthew Spenko
Unmanned Ground Vehicles (UGV) that possess agility, or the ability to quickly change directions without
a significant loss in speed, would have several advantages in field operations over conventional UGVs. The
agile UGVs would have greater maneuverability in cluttered environments and improved obstacle avoidance
capabilities. The UGVs would also be able to better recover from unwanted dynamic behaviors. This paper
presents a novel method of increasing UGV agility by actively altering the location of the vehicle's center of mass
during locomotion. This allows the vehicle to execute extreme dynamic maneuvers by controlling the normal
force acting on the wheels. A theoretical basis for this phenomenon is presented and experimental results are
shown that validate the approach.
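The lever behind this idea is ordinary statics: shifting the center of mass redistributes the normal forces on the axles, and with them the available traction. A planar sketch (not the paper's dynamic model):

```python
def axle_normal_forces(m, g, wheelbase, x_cm):
    """Static front/rear axle normal forces for a 4-wheel UGV, with the
    center of mass x_cm meters forward of the rear axle. Moment balance
    about the rear axle gives the front load."""
    N_front = m * g * x_cm / wheelbase
    N_rear = m * g * (wheelbase - x_cm) / wheelbase
    return N_front, N_rear

# Shifting the CM rearward on a 20 kg, 0.5 m wheelbase UGV biases load
# (and traction) to the rear wheels for an aggressive maneuver.
print(axle_normal_forces(20.0, 9.81, 0.5, 0.25))  # balanced
print(axle_normal_forces(20.0, 9.81, 0.5, 0.15))  # rear-biased
```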
Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments
Author(s):
Markus Achtelik;
Abraham Bachrach;
Ruijie He;
Samuel Prentice;
Nicholas Roy
This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown
indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera
sensors are both well-suited for recovering the helicopter's relative motion and velocity. Because they use different cues
from the environment, each sensor has its own set of advantages and limitations that are complementary to those of the other sensor.
Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an
autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results
in this direction, describing the key components for autonomous navigation using either of the two sensors separately.
Test results of autonomous behaviors for urban environment exploration
Author(s):
G. Ahuja;
D. Fellars;
G. Kogut;
E. Pacis Rius;
B. Sights;
H. R. Everett
Under various collaborative efforts with other government labs, private industry, and academia, SPAWAR Systems
Center Pacific (SSC Pacific) is developing and testing advanced autonomous behaviors for navigation, mapping, and
exploration in various indoor and outdoor settings. As part of the Urban Environment Exploration project, SSC
Pacific is maturing those technologies and sensor payload configurations that enable man-portable robots to
effectively operate within the challenging conditions of urban environments. For example, additional means to
augment GPS are needed when operating in and around urban structures. A MOUT site at Camp Pendleton was
selected as the test bed because of its variety in building characteristics, paved/unpaved roads, and rough terrain.
Metrics are collected based on the overall system's ability to explore different coverage areas, as well as the
performance of the individual component behaviors such as localization and mapping. The behaviors have been
developed to be portable and independent of one another, and have been integrated under a generic behavior
architecture called the Autonomous Capability Suite. This paper describes the tested behaviors, sensors, and
behavior architecture, the variables of the test environment, and the performance results collected so far.
Toward a generic UGV autopilot
Author(s):
Kevin L. Moore;
Mark Whitehorn;
Alejandro J. Weinstein;
Junjun Xia
Much of the success of small unmanned air vehicles (UAVs) has arguably been due to the widespread availability of
low-cost, portable autopilots. While the development of unmanned ground vehicles (UGVs) has led to significant
achievements, as typified by recent grand challenge events, to date the UGV equivalent of the UAV autopilot
is not available. In this paper we describe our recent research aimed at the development of a generic UGV
autopilot. Assuming we are given a drive-by-wire vehicle that accepts as inputs steering, brake, and throttle
commands, we present a system that adds sonar ranging sensors, GPS/IMU/odometry, stereo camera, and
scanning laser sensors, together with a variety of interfacing and communication hardware. The system also
includes a finite state machine-based software architecture as well as a graphical user interface for the operator
control unit (OCU). Algorithms are presented that enable an end-to-end scenario whereby an operator can view
stereo images as seen by the vehicle and can input GPS waypoints either from a map or in the vehicle's scene-view
image, at which point the system uses the environmental sensors as inputs to a Kalman filter for pose estimation
and then computes control actions to move through the waypoint list, while avoiding obstacles. The long-term
goal of the research is a system that is generically applicable to any drive-by-wire unmanned ground vehicle.
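A minimal sketch of the waypoint-following step in such an end-to-end chain, assuming the Kalman-filtered pose is available and obstacle avoidance modulates the command downstream (structure and gains are illustrative, not the authors' controller):

```python
import math

def steer_to_waypoints(pose, waypoints, capture_radius=2.0, k=1.5):
    """Proportional heading control toward the active waypoint,
    advancing the list once inside the capture radius.
    pose = (x, y, heading_rad); waypoints = mutable list of (x, y)."""
    x, y, th = pose
    while waypoints:
        wx, wy = waypoints[0]
        if math.hypot(wx - x, wy - y) < capture_radius:
            waypoints.pop(0)          # reached: advance to next waypoint
            continue
        bearing = math.atan2(wy - y, wx - x)
        # Wrap the heading error to (-pi, pi] before applying the gain.
        err = math.atan2(math.sin(bearing - th), math.cos(bearing - th))
        return k * err                # steering command
    return 0.0                        # list exhausted
```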
An interactive physics-based unmanned ground vehicle simulator leveraging open source gaming technology: progress in the development and application of the virtual autonomous navigation environment (VANE) desktop
Author(s):
Mitchell M. Rohde;
Justin Crawford;
Matthew Toschlog;
Karl D. Iagnemma;
Guarav Kewlani;
Christopher L. Cummins;
Randolph A. Jones;
David A. Horner
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few
dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles
(UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer
Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing
(HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE
HPC research is a real-time desktop simulation application under development by the authors that provides a portal into
the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This
VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables
analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to
interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages
rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia
visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and
customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations.
ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques
from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf
(COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several
initial applications of the system.
Tracked robot controllers for climbing obstacles autonomously
Author(s):
Isabelle Vincent
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while
avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously
has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit
simple autonomous behaviours compared to the human ability to move in the world. This paper presents the
control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its tracks
configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing
obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and
vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the
reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning.
The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the
effectiveness of the proposed approach for crossing obstacles.
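The reinforcement learning component can be as simple as a tabular Q-learning update over discretized robot/obstacle geometries and track-angle actions; the sketch below is generic and illustrative, not the paper's learner.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: Q is a dict keyed by (state, action),
    r is the climbing-progress reward observed after taking a in s."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```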
Evaluation of terrain parameter estimation using a stochastic terrain model
Author(s):
Danielle A. Dumond;
Laura E. Ray;
Eric Trautmann
Autonomous vehicles driving on off-road terrain exhibit substantial variation in mobility characteristics even when the
terrain is horizontal and qualitatively homogeneous. This paper presents a simple stochastic model for characterizing
observed variability in vehicle response to terrain and for representing transitions between homogeneous terrain with
local variability or between heterogeneous terrain types. Such a model provides a means for more realistic evaluation of
terrain parameter estimation methods through simulation. A stochastic terrain model in which friction angle and soil
cohesion are represented by Gaussian random variables qualitatively represents observed variability in traction vs. slip
characteristics measured experimentally. The stochastic terrain model is used to evaluate a terrain parameter estimation
method in which terrain forces are first estimated independent of a terrain model, and subsequently, parameters of a
terrain model, such as soil cohesion, friction angle, and stress distribution parameters are determined from estimated
vehicle-terrain forces. Simulation results show drawbar pull vs. slip characteristics resulting from terrain parameter
estimation are within statistical bounds established by the stochastic terrain model.
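The stochastic model can be sampled directly; assuming the classical Mohr-Coulomb relation tau = c + sigma_n tan(phi) links the two Gaussian parameters to shear strength, a sketch of how statistical bounds arise (parameter values are placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shear_strength(sigma_n, n=1000,
                          phi_mean=np.deg2rad(30.0), phi_std=np.deg2rad(2.0),
                          c_mean=1.0e3, c_std=0.2e3):
    """Draw n terrain realizations with Gaussian friction angle phi and
    cohesion c, returning mean and std of Mohr-Coulomb shear strength
    for a given normal stress sigma_n (Pa)."""
    phi = rng.normal(phi_mean, phi_std, n)
    c = rng.normal(c_mean, c_std, n)
    tau = c + sigma_n * np.tan(phi)
    return tau.mean(), tau.std()   # basis for statistical bounds

print(sample_shear_strength(sigma_n=20e3))
```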
Land, sea, and air unmanned systems research and development at SPAWAR Systems Center Pacific
Author(s):
Hoa G. Nguyen;
Robin Laird;
Greg Kogut;
John Andrews;
Barbara Fletcher;
Todd Webber;
Rich Arrieta;
H. R. Everett
The Space and Naval Warfare (SPAWAR) Systems Center Pacific (SSC Pacific) has a long and extensive history in
unmanned systems research and development, starting with undersea applications in the 1960s and expanding into
ground and air systems in the 1980s. In the ground domain, we are addressing force-protection scenarios using large
unmanned ground vehicles (UGVs) and fixed sensors, and simultaneously pursuing tactical and explosive ordnance
disposal (EOD) operations with small man-portable robots. Technology thrusts include improving robotic intelligence
and functionality, autonomous navigation and world modeling in urban environments, extended operational range of
small teleoperated UGVs, enhanced human-robot interaction, and incorporation of remotely operated weapon systems.
On the sea surface, we are pushing the envelope on dynamic obstacle avoidance while conforming to established
nautical rules-of-the-road. In the air, we are addressing cooperative behaviors between UGVs and small
vertical-takeoff-and-landing unmanned air vehicles (UAVs). Underwater applications involve very shallow water mine
countermeasures, ship hull inspection, oceanographic data collection, and deep ocean access. Specific technology thrusts
include fiber-optic communications, adaptive mission controllers, advanced navigation techniques, and concepts of
operations (CONOPs) development. This paper provides a review of recent accomplishments and current status of a
number of projects in these areas.
Evolving U.S. Department of Defense (DoD) unmanned systems research, development, test, acquisition, and evaluation (RDTA&E)
Author(s):
Robin T. Laird
As research, development, test, acquisition and evaluation (RDTA&E) of unmanned systems experiences increased
attention by the Department of Defense (DoD), elements within the acquisition community are responding by mapping
out changes to management systems, including fundamental policy shifts, to support the expanding role of robots on the
battlefield. Unmanned systems - air, land, and sea - are increasing in complexity and capability such that their use is
becoming pervasive in mission areas such as explosives ordnance disposal and aerial surveillance. This paper reviews
the state of unmanned systems RDTA&E within the context of Defense Acquisition, highlighting existing and emerging
public policy, relevant acquisition reform, the resulting organizational adaptations, and the success with which one
enterprise has brought together the sometimes competing values found within the major elements of the larger
acquisition system.
TARDEC's Intelligent Ground Systems overview
Author(s):
Jeffrey F. Jaster
The mission of the Intelligent Ground Systems (IGS) Area at the Tank Automotive Research, Development and
Engineering Center (TARDEC) is to conduct technology maturation and integration to increase Soldier-robot
control/interface intuitiveness and robotic ground system robustness, functionality, and overall system effectiveness for
the Future Combat System Brigade Combat Team and the Robotics Systems Joint Project Office, and to deliver
game-changing capabilities to be fielded beyond the current force. This is accomplished through technology component
development focused on
increasing unmanned ground vehicle autonomy, optimizing crew interfaces and mission planners that capture
commanders' intent, integrating payloads that provide 360 degree local situational awareness and expanding current
UGV tactical behavior, learning and adaptation capabilities. The integration of these technology components into
ground vehicle demonstrators permits engineering evaluation, User assessment and performance characterization in
increasingly complex, dynamic and relevant environments to include high speed on road or cross country operations, all
weather/visibility conditions and military operations in urban terrain (MOUT). Focused testing and experimentation is
directed at reducing PM risk areas (safe operations, autonomous maneuver, manned-unmanned collaboration) and
transitioning technology in the form of hardware, software algorithms, test and performance data, as well as User
feedback and lessons learned.
Joint collaborative technology experiment
Author(s):
Michael Wills;
Donny Ciccimaro;
See Yee;
Thomas Denewiler;
Nicholas Stroumtsos;
John Messamore;
Rodney Brown;
Brian Skibba;
Daniel Clapp;
Jeff Wit;
Randy J. Shirts;
Gary N. Dion;
Gary S. Anselmo
Use of unmanned systems is rapidly growing within the military and civilian sectors in a variety of roles including
reconnaissance, surveillance, explosive ordnance disposal (EOD), and force-protection and perimeter security. As
utilization of these systems grows at an ever-increasing rate, the need for unmanned systems teaming and inter-system
collaboration becomes apparent. Collaboration provides a means of enhancing individual system capabilities through
relevant data exchange that contributes to cooperative behaviors between systems and enables new capabilities not
possible if the systems operate independently. A collaborative networked approach to development holds the promise of
adding mission capability while simultaneously reducing the workload of system operators. The Joint Collaborative
Technology Experiment (JCTE) joins individual technology development efforts within the Air Force, Navy, and Army
to demonstrate the potential benefits of interoperable multiple system collaboration in a force-protection application.
JCTE participants are the Air Force Research Laboratory, Materials and Manufacturing Directorate, Airbase
Technologies Division, Force Protection Branch (AFRL/RXQF); the Army Aviation and Missile Research,
Development, and Engineering Center Software Engineering Directorate (AMRDEC SED); and the Space and Naval
Warfare Systems Center - Pacific (SSC Pacific) Unmanned Systems Branch operating with funding provided by the
Joint Ground Robotics Enterprise (JGRE). This paper will describe the efforts to date in system development by the
three partner organizations, development of collaborative behaviors and experimentation in the force-protection
application, results and lessons learned at a technical demonstration, simulation results, and a path forward for future
work.
A vision-based robotic follower vehicle
Author(s):
Jared L. Giesbrecht;
Hien K. Goi;
Timothy D. Barfoot;
Bruce A. Francis
Show Abstract
This paper presents the development of a vision-based robotic follower system with the eventual goal of autonomous
convoying. The follower vehicle, trained at run-time, tracks an arbitrary lead vehicle and estimates
the leader's position from the sequence of video images. Camera pan, tilt, and zoom keep the leader in the follower's
field of view as the follower drives the leader's path. The system was demonstrated following vehicles in an on-road
scenario, as well as dismounted human leaders off-road.
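The abstract does not give the follower's control law, so the Python below is an illustrative sketch only: a pure-pursuit
style follower over the vision-estimated leader positions, where leader_path, lookahead, and wheelbase are hypothetical
names and parameters. Pure pursuit suits convoying because it tracks the leader's recorded path rather than the leader
itself, so the follower does not cut corners.

    import math

    def pure_pursuit_steer(pose, leader_path, lookahead=3.0, wheelbase=1.2):
        # pose: follower (x, y, heading); leader_path: vision-estimated leader
        # positions, oldest first. Steer toward the first stored point at
        # least `lookahead` meters away, tracing the leader's path.
        x, y, heading = pose
        target = leader_path[-1]
        for px, py in leader_path:
            if math.hypot(px - x, py - y) >= lookahead:
                target = (px, py)
                break
        dx, dy = target[0] - x, target[1] - y
        lateral = -math.sin(heading) * dx + math.cos(heading) * dy
        dist = math.hypot(dx, dy)
        curvature = 2.0 * lateral / (dist ** 2) if dist > 0 else 0.0
        return math.atan(wheelbase * curvature)  # bicycle-model steer angle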
3D visualization for improved manipulation and mobility in EOD and combat engineering applications
Author(s):
Joel Alberts;
John Edwards;
Josh Johnston;
Jeff Ferrin
Show Abstract
This paper presents a scalable modeling technique, developed by Autonomous Solutions under contract with
NAVEODTECHDIV and TARDEC, that displays 3D data from a priori and real-time sensors. A novel algorithm
provides structure and texture to 3D point clouds, while an octree repository management technique scales level of detail
for seamless zooming from kilometer to centimeter scales. This immersive 3D environment enables direct measurement
of absolute size, automated manipulator placement, and indication of unique world coordinates for navigation. Since a
priori data is updated by new information collected with stereovision and lidar sensors, high-accuracy pose is not a
requirement.
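The abstract does not publish the repository code, but the octree level-of-detail idea can be sketched generically: once a
cell would project to less than about one pixel at the viewer's distance, a single representative point stands in for
everything below it. All names and thresholds here are assumptions.

    import math

    class OctreeNode:
        def __init__(self, center, half_size):
            self.center = center        # (x, y, z) center of this cubic cell
            self.half_size = half_size  # half the cell edge length
            self.points = []            # points stored here while a leaf
            self.children = None        # list of 8 children once subdivided

        def insert(self, p, max_points=32, min_half=0.01):
            if self.children is None:
                self.points.append(p)
                if len(self.points) > max_points and self.half_size > min_half:
                    self._subdivide(max_points, min_half)
            else:
                self._child_for(p).insert(p, max_points, min_half)

        def _subdivide(self, max_points, min_half):
            h = self.half_size / 2.0
            cx, cy, cz = self.center
            # Children ordered so index = 4*(x>=cx) + 2*(y>=cy) + (z>=cz).
            self.children = [OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
                             for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
            for p in self.points:
                self._child_for(p).insert(p, max_points, min_half)
            self.points = []

        def _child_for(self, p):
            cx, cy, cz = self.center
            return self.children[4 * (p[0] >= cx) + 2 * (p[1] >= cy) + (p[2] >= cz)]

        def query_lod(self, viewer, pixels_per_meter=50.0):
            # Coarsen with distance: a cell smaller than ~1 projected pixel
            # is summarized by one point; otherwise recurse for detail.
            d = math.dist(self.center, viewer)
            if d > 0 and (2 * self.half_size) * pixels_per_meter / d < 1.0:
                yield self.center
            elif self.children is None:
                yield from self.points
            else:
                for c in self.children:
                    yield from c.query_lod(viewer, pixels_per_meter)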
OzBot and haptics: remote surveillance to physical presence
Author(s):
James Mullins;
Mick Fielding;
Saeid Nahavandi
Show Abstract
This paper reports on robotic and haptic technologies and capabilities developed for the law
enforcement and defence community within Australia by the Centre for Intelligent Systems
Research (CISR). The OzBot series of small and medium surveillance robots have been
designed in Australia and evaluated by law enforcement and defence personnel to determine
suitability and ruggedness in a variety of environments. Using custom-developed digital
electronics and featuring expandable data buses including RS485, I2C, RS232, video, and
Ethernet, the robots can be directly connected to many off-the-shelf payloads such as gas
sensors, x-ray sources, and camera systems including thermal and night vision.
Differentiating the OzBot platform from its peers is its ability to be integrated directly with haptic
technology or the 'haptic bubble' developed by CISR. Haptic interfaces allow an operator to
physically 'feel' remote environments through position-force control and experience realistic
force feedback. By adding the capability to remotely grasp an object and feel its weight,
texture, and other physical properties in real time at the remote ground control unit, haptic
augmentation greatly improves an operator's situational awareness in an environment where
remote-system feedback is often limited.
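To make position-force control concrete, a minimal spring-damper coupling of the kind commonly used to render
contact forces on a haptic master is sketched below; the gains are illustrative values, not CISR's, and the function name
is hypothetical.

    K_P = 200.0  # N/m, virtual spring stiffness (illustrative)
    K_D = 5.0    # N*s/m, virtual damping (illustrative)

    def feedback_force(master_pos, slave_pos, master_vel, slave_vel):
        # Spring-damper coupling: the operator feels a force proportional to
        # how far the remote gripper lags the master command, e.g. when the
        # gripper is resisted by an object's weight or stiffness.
        return [-K_P * (m - s) - K_D * (mv - sv)
                for m, s, mv, sv in zip(master_pos, slave_pos,
                                        master_vel, slave_vel)]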
Laser-assisted real-time and scaled telerobotic control of a manipulator for defense and security applications
Author(s):
Eduardo Veras;
Karan Khokar;
Redwan Alqasemi;
Rajiv Dubey
Show Abstract
In this paper, we present a novel concept of shared autonomous and teleoperation control of a remote manipulator with
laser-based assistance in a hard real-time environment for defense and security applications. The laser pointer enables
the user to make high-level decisions, such as target object selection, and it enables the system to generate trajectories
and virtual constraints to be used for autonomous motion or scaled teleoperation. Autonomous, position-teleoperation
and velocity-teleoperation control methods have been implemented in the control code. Scaling and virtual fixtures have
been used in the teleoperation-based control, depending on the user preference, for faster and easier target locking and
task execution. A real-time QNX operating system has been used to remotely control a PUMA 560 robotic arm using a
Phantom Omni haptic device as a master through a TCP/IP port. The system was implemented with different control
modes, and human subjects were trained to use the system to execute several tasks. Examples of defense and security
applications were explored and presented.
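As a sketch of how scaled teleoperation and a virtual fixture can combine (the gains, the line-fixture form, and the
function names are assumptions, not the authors' control code), the master velocity can be scaled and its component off
the line toward the laser-designated target attenuated, which speeds target locking:

    import numpy as np

    def fixture_velocity(master_vel, tool_pos, target_pos,
                         scale=0.5, off_axis_gain=0.2):
        # Scale the master command, then attenuate motion perpendicular to
        # the line from the tool to the laser-designated target.
        v = scale * np.asarray(master_vel, dtype=float)
        u = np.asarray(target_pos, dtype=float) - np.asarray(tool_pos, dtype=float)
        dist = np.linalg.norm(u)
        if dist < 1e-6:              # at the target: no preferred direction
            return v
        u /= dist                    # unit vector toward the target
        along = np.dot(v, u) * u     # component toward the target
        return along + off_axis_gain * (v - along)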
Stingray: high-speed control of small UGVs in urban terrain
Author(s):
Brian Yamauchi;
Kent Massey
Show Abstract
For the TARDEC-funded Stingray Project, iRobot Corporation and Chatten Associates are developing technologies that
will allow small UGVs to operate at tactically useful speeds. In previous work, we integrated a Chatten Head-Aimed
Remote Viewer (HARV) with an iRobot Warrior UGV, and used the HARV to drive the Warrior, as well as a small,
high-speed, gas-powered UGV surrogate. In this paper, we describe our continuing work implementing semiautonomous
driver-assist behaviors to help an operator control a small UGV at high speeds. We have implemented an
IMU-based heading control behavior that enables tracked vehicles to maintain accurate heading control even over rough
terrain. We are also developing a low-latency, low-bandwidth, high-quality digital video protocol to support immersive
visual telepresence. Our experiments show that a video compression codec using the H.264 algorithm can produce
several times better resolution than a Motion JPEG video stream while utilizing the same limited bandwidth and the
same low latency. With further enhancements, our H.264 codec will provide an order-of-magnitude improvement in
quality while retaining latency comparable to Motion JPEG and operating within the same bandwidth.
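The abstract does not give the heading controller's structure; a minimal sketch, assuming a PD law on the wrapped
heading error with illustrative gains, is:

    import math

    KP, KD = 2.0, 0.3  # illustrative gains, not Stingray's

    def heading_control(desired, measured, yaw_rate):
        # Wrap the IMU heading error to (-pi, pi] so the vehicle always
        # turns the short way; output is a differential track command.
        err = math.atan2(math.sin(desired - measured),
                         math.cos(desired - measured))
        return KP * err - KD * yaw_rate

The rate term damps the heading oscillation that track slip on rough terrain would otherwise excite.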
Implementation of small robot autonomy in an integrated environment
Author(s):
Barry J. O'Brien;
Laurel Sadler
Show Abstract
The U.S. Army Research Laboratory's (ARL) Computational and Information Sciences Directorate (CISD) has long
been involved in autonomous asset control, specifically as it relates to small robots. Over the past year, CISD has been
making strides in the implementation of three areas of small robot autonomy, namely platform autonomy, Soldier-robot
interface, and tactical behaviors. It is CISD's belief that these three areas must be considered as a whole in order to
provide Soldiers with useful capabilities.
In addressing these areas, CISD has integrated a COTS LADAR into the head of an iRobot PackBot Explorer, providing
ranging information with minimal disruption to the physical characteristics of the platform. This range data feeds an
implementation of obstacle detection and avoidance (OD/OA), leveraged from an existing autonomy software suite and
running on the platform's native processor. These capabilities will serve as the foundation of our targeted behavior-based
control methodologies. The first behavior is guarded tele-operation that augments the existing ARL robotic
control infrastructure. The second is the implementation of a multi-robot cooperative mapping behavior. Developed at
ARL, collaborative simultaneous localization and mapping (CSLAM) will allow multiple robots to build a common map
of an area, providing the Soldier operator with a singular view of that area.
This paper will describe the hardware and software integration of the LADAR sensor into the ARL robotic control
system. Further, the paper will discuss the implementation of the small robot OD/OA and CSLAM software components
performed by ARL, as well as results on their performance and benefits to the Soldier.
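For concreteness, guarded tele-operation can be sketched as a speed governor on the operator's command driven by the
nearest LADAR return in the direction of travel; the thresholds below are illustrative, not ARL's.

    STOP_RANGE = 0.5  # m: halt inside this range (illustrative)
    SLOW_RANGE = 2.0  # m: begin slowing inside this range (illustrative)

    def guarded_speed(cmd_speed, ranges_ahead):
        # ranges_ahead: LADAR returns (meters) inside the vehicle's
        # projected path; ramp the operator's command down near obstacles.
        nearest = min(ranges_ahead, default=float("inf"))
        if nearest <= STOP_RANGE:
            return 0.0
        if nearest >= SLOW_RANGE:
            return cmd_speed
        return cmd_speed * (nearest - STOP_RANGE) / (SLOW_RANGE - STOP_RANGE)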
Vision-based effective dispersion of miniature robots by using local sensing
Author(s):
Hyeun Jeong Min;
Nikolaos Papanikolopoulos
Show Abstract
This paper introduces a vision-based algorithm for effectively dispersing multiple robots to accomplish search or
surveillance missions by using local sensing information. A marsupial system can deliver several small robots,
and small robots are useful for strategically placing themselves in areas of interest. The issue in this marsupial
system approach is how to effectively deploy multiple robots to cover the whole task area. To accomplish this
goal, a marsupial system equipped with multiple miniature robots first drives into the center of the task area and
unloads them in order. Several experimental results are presented using the Loper, Adelopod, Saddlepack, and
Explorer robotic platforms developed at the University of Minnesota.
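One generic reading of vision-based dispersion from local sensing, offered only as a sketch (the authors' algorithm is
more involved), is a repulsive rule on the bearings of currently visible neighbors:

    import math

    def dispersion_heading(neighbor_bearings):
        # neighbor_bearings: bearings (radians, robot frame) of robots the
        # camera currently sees. Steer opposite their mean direction; with
        # no neighbor in view, return None and keep exploring.
        if not neighbor_bearings:
            return None
        sx = sum(math.cos(b) for b in neighbor_bearings)
        sy = sum(math.sin(b) for b in neighbor_bearings)
        return math.atan2(-sy, -sx)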
Ten-kilogram vehicle autonomous operations
Author(s):
John R. Rogers;
Christopher Korpela;
Kevin Quigley
Show Abstract
A low-cost unmanned ground vehicle designed to benchmark high-speed performance is presented. The E-Maxx four-wheel-drive
radio-controlled vehicle equipped with a Robostix controller is proposed as a low-cost, high-speed robotic platform useful for military
operations. The vehicle weighs less than ten kilograms, making it easily portable by one person. Keeping cost low is a major
consideration in the design, with the aim of providing a disposable military robot. The suitability of the platform was evaluated and
results are presented. Commercial-Off-The-Shelf (COTS) upgrades to the basic vehicle are recommended for durability. A procedure
was established for bird's-eye-view video recording to document vehicle dynamics. Driver/vehicle performance is quantified by entry
velocity, exit velocity, and total time through a 90° turn on low-friction terrain. A setup for measuring these values is presented. Expert
drivers use controlled skidding to minimize time through turns, and the long-term goal of the project is to automate such expert behaviors.
Results of vehicle performance under human control are presented and stand as a reference for future autonomy.
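The three reported metrics follow directly from a timestamped overhead track; a sketch, assuming a (t, x, y) track
format and that the turn's gate-crossing indices have already been found:

    import math

    def turn_metrics(track, entry_idx, exit_idx):
        # track: list of (t, x, y) samples from the bird's-eye video;
        # entry_idx/exit_idx mark the 90-degree turn's gate crossings.
        def speed(i):
            (t0, x0, y0), (t1, x1, y1) = track[i - 1], track[i]
            return math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        return (speed(entry_idx), speed(exit_idx),
                track[exit_idx][0] - track[entry_idx][0])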
Inexpensive robot for remote detection of UXO
Author(s):
Joshua Galloway;
Daren R. Wilcox
Show Abstract
An Electrical and Computer Engineering Technology (ECET) Honors student developed a prototype for an inexpensive
unexploded ordnance (UXO) seeking robot. The system provided functionality including: locating metallic landmines
and UXO within a defined area/environment, recording the locations of said landmines and UXOs, and storing the data
off-unit via an IEEE 802.11b/g connection to a Windows- or Linux-based laptop computer. Application of the prototype
and corresponding research may lend themselves to de-mining the more than 100 landmine/unexploded-ordnance-affected
countries in the world, particularly in desert terrain (US Department of State Fact Sheet, 2 July 2003).
On software implementation of reliability of unmanned ground vehicles
Author(s):
Arati M. Dixit;
Kassem Saab;
Harpreet Singh;
Adam Mustapha;
Grant R. Gerhart
Show Abstract
The critical role of unmanned intelligent ground vehicles is evident from a variety of defense applications. Fuzzy
reliability predicts the reliability of a convoy of unmanned vehicles represented as a communication network, with
nodes as vehicle stations and branches as paths between the stations, and thereby affirms the performance of the system.
The fuzzy reliability of a convoy of vehicles results from combined fuzzy and Boolean approaches: node and branch
reliabilities are calculated using the fuzzy approach, while terminal reliability is calculated using Boolean algebra. A
software implementation of the fuzzy reliability has been completed. To improve the performance evaluation of the
convoy, node failure, i.e., failure of a convoy station, is also taken into consideration. Depending upon the predicted
reliability, a commander can take appropriate decisions in the battlefield. The proposed algorithm determines all paths
from source to destination and forms the corresponding Boolean expressions; a non-overlapping simplification is
obtained and transformed into a mathematical expression into which reliability values are substituted. The results of the
design, implementation, and simulation of the reliability of a convoy of unmanned vehicles are given. It is hoped that
the proposed algorithm and its implementation will be useful for sensor networks in general and graphs of unmanned
vehicles in particular.
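The Boolean stage can be illustrated as follows: enumerate the simple paths between the source and destination
stations, then compute terminal reliability as the probability that at least one path is fully up. The sketch below
substitutes inclusion-exclusion for the authors' non-overlapping simplification and assumes independent branch
reliabilities, as the fuzzy stage would supply.

    from itertools import combinations

    def simple_paths(adj, src, dst, path=None):
        # Depth-first enumeration of all simple (loop-free) paths src -> dst.
        path = path or [src]
        if src == dst:
            yield tuple(path)
            return
        for nxt in adj.get(src, ()):
            if nxt not in path:
                yield from simple_paths(adj, nxt, dst, path + [nxt])

    def terminal_reliability(adj, rel, src, dst):
        # rel maps each branch (u, v) to its reliability in [0, 1];
        # branches are independent. Inclusion-exclusion over path-up events.
        paths = [tuple(zip(p, p[1:])) for p in simple_paths(adj, src, dst)]
        total = 0.0
        for k in range(1, len(paths) + 1):
            for subset in combinations(paths, k):
                edges = set().union(*subset)  # branches used by this subset
                prob = 1.0
                for e in edges:
                    prob *= rel[e]
                total += (-1) ** (k + 1) * prob
        return total

    # Two edge-disjoint routes between stations S and D:
    adj = {"S": ["A", "B"], "A": ["D"], "B": ["D"]}
    rel = {("S", "A"): 0.9, ("A", "D"): 0.9,
           ("S", "B"): 0.8, ("B", "D"): 0.8}
    print(terminal_reliability(adj, rel, "S", "D"))  # 0.81 + 0.64 - 0.81*0.64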
UGV application modeling and sensor simulation using a rapid prototyping testbed environment
Author(s):
James Falasco;
Steve O'Leary
Show Abstract
This paper reviews hardware and software solutions that allow for rapid prototyping of new or modified UGV sensor
designs, mission payloads, and functional sub-assemblies. We define reconfigurable computing in the context of placing
various PMC modules, depending upon mission scenarios, onto a base SBC (Single Board Computer) or multiprocessor
architecture to achieve maximum scalability. Also addressed are the sensor and computing packaging aspects and how
such payloads could be integrated with unattended acoustic sensor topologies, providing a more complete fused
"picture" to decision makers. We review how these modular payloads could be integrated with unattended ground
sensors to collaborate on mission requirements.
Comparison of real-time performance of Kalman filter-based SLAM methods for unmanned ground vehicle (UGV) navigation
Author(s):
Hakan Temeltaş;
Deniz Kavak
Show Abstract
Simultaneous Localization and Mapping (SLAM) for mobile robot navigation has two main problems. The first is the
computational complexity due to the state vector growing with every landmark added to the environment. The second is
data association, which matches the observations with the landmarks in the state vector. In this study, we compare
Extended Kalman Filter (EKF) based SLAM, a well-developed and well-known algorithm, with Compressed Extended
Kalman Filter (CEKF) based SLAM, developed to decrease the computational complexity of EKF-based SLAM. We
wrote two simulation programs to investigate these techniques. The first program compares EKF- and CEKF-based
SLAM in terms of computational complexity and covariance matrix error for different numbers of landmarks. In the
second program, EKF- and CEKF-based SLAM simulations are presented. For this simulation, a differential-drive
vehicle that moves on a 10 m square trajectory and an LMS 200 2-D laser range finder are modeled, and landmarks are
randomly scattered in that 10 m square environment.
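To see why the comparison matters: a standard EKF-SLAM measurement update touches the entire covariance matrix,
so its cost grows quadratically with the number of landmarks, while CEKF confines most of the work to a local region
and propagates the rest in compressed form. A generic sketch of the full-state update (placeholder models, not the
authors' simulation code):

    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        # One EKF measurement update over the full SLAM state
        # x = [vehicle pose; landmark 1; ...; landmark n].
        y = z - h(x)                          # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # gain couples ALL states
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P  # O(n^2) work per observation
        return x_new, P_new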