




Remote Sensing

Visual-cognition challenges for the operation of unmanned ground vehicles

The psychology of remote-operator environmental awareness is an important area of inquiry in which recent advances enable improved deployment of robotic vehicles.
17 July 2008, SPIE Newsroom. DOI: 10.1117/2.1200807.1215

The use of unmanned ground vehicles (UGVs) allows for remote sensing and action in hazardous environments at a safe distance. Proponents of UGV deployment for both military and civilian applications must overcome technical challenges faced in realizing their full potential. The development of operationally effective UGVs remains demanding due to the complexity of autonomous ground navigation and the cognitive requirements on human operators. These difficulties appear across a variety of UGV applications, including the use of large autonomous vehicles for reconnaissance or convoy operations and the deployment of small remotely operated vehicles for explosives disposal or urban search-and-rescue tasks.1

Because UGV operation invariably requires human guidance, understanding the human factors is critical. Operator tasks for UGV employment involve visual processing of remote imagery for cognition in both local and global spatial contexts. UGV operation via remote-camera imaging is analogous to viewing the world through a soda straw.1 Humans experience cognitive difficulties processing the visual information received from optical sensors due to limited fields of view, impoverished context, lack of depth, extraction of global-space information from limited local-space visualizations, and the need for cognitive integration of multiple viewpoints. Research at New Mexico State University (NMSU) focusing on the UGV human-factors domain concentrates on target detection, spatial cognition, and view integration under a variety of experimentally manipulated visual-display conditions (see Figure 1).

Figure 1. Teleoperated miniature-camera-equipped vehicle searching for victims in a simulated disaster area (left-hand panel) and a corresponding aerial view (right-hand panel).

One human-factor area of concern in this context is the lack of awareness operators have of the local space surrounding their vehicles. This may result in a number of navigational mishaps including collisions with obstacles, losing vehicles in unseen voids, and disorientation.2 We investigated different means of providing this awareness using an obstacle-navigation task.3 We found that providing operators with either an appropriately wide field of view (FOV) or a third-person camera perspective enabled operators to pilot vehicles through obstacle courses faster and with greater comfort than when provided with a narrower FOV and a first-person camera angle. However, providing a wider FOV resulted in better time performance and feelings of comfort than introducing a third-person camera perspective. The benefit of providing both means of improving local spatial awareness is somewhat additive.
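The geometric reason a narrow FOV limits local awareness can be illustrated with a short calculation: the width of ground visible at a given distance ahead of the vehicle grows with the tangent of half the horizontal FOV. A minimal sketch; the specific angles below are illustrative choices, not values from the study:

```python
import math

def visible_width(distance_m: float, fov_deg: float) -> float:
    """Width of the ground strip visible at a given distance ahead,
    for a camera with the given horizontal field of view."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# At 2 m ahead of the vehicle, a narrow 40-degree FOV sees a strip
# under 1.5 m wide, while a 100-degree FOV sees almost 5 m:
narrow = visible_width(2.0, 40.0)   # ~1.46 m
wide = visible_width(2.0, 100.0)    # ~4.77 m
print(f"narrow: {narrow:.2f} m, wide: {wide:.2f} m")
```

Obstacles just outside the narrow strip remain invisible to the operator until the vehicle turns, which is consistent with the collision and disorientation mishaps described above.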

Supporting operators' ability to use video information captured by a UGV's cameras under low-bandwidth communication conditions is another area of concern. We have investigated the degree to which spatial and temporal resolution need to be maintained to support an operator's basic perception.4 We found that while maintaining spatial resolution is critical to detect objects, maintaining temporal resolution can be of assistance when spatial resolution is poor. We also found that maintaining either spatial or temporal resolution is sufficient for supporting tasks that require only low-level spatial awareness. Efforts are under way to investigate resolution requirements for tasks requiring motion and higher-level spatial awareness.
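The trade-off between spatial and temporal resolution under a fixed link budget can be sketched as a simple bandwidth calculation. The link rate and the bits-per-pixel figure below are illustrative assumptions (real systems depend heavily on the codec), not parameters from the study:

```python
def achievable_fps(bandwidth_bps: float, width: int, height: int,
                   bits_per_pixel: float) -> float:
    """Frame rate sustainable for video of the given spatial resolution
    within a fixed link bandwidth (no codec modeling)."""
    bits_per_frame = width * height * bits_per_pixel
    return bandwidth_bps / bits_per_frame

# A hypothetical 2 Mbit/s link at an assumed 1 bit/pixel after compression:
link_bps = 2_000_000
full = achievable_fps(link_bps, 640, 480, 1.0)      # ~6.5 fps
quarter = achievable_fps(link_bps, 320, 240, 1.0)   # ~26 fps
print(f"full resolution: {full:.1f} fps, quarter resolution: {quarter:.1f} fps")
```

The same budget buys either sharper frames or smoother motion, which is why the finding that either dimension alone can support low-level spatial awareness matters for system design.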

Figure 2. The speed and accuracy of identifying the designated object in the ground view (e.g., tank with red dot in left image) in a satellite image map (right panel) depends on the environment. Features of interest include the target object's color and shape uniqueness, which are not very helpful in this particular image pair. Here the object in question is viewpoint invariant, i.e., the cylinder is easily recognized as a circle from above and also part of a gestalt grouping of four similar objects, which facilitates identification.

Integrating the visual information from ground-level cameras with that of global-space views5 such as aerial imagery or map displays is another domain of human factors our research has addressed. This task, although key to many UGV operations, can be cognitively challenging for human operators. When manipulating a vehicle remotely, the operator must maintain awareness not only of the vehicle's position and orientation, but also of the identification and location of various environmental objects of interest with regard to mission objectives. This requires the integration of vehicle-camera views with top-down-map or aerial views providing an overview of the global space surrounding the vehicle. An operator's speed and accuracy in the integration of ground and aerial images depends on the nature of the environment. The similarity and complexity of shapes and colors, the presence of shadows, and viewpoint invariance of the environmental objects themselves all have an effect on an operator's ability to quickly and accurately make judgments6 (see Figure 2). Understanding these factors is important for designing systems, maps, image requirements, and displays that facilitate view integration.
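Part of this view-integration burden can be carried by the display itself: a target seen in the ground view can be projected onto the map if the vehicle's pose and the target's estimated bearing and range are known. A minimal sketch assuming a flat world and an accurate pose estimate; the function and its conventions are our own illustration, not a system described in the article:

```python
import math

def target_map_position(vehicle_x: float, vehicle_y: float,
                        heading_deg: float, bearing_deg: float,
                        range_m: float) -> tuple:
    """Project a target observed at (bearing, range) from the vehicle's
    camera into global map coordinates. Heading and bearing are measured
    clockwise from map north; north is +y, east is +x."""
    angle = math.radians(heading_deg + bearing_deg)
    return (vehicle_x + range_m * math.sin(angle),
            vehicle_y + range_m * math.cos(angle))

# Vehicle at (10, 20) heading east (90 degrees); target dead ahead at 5 m
# lands 5 m to the east on the map:
x, y = target_map_position(10.0, 20.0, 90.0, 0.0, 5.0)
print(f"target on map: ({x:.1f}, {y:.1f})")
```

Overlaying such projected markers on the aerial view reduces the operator's need to match shapes and colors between the two views by hand.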

In the future, a greater number, type, and complexity of UGVs will likely be deployed across a variety of tasks in which they offer the main advantage of action at a safe distance. Understanding the human-factor challenges associated with visual processing7 involved in these tasks will lead to better designs and enhanced capabilities.

This research was partially performed under contract number DAAD19-01-2-0009 of the US Army Research Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies or positions, either expressed or implied, of the US Army Research Laboratory or the US Government.

Roger Chadwick
Department of Psychology
Human–Robotic-Interaction Laboratory
New Mexico State University
Las Cruces, NM
Engineering Analysis Department
Caelum Research
Las Cruces, NM

Roger Chadwick is a research scientist at NMSU's Department of Psychology in the Human–Robotic-Interaction Laboratory. He was previously a project engineer at NASA's White Sands complex. He received a BSEE from the University of Maryland in 1981 and an MA in psychology from NMSU in 2005. He is currently completing his PhD dissertation at NMSU.

Skye Pazuchanics
Department of Psychology
New Mexico State University
Las Cruces, NM
Douglas Gillan
Department of Psychology
North Carolina State University
Raleigh, NC 

Douglas Gillan is currently head of the Department of Psychology at North Carolina State University.