
Proceedings Paper

Semantic perception for ground robotics
Author(s): M. Hebert; J. A. Bagnell; M. Bajracharya; K. Daniilidis; L. H. Matthies; L. Mianzo; L. Navarro-Serment; J. Shi; M. Wellfare

Paper Abstract

Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs, providing representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) give an intuitive description of the robot's environment in terms of semantic elements that can be shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.

Paper Details

Date Published: 25 May 2012
PDF: 12 pages
Proc. SPIE 8387, Unmanned Systems Technology XIV, 83870Y (25 May 2012); doi: 10.1117/12.918915
Author Affiliations:
M. Hebert, Carnegie Mellon Univ. (United States)
J. A. Bagnell, Carnegie Mellon Univ. (United States)
M. Bajracharya, Jet Propulsion Lab. (United States)
K. Daniilidis, Univ. of Pennsylvania (United States)
L. H. Matthies, Jet Propulsion Lab. (United States)
L. Mianzo, General Dynamics Robotic Systems (United States)
L. Navarro-Serment, Carnegie Mellon Univ. (United States)
J. Shi, Univ. of Pennsylvania (United States)
M. Wellfare, General Dynamics Robotic Systems (United States)


Published in SPIE Proceedings Vol. 8387:
Unmanned Systems Technology XIV
Robert E. Karlsen; Douglas W. Gage; Charles M. Shoemaker; Grant R. Gerhart, Editor(s)

© SPIE.