
Proceedings Paper

Intertwining Of Teleoperation And Computer Vision
Author(s): B. C. Bloom; G. S. Duane; M. A. Epstein; M. Magee; D. W. Mathis; M. J. Nathan; W. J. Wolfe

Paper Abstract

In the rapid pursuit of automation, it is sometimes overlooked that an elaborate human-machine interplay is still necessary, even though a fully automated system, by definition, would not require a human interface. In the future, real-time sensing, intelligent processing, and dextrous manipulation will become more viable, but until then it is necessary to rely on humans for many critical processes. It is not obvious, however, how automated subsystems could account for human intervention, especially if a philosophy of "pure" automation dominates the design. Teleoperation, by contrast, emphasizes the creation of hardware pathways (e.g., hand-controllers, exoskeletons) to quickly communicate low-level control data to various mechanisms, while providing sensory feedback in a format suitable for human consumption (e.g., stereo displays, force reflection), leaving the "intelligence" to the human. These differences in design strategy, both hardware and software, make it difficult to tie automation and teleoperation together while allowing for graceful transitions at the appropriate times. In no area of artificial intelligence is this problem more evident than in computer vision. Teleoperation typically uses video displays (monochrome/color, monoscopic/stereo) with contrast enhancement and gain control, without any digital processing of the images. However, increases in system performance such as automatic collision avoidance, path finding, and object recognition depend on computer vision techniques. Fundamentally, computer vision relies on digital processing of images to extract low-level primitives, such as boundaries and regions, that are used in higher-level processes for object recognition and position estimation. Real-time processing of complex environments is currently unattainable, but there are many aspects of the processing that are useful for situation assessment, provided it is understood that the human can assist in the more time-consuming steps.
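
As an illustrative aside (not part of the original paper), the low-level-to-high-level pipeline the abstract describes, digitally extracting primitives such as boundaries and regions that later feed object recognition, might be sketched today roughly as follows. This is a minimal sketch using OpenCV in Python; the file name "scene.png", the Canny thresholds, and the simple shape summaries are assumptions for illustration, not details taken from the paper.

    # Illustrative sketch only (not from the paper): extract low-level primitives
    # (edges and region boundaries) of the kind that higher-level recognition
    # processes could build on. Assumes OpenCV (cv2) and a hypothetical input
    # image "scene.png".
    import cv2

    def extract_primitives(path="scene.png"):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(f"could not read image: {path}")

        # Low-level step: edge detection yields candidate object boundaries.
        # The thresholds (50, 150) are illustrative, not tuned values.
        edges = cv2.Canny(img, 50, 150)

        # Group edge pixels into region boundaries (contours).
        # (OpenCV >= 4 return signature: contours, hierarchy.)
        contours, _hierarchy = cv2.findContours(
            edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        # Higher-level processes (recognition, position estimation) would
        # consume these primitives; here we only report simple shape summaries.
        summaries = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            summaries.append({"bbox": (x, y, w, h), "area": cv2.contourArea(c)})
        return summaries

    if __name__ == "__main__":
        for s in extract_primitives():
            print(s)
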

Paper Details

Date Published: 1 January 1987
PDF: 8 pages
Proc. SPIE 0852, Mobile Robots II, (1 January 1987); doi: 10.1117/12.968265
Author Affiliations:
B. C. Bloom, Martin Marietta Astronautics Group (United States)
G. S. Duane, Ball Aerospace (United States)
M. A. Epstein, Martin Marietta Astronautics Group (United States)
M. Magee, University of Wyoming (United States)
D. W. Mathis, Martin Marietta Astronautics Group (United States)
M. J. Nathan, University of Colorado (United States)
W. J. Wolfe, Martin Marietta Astronautics Group (United States)


Published in SPIE Proceedings Vol. 0852:
Mobile Robots II
Wendell H. Chun; William J. Wolfe, Editors

© SPIE.