
Proceedings Paper

Dimensions of complexity in learning from interactive instruction
Author(s): Scott B. Huffman; John E. Laird

Paper Abstract

Robotic systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face during its lifetime. However, pre-programming knowledge for all possible tasks that may be needed is extremely difficult. A powerful alternative is the notion of an instructible agent: one able to receive task-level instructions and advice from a human advisor. Such an agent must do more than simply memorize the instructions it is given (that would amount to programming). Rather, after mapping instructions into task constructs that it can reason with, it must determine each instruction's proper scope of applicability. In this paper, we examine the characteristics of instruction, and of agents, that affect learning from instruction. We find that in addition to a myriad of linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the prior domain knowledge of the agent affect what can be learned.

Paper Details

Date Published: 1 November 1992
PDF: 12 pages
Proc. SPIE 1829, Cooperative Intelligent Robotics in Space III, (1 November 1992); doi: 10.1117/12.131692
Author Affiliations:
Scott B. Huffman, Univ. of Michigan (United States)
John E. Laird, Univ. of Michigan (United States)

Published in SPIE Proceedings Vol. 1829:
Cooperative Intelligent Robotics in Space III
Jon D. Erickson, Editor(s)

© SPIE