
Proceedings Paper

Decentralized reinforcement-learning control and emergence of motion patterns
Author(s): Mikhail Svinin; Kazuyaki Yamada; Kazuhiro Okhura; Kanji Ueda

Paper Abstract

In this paper we propose a system for studying the emergence of motion patterns in autonomous mobile robotic systems. The system implements instance-based reinforcement-learning control. Three spaces are of importance in the formulation of the control scheme: the work space, the sensor space, and the action space. An important feature of our system is that all three spaces are assumed to be continuous. The core part of the system is a classifier system. Based on an analysis of the sensory state space, the control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly coupled through the perceived environment information and therefore constitute a dynamic environment with respect to each other. The proposed control scheme is tested in simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior, such as collision avoidance, wall following, and light seeking, can emerge from the local controllers.
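To make the idea of instance-based control over continuous spaces concrete, the following is a minimal sketch, not the authors' implementation: a toy controller that stores (sensor state, action, utility) instances, acts by retrieving the nearest stored instance for a new continuous sensor reading, and reinforces the matched instance from a scalar reward. All class and parameter names (`InstanceBasedController`, `epsilon`, `alpha`) are hypothetical illustrations, not from the paper.

```python
import math
import random

class InstanceBasedController:
    """Toy instance-based controller: maps continuous sensor states
    to actions via the nearest stored instance (a minimal classifier list).
    Illustrative only; names and update rule are assumptions."""

    def __init__(self, epsilon=0.1, alpha=0.5):
        self.instances = []     # list of [sensor_state, action, utility]
        self.epsilon = epsilon  # exploration probability
        self.alpha = alpha      # utility learning rate

    def act(self, sensor_state, action_sampler):
        # Explore (create a new instance) when memory is empty
        # or with probability epsilon.
        if not self.instances or random.random() < self.epsilon:
            action = action_sampler()
            self.instances.append([list(sensor_state), action, 0.0])
            return len(self.instances) - 1, action
        # Otherwise exploit the instance nearest in the sensor space.
        idx = min(range(len(self.instances)),
                  key=lambda i: math.dist(self.instances[i][0], sensor_state))
        return idx, self.instances[idx][1]

    def reinforce(self, idx, reward):
        # Move the matched instance's utility toward the observed reward.
        inst = self.instances[idx]
        inst[2] += self.alpha * (reward - inst[2])
```

In a decentralized setting of the kind the abstract describes, one such controller could run per local behavior, each reading its own slice of the robot's sensors; the controllers interact only through the environment they jointly perceive.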

Paper Details

Date Published: 9 October 1998
PDF: 12 pages
Proc. SPIE 3523, Sensor Fusion and Decentralized Control in Robotic Systems, (9 October 1998); doi: 10.1117/12.327004
Author Affiliations:
Mikhail Svinin, Kobe Univ. (Japan)
Kazuyaki Yamada, Kobe Univ. (Japan)
Kazuhiro Okhura, Kobe Univ. (Japan)
Kanji Ueda, Kobe Univ. (Japan)

Published in SPIE Proceedings Vol. 3523:
Sensor Fusion and Decentralized Control in Robotic Systems
Editor(s): Paul S. Schenker; Gerard T. McKee

© SPIE.