
Proceedings Paper

Reinforcement learning of periodical gaits in locomotion robots
Author(s): Mikhail Svinin; Kazuyaki Yamada; S. Ushio; Kanji Ueda

Paper Abstract

Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system, implementing an instance-based reinforcement learning scheme, is used for sensory-motor control of an eight-legged mobile robot. An important feature of the classifier system is its ability to work with a continuous sensor space. The robot has no prior knowledge of the environment, no internal model of itself, and no goal coordinates. It is only assumed that the robot can acquire stable gaits by learning how to reach a light source. During the learning process the control system self-organizes under reinforcement signals. Reaching the light source defines a global reward. Forward motion receives a local reward, while stepping back and falling down receive a local punishment. Feasibility of the proposed self-organized system is tested in simulation and experiment. The control actions are specified at the leg level. It is shown that, as learning progresses, the number of action rules in the classifier system stabilizes at a certain level corresponding to the acquired gait patterns.
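For illustration only, the following is a minimal sketch of the reward assignment the abstract describes: a global reward for reaching the light source, a local reward for forward motion, and a local punishment for stepping back or falling down. The function name, signature, and numeric values are assumptions for the sketch, not taken from the paper.

    # Hypothetical sketch of the reinforcement signals described in the abstract.
    # All names and values are illustrative, not the authors' implementation.
    def compute_reward(reached_light: bool,
                       forward_displacement: float,
                       fell_down: bool) -> float:
        """Combine the global and local reinforcement signals."""
        GLOBAL_REWARD = 10.0      # reaching the light source (global reward)
        LOCAL_REWARD = 1.0        # forward motion (local reward)
        LOCAL_PUNISHMENT = -1.0   # stepping back or falling down (local punishment)

        reward = 0.0
        if reached_light:
            reward += GLOBAL_REWARD
        if fell_down or forward_displacement < 0.0:
            reward += LOCAL_PUNISHMENT
        elif forward_displacement > 0.0:
            reward += LOCAL_REWARD
        return reward

Such a signal would be fed back to the classifier system after each control step; the relative magnitudes of the global and local terms are a design choice not specified in the abstract.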

Paper Details

Date Published: 26 August 1999
PDF: 11 pages
Proc. SPIE 3839, Sensor Fusion and Decentralized Control in Robotic Systems II, (26 August 1999); doi: 10.1117/12.360338
Author Affiliations:
Mikhail Svinin, Kobe Univ. (Japan)
Kazuyaki Yamada, Kobe Univ. (Japan)
S. Ushio, Kobe Univ. (Japan)
Kanji Ueda, Kobe Univ. (Japan)


Published in SPIE Proceedings Vol. 3839:
Sensor Fusion and Decentralized Control in Robotic Systems II
Gerard T. McKee; Paul S. Schenker, Editor(s)
