
Proceedings Paper

Trajectory recognition using state transition learning
Author(s): Tadashi Ae; Keiichi Sakai; Keiji Otaka; Nguyen Duy Thien Chuong; Yuuki Obara

Paper Abstract

The system receives a pattern sequence, i.e., a time series of consecutive patterns, as its input. A set of input sequences is given as a training set, where a category is attached to each sequence, and supervised learning is applied. First, we introduce a state transition model, AST (Abstract State Transition), in which information about the speed of moving objects is added to a conventional state transition model. Next, we extend it to a model incorporating reinforcement learning, since this is better suited to learning a sequence from start to goal. Last, we extend the state model with a kind of pushdown tape that represents knowledge of behavior, which we call the Pushdown Markov Model. The learning procedure is similar to learning in an MDP (Markov Decision Process) and uses DP (Dynamic Programming) matching. As a result, we demonstrate reasonable learning-based recognition of trajectories of human behavior.
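The abstract's DP (Dynamic Programming) matching step can be illustrated in isolation. The sketch below is not the paper's AST or Pushdown Markov Model (those details are not given here); it is a minimal, hypothetical example of classifying a trajectory against labelled template sequences with a DTW-style DP matching cost, which is the matching primitive the abstract names. All function names and data are illustrative assumptions.

```python
def dp_match(seq_a, seq_b):
    """DTW-style DP matching cost between two trajectories,
    each a list of (x, y) points. Lower cost = better match."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ax, ay = seq_a[i - 1]
            bx, by = seq_b[j - 1]
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5  # local point distance
            cost[i][j] = d + min(cost[i - 1][j],           # skip a point of seq_a
                                 cost[i][j - 1],           # skip a point of seq_b
                                 cost[i - 1][j - 1])       # match the two points
    return cost[n][m]

def classify(trajectory, training_set):
    """Nearest-template classification: training_set is a list of
    (template_trajectory, category) pairs; return the category of
    the template with the lowest DP matching cost."""
    return min(training_set, key=lambda item: dp_match(trajectory, item[0]))[1]
```

For example, with a "horizontal" and a "vertical" template, a noisy horizontal trajectory is matched to the horizontal category because its cumulative DP cost against that template is smallest.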

Paper Details

Date Published: 28 May 2003
PDF: 9 pages
Proc. SPIE 5014, Image Processing: Algorithms and Systems II, (28 May 2003); doi: 10.1117/12.477724
Author Affiliations:
Tadashi Ae, Hiroshima Univ. (Japan)
Keiichi Sakai, Hiroshima Univ. (Japan)
Keiji Otaka, Hiroshima Univ. (Japan)
Nguyen Duy Thien Chuong, Hiroshima Univ. (Japan)
Yuuki Obara, Hiroshima Univ. (Japan)

Published in SPIE Proceedings Vol. 5014:
Image Processing: Algorithms and Systems II
Edward R. Dougherty; Jaakko T. Astola; Karen O. Egiazarian, Editor(s)

© SPIE.