
Proceedings Paper

Motion lecture annotation system to learn Naginata performances
Author(s): Daisuke Kobayashi; Ryota Sakamoto; Yoshihiko Nomura

Paper Abstract

This paper describes a learning assistant system that uses motion capture data and annotations to teach “Naginata-jutsu” (the art of the Japanese halberd). Video annotation tools exist, such as YouTube's, but these video-based tools offer only a single viewing angle. Our approach, based on motion-captured data, allows the performance to be viewed from any angle, and a lecturer can attach annotations to specific parts of the body. We compared the effectiveness of YouTube's annotation tool with that of the proposed system. The experimental results showed that our system elicited more annotations than YouTube's annotation tool.

Paper Details

Date Published: 3 February 2014
PDF: 7 pages
Proc. SPIE 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, 90250F (3 February 2014); doi: 10.1117/12.2041630
Author Affiliations:
Daisuke Kobayashi, Mie Univ. (Japan)
Ryota Sakamoto, Mie Univ. Hospital (Japan)
Yoshihiko Nomura, Mie Univ. (Japan)

Published in SPIE Proceedings Vol. 9025:
Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques
Juha Röning; David Casasent, Editor(s)

© SPIE