
Proceedings Paper

Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
Author(s): Jutta Hild; Wolfgang Krüger; Norbert Heinze; Elisabeth Peinsipp-Byma; Jürgen Beyerer

Paper Abstract

Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance to the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception with those of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
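The two interaction issues named in the abstract can be illustrated with a minimal sketch: deriving an initial object image region from the operator's gaze fixation (for tracker initialization without motion segmentation), and re-initializing from the current gaze point after track loss. All names and the fixed-size box heuristic here are illustrative assumptions, not the authors' actual system design.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes gaze coordinates are given in image pixels and that a
# fixed-size box around the fixation is an acceptable initial region.
from dataclasses import dataclass


@dataclass
class Track:
    """A hypothetical minimal track state: bounding box plus loss flag."""
    x: int
    y: int
    w: int
    h: int
    lost: bool = False


def init_track_from_gaze(gaze_x: int, gaze_y: int, box_size: int = 48) -> Track:
    """Derive an initial object region from the operator's gaze point.

    Without motion segmentation, the operator must supply the object's
    image region; here a fixed-size box is simply centred on the fixation.
    """
    half = box_size // 2
    return Track(gaze_x - half, gaze_y - half, box_size, box_size)


def relaunch_if_lost(track: Track, gaze_x: int, gaze_y: int) -> Track:
    """On track loss, relaunch the tracker from the current gaze position;
    otherwise keep the existing track unchanged."""
    if track.lost:
        return init_track_from_gaze(gaze_x, gaze_y)
    return track
```

In a real system, the gaze point would come from an eye tracker and the box would seed a tracking algorithm; the relaunch policy (immediate re-init at gaze vs. waiting for an explicit operator trigger) is exactly the kind of design question the paper investigates.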

Paper Details

Date Published: 13 May 2016
PDF: 9 pages
Proc. SPIE 9841, Geospatial Informatics, Fusion, and Motion Video Analytics VI, 98410K (13 May 2016); doi: 10.1117/12.2223726
Author Affiliations:
Jutta Hild, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
Wolfgang Krüger, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
Norbert Heinze, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
Elisabeth Peinsipp-Byma, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
Jürgen Beyerer, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany)
Karlsruher Institut für Technologie (Germany)


Published in SPIE Proceedings Vol. 9841:
Geospatial Informatics, Fusion, and Motion Video Analytics VI
Matthew F. Pellechia; Kannappan Palaniappan; Peter J. Doucette; Shiloh L. Dockstader; Gunasekaran Seetharaman; Paul B. Deignan, Editor(s)

© SPIE.