
Proceedings Paper

Learning and detecting coordinated multi-entity activities from persistent surveillance
Author(s): Georgiy Levchuk; Matt Jacobsen; Caitlin Furjanic; Aaron Bobick

Paper Abstract

In this paper, we present our enhanced model of multi-entity activity recognition, which operates on person and vehicle tracks, converts them into motion and interaction events, and represents activities via multi-attributed role networks encoding the spatial, temporal, contextual, and semantic characteristics of coordinated activities. The model is flexible enough to capture variations in behavior and is used both for learning repetitive activity patterns in a semi-supervised manner and for detecting activities in data with high ambiguity and a high ratio of irrelevant to relevant tracks and events. We demonstrate the model on activities captured in CLIF persistent wide-area motion data collections.
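The abstract describes a pipeline that first turns raw tracks into discrete motion and interaction events before any activity-level reasoning. The sketch below (not from the paper) illustrates one plausible version of that first stage, assuming simple (t, x, y) track samples; the Track structure, event names, and thresholds are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: deriving coarse motion and interaction events from
    # timestamped (t, x, y) track samples. All names and thresholds are assumed.
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Track:
        entity_id: str   # person or vehicle identifier
        points: list     # list of (t, x, y) samples, ordered by time

    def motion_events(track, speed_thresh=0.5):
        """Label each consecutive sample pair as a 'move' or 'stop' event."""
        events = []
        for (t0, x0, y0), (t1, x1, y1) in zip(track.points, track.points[1:]):
            speed = hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-6)
            kind = "move" if speed > speed_thresh else "stop"
            events.append({"entity": track.entity_id, "t": t1, "event": kind})
        return events

    def meeting_events(track_a, track_b, dist_thresh=5.0):
        """Emit a 'meet' interaction event when two entities are co-located."""
        events = []
        for (ta, xa, ya), (tb, xb, yb) in zip(track_a.points, track_b.points):
            if abs(ta - tb) < 1.0 and hypot(xa - xb, ya - yb) < dist_thresh:
                events.append({"entities": (track_a.entity_id, track_b.entity_id),
                               "t": ta, "event": "meet"})
        return events

    if __name__ == "__main__":
        person = Track("person_1", [(0, 0.0, 0.0), (1, 3.0, 0.0), (2, 3.1, 0.0)])
        vehicle = Track("vehicle_7", [(0, 2.0, 0.0), (1, 3.0, 0.5), (2, 3.0, 0.5)])
        print(motion_events(person))
        print(meeting_events(person, vehicle))

In the paper's framework, event streams of this kind would then be matched against multi-attributed role networks; that later stage is not sketched here.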

Paper Details

Date Published: 23 May 2013
PDF: 12 pages
Proc. SPIE 8745, Signal Processing, Sensor Fusion, and Target Recognition XXII, 87451L (23 May 2013); doi: 10.1117/12.2014875
Author Affiliations:
Georgiy Levchuk, Aptima, Inc. (United States)
Matt Jacobsen, Aptima, Inc. (United States)
Caitlin Furjanic, Aptima, Inc. (United States)
Aaron Bobick, Aptima, Inc. (United States)


Published in SPIE Proceedings Vol. 8745:
Signal Processing, Sensor Fusion, and Target Recognition XXII
Ivan Kadar, Editor(s)
