
Optical Engineering • Open Access

Re-establish the time-order across sensors of different modalities
Author(s): Ming-Kai Hsu; Ting N. Lee; Harold Szu

Paper Abstract

Modern cameras can crop passengers' faces into bounding boxes in 0.04 s per frame in parallel, but without time stamps. Unfortunately, this produces randomly ordered storage with no tracking capability, and one can no longer meet the 5 W's challenge: "who speaks what, where, and when." We develop a time-order reconstruction methodology that sorts the boxes as follows. (i) Morphological image preprocessing overcomes facial changes by exploiting the peripheral invariance of the human visual system when it focuses on a maximally overlapping central region. (ii) Replacing the desired output of the Wiener matched filter with an averaged but blurred long exposure, one can select the best-matched sharp short exposures, called the anchor faces β. (iii) Time-order neighborhood chaining is performed by an iterative self-affirmation logic that demands a mutually agreed-upon minimum distance: whether or not the two nearest neighbors of β, namely face A and face C, also consider β to be among their own two nearest neighbors. The reconstruction procedure mathematically amounts to a product of two triple-correlation functions sharing an intermediate state. We have thus demonstrated that the recovered time order helps associate a video submanifold with the acoustic manifold, solving the 5 W's challenge.
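The mutual-agreement test in step (iii) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature vectors, the Euclidean metric, and the function name `mutual_nn_chain` are all assumptions made for the example; the paper itself builds on Wiener-matched anchor faces and triple-correlation functions.

```python
import numpy as np

def mutual_nn_chain(features, anchor):
    """Self-affirmation test for an anchor face beta (index `anchor`):
    find beta's two nearest neighbors A and C, then keep only those
    neighbors that also rank beta among their own two nearest neighbors."""
    # Distances from the anchor face to every other face.
    d = np.linalg.norm(features - features[anchor], axis=1)
    d[anchor] = np.inf                     # exclude self-match
    a, c = np.argsort(d)[:2]               # beta's two nearest neighbors

    confirmed = []
    for n in (a, c):
        dn = np.linalg.norm(features - features[n], axis=1)
        dn[n] = np.inf
        # Mutual agreement: does n also pick beta as one of its two NNs?
        if anchor in np.argsort(dn)[:2]:
            confirmed.append(int(n))
    return confirmed
```

Confirmed neighbors can then be chained to the anchor and the test repeated from each new link, iteratively extending the time-ordered sequence.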

Paper Details

Date Published: 1 April 2011
PDF: 16 pages
Opt. Eng. 50(4) 047002 doi: 10.1117/1.3562322
Published in: Optical Engineering Volume 50, Issue 4
Ming-Kai Hsu, The George Washington Univ. (United States)
Ting N. Lee, The George Washington Univ. (United States)
Harold Szu, U.S. Army Night Vision & Electronic Sensors Directorate (United States)

© SPIE.