
Proceedings Paper

Machine-assisted editing of user-generated content
Author(s): Markus Cremer; Randall Cook

Paper Abstract

In recent years, user-generated content has become ubiquitous and an attractive entertainment source for millions of end-users. At larger events in particular, where many people use their devices to capture the action, a great number of short video clips are shared through video-hosting web services. The objective of this presentation is to describe a way to combine these clips by analyzing them and automatically reconstructing the timeline in which the individual clips were captured. This enables users to easily create a compelling multimedia experience by leveraging multiple clips taken by different people, from different angles, and across different time spans; the user can step into the role of a movie director mastering a multi-camera recording of the event. To achieve this goal, the audio track of each video clip is analyzed, and waveform characteristics are computed at high temporal granularity to facilitate precise time alignment and overlap computation of the user-generated clips. Special care must be taken not only with the robustness of the selected audio features against ambient noise and various distortions, but also with the matching algorithm used to align the clips properly.
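The abstract does not disclose the authors' actual audio features or matching algorithm. As a rough illustration of the general idea of audio-based time alignment, the sketch below computes a simple short-time energy envelope for each clip and estimates the offset of one clip within another by normalized cross-correlation; the feature choice, function names, and parameters are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def energy_envelope(signal, frame_len=256, hop=128):
    # Short-time energy per frame: one simple, cheap "waveform
    # characteristic" (stand-in for whatever features the paper uses).
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([np.sum(signal[i * hop : i * hop + frame_len] ** 2)
                     for i in range(n)])

def estimate_offset(ref, clip, frame_len=256, hop=128):
    # Estimate where `clip` starts inside `ref` (in samples) by
    # normalized cross-correlation of the two energy envelopes.
    a = energy_envelope(ref, frame_len, hop)
    b = energy_envelope(clip, frame_len, hop)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="valid")  # one value per candidate lag
    return int(np.argmax(corr)) * hop

# Synthetic demo: a noisy clip cut out of a longer "event" recording.
rng = np.random.default_rng(0)
event = rng.standard_normal(48000)                  # stand-in for event audio
clip = event[12800:12800 + 16000] + 0.1 * rng.standard_normal(16000)
offset = estimate_offset(event, clip)               # → 12800
```

With the estimated offsets of all clips against a common reference, the clips can be placed on a shared timeline and their overlaps computed. A real system would need features far more robust to ambient noise, encoding artifacts, and clock drift than raw frame energy, which is exactly the robustness concern the abstract raises.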

Paper Details

Date Published: 4 February 2009
PDF: 10 pages
Proc. SPIE 7254, Media Forensics and Security, 725404 (4 February 2009); doi: 10.1117/12.807515
Author Affiliations:
Markus Cremer, Gracenote (United States)
Randall Cook, Gracenote (United States)

Published in SPIE Proceedings Vol. 7254:
Media Forensics and Security
Edward J. Delp III; Jana Dittmann; Nasir D. Memon; Ping Wah Wong, Editor(s)

© SPIE.