
Proceedings Paper

Edge-to-fog computing for color-assisted moving object detection

Paper Abstract

Future Internet-of-Things (IoT) systems will feature ubiquitous and pervasive vision sensors that generate enormous amounts of streaming video. The ability to analyze this big video data in a timely manner is essential to delay-sensitive applications, such as autonomous vehicles and body-worn cameras for police forces. Due to the limited computing power and storage capacity of local devices, the fog computing paradigm has been developed in recent years to process big sensor data closer to the end users, avoiding the transmission delay and huge uplink bandwidth requirements of cloud-based data analysis. In this work, we propose an edge-to-fog computing framework for object detection from surveillance videos. Videos are captured locally at an edge device and sent to fog nodes for color-assisted L1-subspace background modeling. The results are then sent back to the edge device for data fusion and final object detection. Experimental studies demonstrate that the proposed color-assisted background modeling offers more diversity than pure luminance-based background modeling and hence achieves higher object detection accuracy. Meanwhile, the proposed edge-to-fog paradigm leverages the computing resources across multiple platforms.
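To make the color-assisted, per-channel processing and edge-side fusion described in the abstract concrete, the sketch below outlines the workflow in Python. It is an illustrative approximation only: the per-pixel temporal median stands in for the paper's L1-subspace background model, and the frame sizes, detection threshold, and logical-OR fusion rule are assumptions, not the authors' exact method.

```python
# Minimal sketch of color-assisted moving-object detection with per-channel
# background modeling (one channel per fog node) and fusion at the edge.
# NOTE: the temporal median is a simplified stand-in for the L1-subspace
# background model used in the paper; thresholds and the fusion rule are
# illustrative assumptions.
import numpy as np


def channel_background(frames):
    """Background estimate for one color channel: per-pixel temporal median
    over a stack of frames shaped (num_frames, height, width)."""
    return np.median(frames, axis=0)


def detect_moving_pixels(frames, current, threshold=25.0):
    """Foreground mask for one channel: pixels of the current frame that
    deviate from the channel's background estimate by more than `threshold`."""
    background = channel_background(frames.astype(float))
    return np.abs(current.astype(float) - background) > threshold


def fuse_channels(masks):
    """Edge-side fusion: a pixel is declared moving if it is flagged as
    foreground in any color channel (logical-OR fusion, an assumed rule)."""
    return np.logical_or.reduce(masks)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 3-channel video: 30 noisy frames plus a bright block added to
    # the final frame to stand in for a moving object.
    video = rng.integers(0, 50, size=(30, 120, 160, 3)).astype(float)
    current = video[-1].copy()
    current[40:60, 70:100, :] += 150  # simulated moving object

    # In the proposed framework each color channel would be processed on a
    # separate fog node; here the channels run sequentially for illustration.
    masks = [detect_moving_pixels(video[:, :, :, c], current[:, :, c])
             for c in range(3)]
    fused = fuse_channels(masks)
    print("moving pixels detected:", int(fused.sum()))
```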

Paper Details

Date Published: 13 May 2019
PDF: 9 pages
Proc. SPIE 10989, Big Data: Learning, Analytics, and Applications, 1098903 (13 May 2019); doi: 10.1117/12.2516023
Author Affiliations:
Ying Liu, Santa Clara Univ. (United States)
Zachary Bellay, Santa Clara Univ. (United States)
Payton Bradsky, Santa Clara Univ. (United States)
Glen Chandler, Santa Clara Univ. (United States)
Brandon Craig, Santa Clara Univ. (United States)


Published in SPIE Proceedings Vol. 10989:
Big Data: Learning, Analytics, and Applications
Fauzia Ahmad, Editor

© SPIE