
Proceedings Paper

3D+T motion analysis: motion sensor network versus multiple video cameras
Author(s): Jean-Pierre Leduc

Paper Abstract

This paper addresses the problem of motion analysis performed on digital data captured by a network of motion sensors scattered over a 3D+T field of interest. The digital signals captured in the field are transmitted through a telecommunication network to be processed in a remote monitoring center by an artificially intelligent system based on dual control involving both deep-learning and model-based algorithms. Motion analysis proceeds through consecutive steps: detection, motion-oriented classification, parameter estimation, tracking, and trajectory building. Eventually, the system can detect and predict abnormalities, incidents, and accidents. In all current applications, it is commonly thought that motion analysis must be performed on data streams captured by multiple video cameras distributed in a network, following a so-called “camera-everywhere” approach. A basic observation of how animal biology proceeds shows that the information analyzed by the cortex usually originates from one global eye and from a network of sensors non-uniformly distributed over the entire skin. Telecommunication is performed through a network of nerves that acts as a bundle of telephone lines connecting each sensor located in the eye and in the skin to specific, dedicated areas of the cortex. This paper shows the relevance of this natural way to perform full motion analysis from a network of motion sensors distributed over the field of interest where the 3D+T motion analysis is performed. Using only video cameras in a network involves two main drawbacks. First, this setting requires transmitting over a telecommunication network an amount of information of which more than ninety-nine percent is useless for motion analysis. Second, additional time- and resource-consuming software algorithms must be implemented to extract the motion information from the video signal and build the trajectories.
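The staged pipeline named above (detection, motion-oriented classification, parameter estimation, tracking, trajectory building) can be sketched as follows. This is a minimal illustrative sketch only; the class names, thresholds, and sample data are assumptions and do not reflect the paper's actual algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    sensor_id: int
    position: tuple   # (x, y, z) location of the sensor in the field
    t: float          # timestamp
    magnitude: float  # motion intensity reported by the sensor

@dataclass
class Track:
    trajectory: list = field(default_factory=list)  # [(t, position), ...]

def detect(readings, threshold=0.5):
    """Detection: keep only readings above an assumed noise floor."""
    return [r for r in readings if r.magnitude > threshold]

def classify(reading):
    """Toy motion-oriented classification by intensity band."""
    return "fast" if reading.magnitude > 2.0 else "slow"

def build_trajectory(readings):
    """Tracking/trajectory building: chain time-ordered detections."""
    track = Track()
    for r in sorted(readings, key=lambda r: r.t):
        track.trajectory.append((r.t, r.position))
    return track

# Illustrative data: one noise reading and two genuine motion events.
readings = [
    SensorReading(0, (0.0, 0.0, 0.0), 0.0, 0.1),  # below threshold, discarded
    SensorReading(1, (1.0, 0.0, 0.0), 1.0, 1.2),
    SensorReading(2, (2.0, 0.1, 0.0), 2.0, 2.5),
]
detections = detect(readings)
labels = [classify(r) for r in detections]
track = build_trajectory(detections)
```

In a sensor-network deployment, only the compact `SensorReading` records would cross the telecommunication network; the remote monitoring center would run the remaining stages.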
Video cameras are in fact useful for both pattern recognition and motion disambiguation, and should therefore be limited in number and located at key spots or on robots moving in the field. Eventually, a central station implements the motion-analysis algorithm. This paper describes the application with a three-layer scheme and compares the two approaches, namely multiple video cameras and motion-sensor networks, in terms of networking and of processing the motion information in the remote monitoring center.
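The claim that well over ninety-nine percent of transmitted video content is useless for motion analysis can be made concrete with a back-of-envelope bandwidth comparison. All figures below are illustrative assumptions (camera resolution, sensor event size and rate), not measurements from the paper.

```python
# One uncompressed video camera: 1920x1080 pixels, 3 bytes/pixel, 30 fps.
video_bps = 1920 * 1080 * 3 * 30 * 8  # bits per second

# One motion sensor: an assumed 32-byte event record (sensor id,
# timestamp, motion parameters) reported 10 times per second.
sensor_bps = 32 * 8 * 10  # bits per second

ratio = video_bps / sensor_bps
print(f"video: {video_bps / 1e6:.0f} Mb/s, sensor: {sensor_bps / 1e3:.2f} kb/s")
print(f"one camera carries roughly {ratio:,.0f}x the traffic of one sensor")
```

Under these assumptions a single raw video stream exceeds a single sensor's event stream by several orders of magnitude, which is the networking advantage the abstract attributes to the sensor-network scheme.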

Paper Details

Date Published: 7 September 2018
PDF: 20 pages
Proc. SPIE 10751, Optics and Photonics for Information Processing XII, 107510X (7 September 2018); doi: 10.1117/12.2321393
Author Affiliations:
Jean-Pierre Leduc, Reliance Core Consulting (United States)


Published in SPIE Proceedings Vol. 10751:
Optics and Photonics for Information Processing XII
Abdul A. S. Awwal; Khan M. Iftekharuddin; Mireya García Vázquez, Editor(s)

© SPIE.