
Proceedings Paper

Computationally efficient target classification in multispectral image data with Deep Neural Networks
Author(s): Lukas Cavigelli; Dominic Bernath; Michele Magno; Luca Benini

Paper Abstract

Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data remain expensive, and this setup precludes preemptive responses to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected.

Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort.

To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system, or to achieve similar accuracy with 3x less computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for rarely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
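The fusion approach described above can be illustrated with a small sketch: a standard ConvNet accepts multispectral input simply by widening the channel dimension of its first convolutional layer, so the 3 RGB channels and the 25 VIS-NIR channels are concatenated into a single 28-channel input. The naive convolution below and all layer/kernel sizes are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D convolution over a multichannel image.

    x: (C_in, H, W) input, w: (C_out, C_in, kH, kW) filters
    returns: (C_out, H - kH + 1, W - kW + 1) feature maps
    """
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(h - kh + 1):
            for j in range(wd - kw + 1):
                # dot product of the filter with the local patch
                # across all input channels at once
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

rng = np.random.default_rng(0)
rgb = rng.random((3, 16, 16))       # 3-channel RGB frame
vis_nir = rng.random((25, 16, 16))  # 25-channel VIS-NIR snapshot frame

# Fusion by channel concatenation: 3 + 25 = 28 input channels
fused = np.concatenate([rgb, vis_nir], axis=0)

# One conv layer mapping 28 channels to 8 per-pixel class scores
# (8 classes, as in the dataset described in the abstract)
weights = rng.random((8, 28, 3, 3)) * 0.01
scores = conv2d(fused, weights)
print(scores.shape)  # (8, 14, 14): a class-score map per pixel
```

In a real network, several such layers with learned weights would precede the per-pixel classifier; the key point is that only the first layer's channel count changes when spectral channels are added.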

Paper Details

Date Published: 30 November 2016
PDF: 12 pages
Proc. SPIE 9997, Target and Background Signatures II, 99970L (30 November 2016); doi: 10.1117/12.2241383
Author Affiliations:
Lukas Cavigelli, ETH Zürich (Switzerland)
Dominic Bernath, ETH Zürich (Switzerland)
Michele Magno, ETH Zürich (Switzerland) and Univ. of Bologna (Italy)
Luca Benini, ETH Zürich (Switzerland) and Univ. of Bologna (Italy)


Published in SPIE Proceedings Vol. 9997:
Target and Background Signatures II
Karin U. Stein; Ric H. M. A. Schleijpen, Editor(s)

© SPIE.