
Proceedings Paper

Virtual multi-modal object detection and classification with deep convolutional neural networks
Author(s): Nikolaos Mitsakos; Manos Papadakis

Paper Abstract

In this paper we demonstrate how post-processing gray-scale images with algorithms that have a singularity-enhancement effect can assume the role of auxiliary modalities, as in the case where an intelligent system fuses information from multiple physical modalities. We show that, as in multimodal AI fusion, “virtual” multimodal inputs can improve the performance of object detection. We design, implement, and test a novel Convolutional Neural Network architecture, based on the Faster R-CNN network, for multiclass object detection and classification. Our architecture combines deep feature representations of the input images, generated by networks trained independently on physical and virtual imaging modalities. Using an analog of the ROC curve, the Average Recall over Precision curve, we show that fusing certain virtual modality inputs, capable of enhancing singularities and neutralizing illumination, improves recognition accuracy.
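To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' code: two independently defined convolutional backbones process the physical gray-scale image and a hypothetical "virtual modality" (e.g. a singularity-enhanced version of the same image), and their feature maps are concatenated before a shared head. All layer sizes, class names, and the simple classification head are illustrative assumptions; the paper's actual architecture builds on Faster R-CNN for detection.

```python
# Sketch of virtual-modality feature fusion (illustrative only).
import torch
import torch.nn as nn


def small_backbone(in_channels=1, out_channels=64):
    """Stand-in for an independently trained convolutional feature extractor."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class VirtualModalityFusion(nn.Module):
    """Fuses deep features from a physical and a virtual imaging modality."""

    def __init__(self, num_classes=2, feat_channels=64):
        super().__init__()
        self.physical_backbone = small_backbone(1, feat_channels)
        self.virtual_backbone = small_backbone(1, feat_channels)
        # Fusion: channel-wise concatenation followed by a simple head
        # (the paper attaches a Faster R-CNN detection head instead).
        self.head = nn.Sequential(
            nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_channels, num_classes),
        )

    def forward(self, physical_img, virtual_img):
        f_phys = self.physical_backbone(physical_img)
        f_virt = self.virtual_backbone(virtual_img)
        fused = torch.cat([f_phys, f_virt], dim=1)  # concatenate along channels
        return self.head(fused)


if __name__ == "__main__":
    model = VirtualModalityFusion(num_classes=3)
    gray = torch.randn(4, 1, 128, 128)       # physical gray-scale input
    enhanced = torch.randn(4, 1, 128, 128)   # virtual (singularity-enhanced) input
    logits = model(gray, enhanced)
    print(logits.shape)  # torch.Size([4, 3])
```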

Paper Details

Date Published: 9 September 2019
PDF: 21 pages
Proc. SPIE 11138, Wavelets and Sparsity XVIII, 1113805 (9 September 2019); doi: 10.1117/12.2529233
Author Affiliations:
Nikolaos Mitsakos, Univ. of Houston (United States)
Manos Papadakis, Univ. of Houston (United States)


Published in SPIE Proceedings Vol. 11138:
Wavelets and Sparsity XVIII
Dimitri Van De Ville; Manos Papadakis; Yue M. Lu, Editor(s)

© SPIE.