
Object detection system based on multimodel saliency maps
Author(s): Ya'nan Guo; Chongfan Luo; Yide Ma

Paper Abstract

Detection of visually salient image regions is widely used in computer vision and computer graphics, for tasks such as object detection, adaptive compression, and object recognition. However, any single model has limitations on certain classes of images, so we establish an object detection method based on multimodel saliency maps that intelligently combines the merits of several individual saliency detection models to achieve promising results. The method can be roughly divided into three steps: in the first step, a decision-making system evaluates the saliency maps obtained by seven competitive methods and selects only the three most valuable ones; in the second step, a heterogeneous PCNN algorithm extracts three prime foregrounds, and a self-designed nonlinear fusion method merges these saliency maps; finally, an adaptive improved simplified PCNN (SPCNN) model detects the object. The proposed method constitutes an object detection system for different occasions that requires no training and is simple and highly efficient. The proposed saliency fusion technique performs well over a broad range of images and broadens applicability by fusing different individual saliency models, making the proposed system a strong model. Moreover, the proposed adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to the image information. We extensively evaluate our algorithm on a classical salient object detection database, and the experimental results demonstrate that the aggregation of saliency maps outperforms the best single saliency model in all cases, yielding the highest precision of 89.90%, the best recall rate of 98.20%, the greatest F-measure of 91.20%, and the lowest mean absolute error of 0.057; the proposed saliency evaluation metric E_HA reaches 215.287. We believe our method can be applied to diverse applications in the future.
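
As a rough illustration of the pipeline summarized in the abstract, the following Python sketch outlines the stages in order: ranking candidate saliency maps and keeping the best three, fusing them nonlinearly, and segmenting the fused map with a simplified PCNN. All function names, the quality score, the specific fusion rule, and the fixed SPCNN parameters are assumptions for illustration only; the paper's decision-making system, heterogeneous PCNN foreground extraction, fusion formula, and adaptive parameter settings are not reproduced here.

```python
import numpy as np

def select_top_maps(saliency_maps, quality_score, k=3):
    """Rank the candidate saliency maps with a quality score and keep the
    k best ones (the paper keeps the 3 most valuable of 7 candidates)."""
    return sorted(saliency_maps, key=quality_score, reverse=True)[:k]

def nonlinear_fusion(maps, gamma=2.0):
    """Illustrative nonlinear fusion: multiply power-law enhanced maps so
    that regions where the selected maps agree are emphasized, then
    renormalize to [0, 1]."""
    fused = np.ones_like(maps[0], dtype=np.float64)
    for m in maps:
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        fused *= m ** gamma
    fused = fused ** (1.0 / (gamma * len(maps)))
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

def spcnn_segment(stimulus, iterations=6, beta=0.4, alpha_theta=0.2, v_theta=20.0):
    """Minimal simplified PCNN (SPCNN) iteration: each neuron fires when its
    linking-modulated input exceeds a decaying dynamic threshold, and fired
    pixels form the foreground mask.  The parameters are fixed here, whereas
    the paper derives them adaptively from image information."""
    s = stimulus.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)
    y = np.zeros_like(s)                 # current firing output
    theta = np.ones_like(s)              # dynamic threshold
    fired = np.zeros(s.shape, dtype=bool)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(iterations):
        # linking input gathered from the 8-neighbourhood of fired neurons
        padded = np.pad(y, 1)
        link = np.zeros_like(s)
        for i in range(3):
            for j in range(3):
                link += kernel[i, j] * padded[i:i + s.shape[0], j:j + s.shape[1]]
        u = s * (1.0 + beta * link)              # internal activity
        y = (u > theta).astype(np.float64)       # fire where activity beats the threshold
        fired |= y.astype(bool)
        theta = np.exp(-alpha_theta) * theta + v_theta * y  # decay plus refractory boost
    return fired

# Example usage with stand-in random maps (real inputs would be the seven
# saliency maps produced by the competing detectors):
# maps = [np.random.rand(64, 64) for _ in range(7)]
# best = select_top_maps(maps, quality_score=lambda m: m.std(), k=3)
# mask = spcnn_segment(nonlinear_fusion(best))
```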

Paper Details

Date Published: 14 April 2017
PDF: 12 pages
J. Electron. Imaging 26(2), 023022. doi: 10.1117/1.JEI.26.2.023022
Published in: Journal of Electronic Imaging Volume 26, Issue 2
Author Affiliations:
Ya'nan Guo, Lanzhou Univ. (China)
Chongfan Luo, Lanzhou Univ. (China)
Yide Ma, Lanzhou Univ. (China)


© SPIE.