
Proceedings Paper

Predicting visual saliency via a dilated inception module-based model
Author(s): Sheng Yang; Weisi Lin

Paper Abstract

With the advent of deep convolutional neural networks (DCNNs), the improvements in visual saliency prediction research have been impressive. Despite this, the multi-scale saliency-influential factors still need to be fully characterized within current deep saliency frameworks for further improvement. However, existing approaches aimed at capturing multi-scale contextual features suffer from either heavy computation or limited performance gains. To overcome this, a lightweight yet powerful module that fully exploits multi-scale contextual features is desired. In this paper, we propose a DCNN-based visual saliency prediction model to approach this goal. Our model is inspired by GoogLeNet, which uses the inception module to capture multi-scale contextual features at various receptive fields. Specifically, we revise the original inception module to obtain greater multi-scale feature extraction capacity at a lower computational load by replacing the standard convolutions with dilated ones. The whole model is trained end-to-end and is efficient enough to achieve real-time performance. Experimental results on several challenging saliency benchmark datasets, including SALICON, MIT1003, and MIT300, demonstrate that our proposed saliency model achieves state-of-the-art performance with competitive inference time.
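The core idea of the abstract — replacing the standard convolutions of an inception module with dilated ones to enlarge receptive fields at no extra parameter cost — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration (the paper's actual layer widths, branch count, and dilation rates are not given in the abstract; all names and values below are assumptions):

```python
import torch
import torch.nn as nn

class DilatedInceptionSketch(nn.Module):
    """Hypothetical inception-style block: parallel branches use 3x3
    convolutions with different dilation rates, so each branch sees a
    different receptive field (3x3, 5x5, 7x7) while keeping the 3x3
    parameter count. Branch outputs are concatenated channel-wise,
    as in the original GoogLeNet inception module."""

    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        # 1x1 branch (as in the standard inception module)
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        # 3x3 branch, dilation 2 -> effective 5x5 receptive field
        self.b2 = nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                            padding=2, dilation=2)
        # 3x3 branch, dilation 3 -> effective 7x7 receptive field
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                            padding=3, dilation=3)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate multi-scale features along the channel axis
        return self.relu(torch.cat(
            [self.b1(x), self.b2(x), self.b3(x)], dim=1))

block = DilatedInceptionSketch(in_ch=64, branch_ch=32)
out = block(torch.randn(1, 64, 56, 56))
print(out.shape)  # spatial size preserved; channels = 3 * branch_ch
```

Because padding matches each dilation rate, every branch preserves the spatial resolution, and the only cost of the larger receptive fields is the dilation itself, not additional weights — which is the efficiency argument the abstract makes.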

Paper Details

Date Published: 22 March 2019
PDF: 6 pages
Proc. SPIE 11049, International Workshop on Advanced Image Technology (IWAIT) 2019, 110491D (22 March 2019); doi: 10.1117/12.2521507
Author Affiliations:
Sheng Yang, Nanyang Technological Univ. (Singapore)
Weisi Lin, Nanyang Technological Univ. (Singapore)

Published in SPIE Proceedings Vol. 11049:
International Workshop on Advanced Image Technology (IWAIT) 2019
Qian Kemao; Kazuya Hayase; Phooi Yee Lau; Wen-Nung Lie; Yung-Lyul Lee; Sanun Srisuk; Lu Yu, Editor(s)

© SPIE.