
Proceedings Paper

Dynamic saliency detection via CNN and spatial-temporal fusion
Author(s): Qi Zhang; Dong Xu

Paper Abstract

Visual saliency prediction has gained significant popularity in recent years, but most research addresses static saliency prediction. This paper proposes an approach to detect dynamic saliency in videos through spatial-temporal fusion. Spatial saliency is detected by a trained convolutional neural network; because saliency is influenced by global contrast according to visual psychology, we use larger convolutional kernels in some layers of the network. Temporal saliency is extracted from optical flow and combined with K-means clustering, which yields a more accurate result. The two saliency maps are then fused in an optimally weighted way. In experiments on the DIEM dataset, our model outperforms four other dynamic saliency models on two metrics.
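The abstract's pipeline (optical-flow-based temporal saliency refined by K-means, then weighted fusion with a spatial map) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the 1-D K-means routine, the choice of keeping only the fastest-moving cluster, and the fusion weight `alpha` are all assumptions made for the sketch.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    # Simple 1-D K-means on flow magnitudes; an illustrative stand-in
    # for the paper's K-means clustering step.
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def temporal_saliency(flow, k=2):
    # flow: (H, W, 2) optical-flow field between consecutive frames.
    # Motion magnitude drives temporal saliency; clustering separates
    # moving foreground from near-static background (assumed heuristic).
    mag = np.linalg.norm(flow, axis=2)
    labels, centers = kmeans_1d(mag.ravel(), k=k)
    salient_cluster = np.argmax(centers)  # keep the fastest-moving cluster
    mask = (labels == salient_cluster).astype(float).reshape(mag.shape)
    return mask * (mag / (mag.max() + 1e-8))

def fuse(spatial, temporal, alpha=0.6):
    # Weighted spatial-temporal fusion; alpha is a hypothetical weight,
    # standing in for the paper's optimally learned weighting.
    fused = alpha * spatial + (1 - alpha) * temporal
    return fused / (fused.max() + 1e-8)
```

In use, `spatial` would come from the CNN's per-frame prediction and `flow` from any dense optical-flow estimator; the fused map is renormalized to [0, 1] so the two cues remain comparable across frames.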

Paper Details

Date Published: 9 August 2018
PDF: 8 pages
Proc. SPIE 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018), 1080619 (9 August 2018); doi: 10.1117/12.2503058
Author Affiliations:
Qi Zhang, Beihang Univ. (China)
Dong Xu, Beihang Univ. (China)


Published in SPIE Proceedings Vol. 10806:
Tenth International Conference on Digital Image Processing (ICDIP 2018)
Xudong Jiang; Jenq-Neng Hwang, Editor(s)

© SPIE. Terms of Use