
Journal of Applied Remote Sensing • Open Access

Sparsity-guided saliency detection for remote sensing images
Author(s): Danpei Zhao; Jiajia Wang; Jun Shi; Zhiguo Jiang

Paper Abstract

Traditional saliency detection can effectively locate candidate objects through an attentional mechanism rather than exhaustive automatic object detection, and is therefore widely used on natural scene images. However, it may fail to extract salient objects accurately from remote sensing images, which have their own characteristics such as large data volumes, multiple resolutions, illumination variation, and complex texture structures. We propose a sparsity-guided saliency detection model for remote sensing images that uses sparse representation to obtain high-level global and background cues for saliency map integration. Specifically, it first uses pixel-level global cues and background prior information to construct two dictionaries that characterize the global and background properties of remote sensing images. It then employs sparse representation over these dictionaries to derive the high-level cues. Finally, a Bayesian formula is applied to integrate the saliency maps generated by the two types of high-level cues. Experimental results on remote sensing image datasets that include various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
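
The sketch below is a minimal, illustrative rendering of the pipeline the abstract describes (reconstruction error under two dictionaries, then Bayesian integration), not the authors' implementation. It assumes per-region feature vectors have already been extracted; the function names, the use of scikit-learn's OMP-based SparseCoder, and the specific fusion formula are assumptions chosen for brevity.

# Illustrative sketch only; names, sparse coder, and fusion formula are assumptions.
import numpy as np
from sklearn.decomposition import SparseCoder


def sparse_reconstruction_saliency(features, dictionary, n_nonzero=5):
    """Sparse-code each region against `dictionary` (rows = atoms) and return
    the normalized reconstruction error; regions the dictionary explains
    poorly receive high scores."""
    atoms = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    coder = SparseCoder(dictionary=atoms,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(features)            # (n_regions, n_atoms)
    recon = codes @ atoms                        # sparse reconstruction
    err = np.linalg.norm(features - recon, axis=1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)


def bayesian_fusion(s1, s2):
    """Simple Bayesian-style integration of two saliency maps: the joint
    'salient' probability normalized against the joint 'non-salient'
    probability (a stand-in for the paper's integration formula)."""
    num = s1 * s2
    return num / (num + (1.0 - s1) * (1.0 - s2) + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 32))           # 200 regions, 32-D features
    # Random stand-ins for the two dictionaries; in the paper they are built
    # from background-prior regions and pixel-level global cues, respectively.
    bg_dict = rng.normal(size=(40, 32))
    gl_dict = rng.normal(size=(40, 32))
    s_background = sparse_reconstruction_saliency(feats, bg_dict)  # high error vs. background -> salient
    s_global = sparse_reconstruction_saliency(feats, gl_dict)      # sign convention here is an assumption
    saliency = bayesian_fusion(s_background, s_global)
    print(saliency.shape, float(saliency.min()), float(saliency.max()))

In practice the regions would come from a superpixel segmentation of the remote sensing image and the background dictionary from image-boundary regions; the random arrays above only exercise the code path.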

Paper Details

Date Published: 11 September 2015
PDF: 14 pages
J. Appl. Remote Sens. 9(1), 095055 (2015). doi: 10.1117/1.JRS.9.095055
Published in: Journal of Applied Remote Sensing Volume 9, Issue 1
Author Affiliations:
Danpei Zhao, Beihang Univ. (China)
Beijing Key Lab. of Digital Media (China)
Jiajia Wang, Beihang Univ. (China)
Beijing Key Lab. of Digital Media (China)
Jun Shi, Beihang Univ. (China)
Beijing Key Lab. of Digital Media (China)
Zhiguo Jiang, Beihang Univ. (China)
Beijing Key Lab. of Digital Media (China)

