
Proceedings Paper

A visual perceptual descriptor with depth feature for image retrieval
Author(s): Tianyang Wang; Zhengrui Qin

Paper Abstract

This paper proposes a visual perceptual descriptor (VPD) and a new approach for extracting a perceptual depth feature for 2D image retrieval. The VPD mimics the human visual system, which easily distinguishes regions with different textures but requires color cues to further differentiate regions with similar textures. We apply the VPD to the gradient direction map of an image and capture texture-similar regions to generate a VPD map. We then impose the VPD map on a quantized color map and extract color features only from the overlapping regions. To reflect the perceptual distance inherent in a single 2D image, we propose a perceptual depth feature, computed as the nuclear norm of the sparse depth map of the image. The extracted color features and the perceptual depth feature are combined into a single feature vector, which we use to represent an image and to measure similarity. The proposed VPD + depth method achieves promising results, and extensive experiments show that it outperforms other typical methods on 2D image retrieval.
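The abstract defines the perceptual depth feature as the nuclear norm of an image's sparse depth map. As a minimal sketch of that computation only (the paper's method for estimating the depth map from a single 2D image is not described in the abstract, so a placeholder array stands in for it here):

```python
import numpy as np

def perceptual_depth_feature(depth_map: np.ndarray) -> float:
    """Nuclear norm of a depth map: the sum of its singular values."""
    singular_values = np.linalg.svd(depth_map, compute_uv=False)
    return float(singular_values.sum())

# Placeholder depth map; in the paper this would be estimated from the image.
rng = np.random.default_rng(0)
depth_map = rng.random((64, 64))
feature = perceptual_depth_feature(depth_map)
```

The resulting scalar would then be appended to the color feature vector before similarity measurement.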

Paper Details

Date Published: 21 July 2017
PDF: 8 pages
Proc. SPIE 10420, Ninth International Conference on Digital Image Processing (ICDIP 2017), 104201K (21 July 2017); doi: 10.1117/12.2282077
Author Affiliations:
Tianyang Wang, Southern Illinois Univ. Carbondale (United States)
Zhengrui Qin, Northwest Missouri State Univ. (United States)

Published in SPIE Proceedings Vol. 10420:
Ninth International Conference on Digital Image Processing (ICDIP 2017)
Charles M. Falco; Xudong Jiang, Editor(s)

© SPIE.