Proceedings Paper

Generating description with multi-feature and saliency maps of image
Author(s): Lisha Liu; Chunna Tian; Ruiguo Zhang; Yuxuan Ding

Paper Abstract

Automatically generating the description of an image is a task that connects computer vision and natural language processing, and it has gained increasing attention in the field of artificial intelligence. In this paper, we present a model that generates image descriptions based on an RNN (recurrent neural network), representing images with multiple features weighted by object attention. We use an LSTM (long short-term memory) network, a variant of the RNN, to translate the multi-feature representation of an image into text. Most existing methods use a single CNN (convolutional neural network) trained on ImageNet to extract image features, which mainly capture the objects in an image. However, the scene context is also informative for image captioning, so we incorporate a scene feature extracted with a CNN trained on Places205. We evaluate our model on the MSCOCO dataset using standard metrics. Experiments show that the multi-feature representation performs better than a single feature. In addition, the saliency weighting emphasizes the salient objects in an image, which become the subjects of the generated descriptions. The results show that our model outperforms several state-of-the-art methods on image captioning.
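To make the pipeline concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the fusion idea the abstract describes: an object feature from an ImageNet-trained CNN and a scene feature from a Places205-trained CNN are combined under a saliency weight and decoded into a caption by an LSTM. All dimensions, the scalar saliency weight, and the linear fusion rule are illustrative assumptions.

```python
# Hypothetical sketch of multi-feature, saliency-weighted captioning.
# Feature dimensions, the scalar saliency weight, and the fusion rule
# are assumptions for illustration, not the paper's exact method.
import torch
import torch.nn as nn

class MultiFeatureCaptioner(nn.Module):
    def __init__(self, obj_dim=2048, scene_dim=2048, embed_dim=512,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project the object and scene feature streams into a shared space.
        self.obj_proj = nn.Linear(obj_dim, embed_dim)
        self.scene_proj = nn.Linear(scene_dim, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obj_feat, scene_feat, saliency, captions):
        # saliency: (batch, 1) scalar standing in for the paper's
        # saliency-map weighting; it biases the fusion toward objects.
        fused = (saliency * self.obj_proj(obj_feat)
                 + (1 - saliency) * self.scene_proj(scene_feat))
        # Feed the fused image embedding as the first decoder input,
        # followed by the embedded caption tokens.
        inputs = torch.cat([fused.unsqueeze(1), self.embed(captions)], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)  # per-step vocabulary logits

# Toy usage with random tensors standing in for CNN features.
model = MultiFeatureCaptioner()
obj = torch.randn(2, 2048)
scene = torch.randn(2, 2048)
sal = torch.rand(2, 1)
caps = torch.randint(0, 10000, (2, 12))
logits = model(obj, scene, sal, caps)
print(logits.shape)  # torch.Size([2, 13, 10000])
```

Feeding the fused image embedding as the first decoder step mirrors common CNN-plus-LSTM captioners; the sketch only illustrates how two CNN feature streams and a saliency weight could enter such a decoder.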

Paper Details

Date Published: 3 January 2020
PDF: 9 pages
Proc. SPIE 11373, Eleventh International Conference on Graphics and Image Processing (ICGIP 2019), 113730Z (3 January 2020); doi: 10.1117/12.2557584
Author Affiliations:
Lisha Liu, Xidian Univ. (China)
Chunna Tian, Xidian Univ. (China)
Ruiguo Zhang, Xidian Univ. (China)
Yuxuan Ding, Xidian Univ. (China)


Published in SPIE Proceedings Vol. 11373:
Eleventh International Conference on Graphics and Image Processing (ICGIP 2019)
Zhigeng Pan; Xun Wang, Editor(s)
