
Proceedings Paper

Using triplet loss to generate better descriptors for 3D object retrieval
Author(s): Haowen Deng; Lei Luo; Mei Wen; Chunyuan Zhang

Paper Abstract

This paper investigates the 3D object retrieval problem by adapting a convolutional network and introducing a triplet loss into its training process. The 3D objects are converted to voxelized volumetric grids and fed into the network, and the outputs of the first fully connected layer are taken as the 3D object descriptors. The triplet loss is designed to make the learned descriptors more suitable for retrieval. Experiments demonstrate that our descriptors are distinctive for objects from different categories and similar among those from the same category, substantially outperforming traditional handcrafted features such as SPH and LFD. The superiority over another deep-network-based method, ShapeNets, validates the effectiveness of the triplet loss in driving same-class descriptors to cluster together and different-class ones to disperse.
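The margin-based triplet loss the abstract describes can be sketched as follows. This is a minimal NumPy illustration with hypothetical toy descriptors, not the paper's actual network or training code; the margin value and vector dimensionality are assumptions for demonstration only:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on L2 distances between descriptors.

    Encourages the positive (same-class) descriptor to sit closer to the
    anchor than the negative (different-class) descriptor by at least
    `margin`; the loss is zero once that constraint is satisfied.
    """
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy 4-D "descriptors": the positive lies near the anchor, the negative far away.
a = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.9, 0.1, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0, 0.0])

print(triplet_loss(a, p, n))  # 0.0: the margin constraint already holds
print(triplet_loss(a, n, p))  # positive loss: swapped roles violate the margin
```

During training, minimizing this quantity over many (anchor, positive, negative) triplets is what drives same-class descriptors to assemble and different-class ones to disperse in the embedding space.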

Paper Details

Date Published: 29 August 2016
PDF: 5 pages
Proc. SPIE 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016), 100335P (29 August 2016); doi: 10.1117/12.2243781
Author Affiliations:
Haowen Deng, National Univ. of Defense Technology (China)
Lei Luo, National Univ. of Defense Technology (China)
Mei Wen, National Univ. of Defense Technology (China)
Chunyuan Zhang, National Univ. of Defense Technology (China)


Published in SPIE Proceedings Vol. 10033:
Eighth International Conference on Digital Image Processing (ICDIP 2016)
Charles M. Falco; Xudong Jiang, Editor(s)

© SPIE.