Proceedings Paper

Learning deep features with adaptive triplet loss for person reidentification
Author(s): Zhiqiang Li; Nong Sang; Kezhou Chen; Changxin Gao; Ruolin Wang

Paper Abstract

Person reidentification (re-id) aims to match a specified person across non-overlapping cameras and remains a very challenging problem. While previous methods have mostly focused on either feature extraction or metric learning, this paper jointly learns both global full-body and local body-part features of the input persons with a multichannel convolutional neural network (CNN) model. The model is trained with an adaptive triplet loss function that minimizes the distance between images of the same person and maximizes the distance between images of different persons. Experimental results show that our approach achieves very promising results on the large-scale Market-1501 and DukeMTMC-reID datasets.
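For intuition, a standard margin-based triplet loss over feature embeddings can be sketched as below. This is a minimal illustration, not the paper's method: the abstract does not specify how the "adaptive" variant differs (presumably by adjusting the margin or triplet selection), and the function and parameter names here are hypothetical.

```python
def squared_distance(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard (non-adaptive) triplet loss: penalize triplets where the
    anchor-positive distance is not smaller than the anchor-negative
    distance by at least `margin`. The margin value is illustrative."""
    gap = squared_distance(anchor, positive) - squared_distance(anchor, negative)
    return max(0.0, gap + margin)

# A well-separated triplet incurs zero loss; a violating triplet is penalized.
easy = triplet_loss([0.0, 0.0], [0.0, 1.0], [3.0, 0.0])   # 0.0
hard = triplet_loss([0.0, 0.0], [0.0, 2.0], [1.0, 0.0])   # 3.3
```

Minimizing this loss over many triplets pulls embeddings of the same person together while pushing embeddings of different persons apart, which is the objective the abstract describes.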

Paper Details

Date Published: 8 March 2018
PDF: 6 pages
Proc. SPIE 10609, MIPPR 2017: Pattern Recognition and Computer Vision, 106090G (8 March 2018); doi: 10.1117/12.2283478
Author Affiliations:
Zhiqiang Li, Huazhong Univ. of Science and Technology (China)
Nong Sang, Huazhong Univ. of Science and Technology (China)
Kezhou Chen, Huazhong Univ. of Science and Technology (China)
Changxin Gao, Huazhong Univ. of Science and Technology (China)
Ruolin Wang, Wuhan Univ. (China)


Published in SPIE Proceedings Vol. 10609:
MIPPR 2017: Pattern Recognition and Computer Vision
Zhiguo Cao; Yuehuang Wang; Chao Cai, Editor(s)

© SPIE.