
Proceedings Paper

Learning object models from few examples
Author(s): Ishan Misra; Yuxiong Wang; Martial Hebert

Paper Abstract

Current computer vision systems rely primarily on fixed models learned in a supervised fashion, i.e., with extensive manually labelled data. This is appropriate when all possible visual queries can be anticipated in advance, but it does not scale to scenarios in which new objects must be added while the system is operating, as in dynamic interaction with unmanned ground vehicles (UGVs). For example, a user might encounter a new type of object of interest, e.g., a particular vehicle, that needs to be added to the system right away. In such cases, the supervised approach is impractical: there is no time to acquire extensive data and annotate it. In this paper, we describe techniques for rapidly updating or creating models from sparsely labelled data. The techniques address scenarios in which only a few annotated training samples are available and must be used to generate models suitable for recognition. These approaches are crucial for on-the-fly insertion of models by users and for on-line learning.
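The abstract does not spell out the specific techniques, but the setting it describes can be illustrated with a minimal sketch: a nearest-class-mean classifier that builds a model from a handful of labelled feature vectors per class and allows a new class to be inserted on the fly. This is a hypothetical illustration of the few-shot scenario, not the paper's actual method, and all names here (`NearestClassMean`, `add_class`) are assumptions.

```python
import numpy as np

class NearestClassMean:
    """Classify by distance to per-class mean feature vectors.

    Illustrative sketch of the few-shot setting only -- not the
    method from the paper. New classes can be inserted on the fly
    from just a few labelled examples.
    """

    def __init__(self):
        self.means = {}  # label -> mean feature vector

    def add_class(self, label, examples):
        # examples: (n_samples, n_features) array of labelled feature vectors
        self.means[label] = np.asarray(examples, dtype=float).mean(axis=0)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # pick the class whose mean is closest in Euclidean distance
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

# Build models from 3 examples per class, then insert a new class from 2.
clf = NearestClassMean()
clf.add_class("car", [[1.0, 0.1], [0.9, 0.0], [1.1, 0.2]])
clf.add_class("tree", [[0.0, 1.0], [0.1, 0.9], [0.0, 1.1]])
print(clf.predict([0.95, 0.05]))  # -> car
clf.add_class("truck", [[2.0, 0.0], [2.1, 0.1]])  # on-the-fly insertion
print(clf.predict([2.05, 0.05]))  # -> truck
```

The point of the sketch is that adding a class touches only one stored mean, so a user can register a new object type without retraining the rest of the system.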

Paper Details

Date Published: 13 May 2016
PDF: 10 pages
Proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370O (13 May 2016); doi: 10.1117/12.2231108
Author Affiliations
Ishan Misra, Carnegie Mellon Univ. (United States)
Yuxiong Wang, Carnegie Mellon Univ. (United States)
Martial Hebert, Carnegie Mellon Univ. (United States)


Published in SPIE Proceedings Vol. 9837:
Unmanned Systems Technology XVIII
Robert E. Karlsen; Douglas W. Gage; Charles M. Shoemaker; Grant R. Gerhart, Editor(s)

© SPIE.