
Proceedings Paper

Classification of foods by transferring knowledge from ImageNet dataset
Author(s): Elnaz J. Heravi; Hamed H. Aghdam; Domenec Puig

Paper Abstract

Automatic classification of foods is a way to control food intake and tackle obesity. However, it is a challenging problem, since foods are highly deformable and complex objects. Results on the ImageNet dataset have revealed that Convolutional Neural Networks (ConvNets) have great expressive power for modeling natural objects. Nonetheless, it is not trivial to train a ConvNet from scratch for food classification, because ConvNets require large datasets and, to our knowledge, there is no large public food dataset for this purpose. An alternative solution is to transfer knowledge from trained ConvNets to the domain of foods. In this work, we study how transferable state-of-the-art ConvNets are to the task of food classification. We also propose a method for transferring knowledge from a bigger ConvNet to a smaller ConvNet while keeping its accuracy close to that of the bigger ConvNet. Our experiments on the UECFood256 dataset show that GoogLeNet, VGG, and residual networks produce comparable results if knowledge transfer starts from an appropriate layer. In addition, we show that our method effectively transfers knowledge to the smaller ConvNet using unlabeled samples.
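The abstract describes two ideas: fine-tuning an ImageNet-pretrained ConvNet from an appropriate layer for food classification, and distilling that "bigger" ConvNet into a "smaller" one using unlabeled samples. The sketch below is not the authors' implementation; it is a minimal illustration in PyTorch/torchvision, assuming ResNet-50 and ResNet-18 as stand-ins for the bigger and smaller networks, 256 output classes (as in UECFood256), a soft-target distillation loss with an assumed temperature, and placeholder data.

```python
# Minimal sketch (not the authors' code) of transfer learning from an
# ImageNet-pretrained ConvNet and distillation into a smaller ConvNet.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

NUM_CLASSES = 256  # UECFood256 food categories

# --- 1. Transfer knowledge from an ImageNet-pretrained ConvNet ---
teacher = models.resnet50(weights="IMAGENET1K_V1")   # "bigger" ConvNet (assumed)
# Replace the ImageNet classifier with a food-classification head and
# fine-tune starting from an appropriate layer (here: layer4 + new head).
teacher.fc = nn.Linear(teacher.fc.in_features, NUM_CLASSES)
for name, p in teacher.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

# --- 2. Distill the fine-tuned teacher into a smaller ConvNet ---
student = models.resnet18(weights=None)               # "smaller" ConvNet (assumed)
student.fc = nn.Linear(student.fc.in_features, NUM_CLASSES)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target loss; no ground-truth labels are needed, so unlabeled
    food images can be used to transfer the teacher's knowledge."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# One hypothetical distillation step on a batch of unlabeled images.
unlabeled_batch = torch.randn(8, 3, 224, 224)         # placeholder data
with torch.no_grad():
    t_logits = teacher.eval()(unlabeled_batch)
s_logits = student.train()(unlabeled_batch)
loss = distillation_loss(s_logits, t_logits)
loss.backward()
```

The frozen-layer choice and the temperature value here are illustrative assumptions; the paper itself studies which layer is appropriate to transfer from.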

Paper Details

Date Published: 17 March 2017
PDF: 5 pages
Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 1034128 (17 March 2017); doi: 10.1117/12.2268737
Author Affiliations
Elnaz J. Heravi, Univ. Rovira i Virgili (Spain)
Hamed H. Aghdam, Univ. Rovira i Virgili (Spain)
Domenec Puig, Univ. Rovira i Virgili (Spain)


Published in SPIE Proceedings Vol. 10341:
Ninth International Conference on Machine Vision (ICMV 2016)
Antanas Verikas; Petia Radeva; Dmitry P. Nikolaev; Wei Zhang; Jianhong Zhou, Editor(s)

© SPIE.