
Proceedings Paper

Comparing humans to automation in rating photographic aesthetics

Paper Abstract

Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, mimic with reasonably good accuracy the mean ratings of online viewers. However, owing to the many factors underlying aesthetics, such techniques are unlikely to generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, to ratings produced by machine learning techniques. We review methods for obtaining meaningful ratings both from selected groups of judges and from crowdsourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings. This finding underscores the importance of collecting the data used to train automated systems under carefully controlled conditions.
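The abstract's central claim is that automated aesthetic scores are only weakly correlated with controlled human ratings. A standard way to quantify such agreement between two sets of ratings is Spearman rank correlation. The sketch below implements it from scratch; the rating values are invented for illustration and are not data from the paper.

```python
def average_ranks(xs):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra, rb = average_ranks(a), average_ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical mean ratings (1-5 scale) for five photos:
human = [4.2, 3.1, 3.8, 2.5, 4.7]    # controlled human judges
machine = [3.9, 2.8, 3.0, 3.3, 4.1]  # automated aesthetic scores
print(spearman(human, machine))       # prints 0.7
```

A value near 1 would indicate strong rank agreement; the paper's point is that correlations observed in practice are much lower than the accuracy figures reported on the training datasets would suggest.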

Paper Details

Date Published: 6 March 2015
PDF: 10 pages
Proc. SPIE 9408, Imaging and Multimedia Analytics in a Web and Mobile World 2015, 94080C (6 March 2015); doi: 10.1117/12.2084991
Ramakrishna Kakarala, Nanyang Technological Univ. (Singapore)
Abhishek Agrawal, Nanyang Technological Univ. (Singapore)
Sandino Morales, Nanyang Technological Univ. (Singapore)


Published in SPIE Proceedings Vol. 9408:
Imaging and Multimedia Analytics in a Web and Mobile World 2015
Qian Lin; Jan P. Allebach; Zhigang Fan, Editor(s)

© SPIE.