
Proceedings Paper

Parameterized framework for the analysis of visual quality assessments using crowdsourcing
Authors: Anthony Fremuth; Velibor Adzic; Hari Kalva

Paper Abstract

The ability to assess the quality of new multimedia tools and applications relies heavily on the perception of the end user. To quantify this perception, subjective tests are required to evaluate the effectiveness of new technologies. However, the standard for subjective user studies requires a highly controlled test environment and is costly in terms of both money and time. To circumvent these issues, we use crowdsourcing platforms such as CrowdFlower and Amazon's Mechanical Turk. The reliability of the results depends on factors that are not controlled and can be considered "hidden." We use a pre-test survey to collect responses from subjects that reveal some of these hidden factors. Using statistical analysis, we build a parameterized model that allows for proper adjustments to the collected test scores.
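
To make the score-adjustment idea concrete, the sketch below is a hypothetical illustration, not the paper's actual model: it assumes two made-up pre-test survey covariates per worker (e.g., screen brightness and self-reported expertise), fits a simple linear parameterized model by least squares, and removes the covariate-driven component from each raw score.

    # Hypothetical sketch of covariate-based score adjustment; the paper's
    # actual model, survey questions, and factor weights are not specified here.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: raw quality scores from crowd workers plus two
    # assumed hidden-factor survey covariates per worker.
    n_workers = 200
    covariates = rng.normal(size=(n_workers, 2))     # pre-test survey responses
    true_quality = 3.5                               # underlying mean opinion score
    bias_weights = np.array([0.6, -0.4])             # assumed per-factor biases
    raw_scores = (true_quality
                  + covariates @ bias_weights
                  + rng.normal(0.0, 0.2, size=n_workers))

    # Fit a linear parameterized model: score ~ intercept + covariates.
    X = np.column_stack([np.ones(n_workers), covariates])
    coef, *_ = np.linalg.lstsq(X, raw_scores, rcond=None)

    # Adjust scores by subtracting the estimated covariate-driven bias.
    adjusted = raw_scores - covariates @ coef[1:]
    print(f"raw mean:      {raw_scores.mean():.3f}")
    print(f"adjusted mean: {adjusted.mean():.3f}")

Under these assumptions, the adjusted scores cluster around the underlying quality value once the estimated hidden-factor bias is removed; the same pattern generalizes to richer survey covariates.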

Paper Details

Date Published: 17 March 2015
PDF: 10 pages
Proc. SPIE 9394, Human Vision and Electronic Imaging XX, 93940C (17 March 2015); doi: 10.1117/12.2080661
Author Affiliations:
Anthony Fremuth, Florida Atlantic Univ. (United States)
Velibor Adzic, Florida Atlantic Univ. (United States)
Hari Kalva, Florida Atlantic Univ. (United States)


Published in SPIE Proceedings Vol. 9394:
Human Vision and Electronic Imaging XX
Bernice E. Rogowitz; Thrasyvoulos N. Pappas; Huib de Ridder, Editors

© SPIE