
Proceedings Paper

Evaluation in visualization: some issues and best practices

Paper Abstract

The first data and information visualization techniques and systems were developed and presented without systematic evaluation; however, researchers have become increasingly aware of its importance (Plaisant, 2004)1. Evaluation is not only a means of improving techniques and applications; it can also produce evidence of measurable benefits that encourages adoption. Yet evaluating visualization applications or techniques is not simple. We hold that visualization applications should be developed using a user-centered design approach and that evaluation should take place in several phases along the process, with different purposes. An account of the issues we consider relevant when planning an evaluation in Medical Data Visualization can be found in (Sousa Santos and Dillenseger, 2005)2. In that work the question “how well does a visualization represent the underlying phenomenon and help the user understand it?” is identified as fundamental and is decomposed into two aspects: A) the evaluation of the representation of the phenomenon (first part of the question); B) the evaluation of users’ performance in their tasks when using the visualization, which implies understanding of the phenomenon (second part of the question). We contend that these questions transcend Medical Data Visualization and are central to evaluating Data and Information Visualization applications and techniques in general. In fact, the latter aspect is related to the question Freitas et al. (2009)3 deem crucial to user-centered visualization evaluation: “How do we know if information visualization tools are useful and usable for real users performing real visualization tasks?” In what follows, issues and methods that we have been using to tackle this latter question are briefly addressed.
This excludes equally relevant topics such as algorithm optimization and accuracy, which can be dealt with using concepts and methods well known in other disciplines and are mainly related to how well the phenomenon is represented. A list of guidelines constituting our best practices for performing evaluations is presented, and some conclusions are drawn.

Paper Details

Date Published: 3 February 2014
PDF: 8 pages
Proc. SPIE 9017, Visualization and Data Analysis 2014, 90170O (3 February 2014); doi: 10.1117/12.2038259
Author Affiliations:
Beatriz Sousa Santos, Univ. de Aveiro (Portugal)
Paulo Dias, Univ. de Aveiro (Portugal)


Published in SPIE Proceedings Vol. 9017:
Visualization and Data Analysis 2014
Pak Chung Wong; David L. Kao; Ming C. Hao; Chaomei Chen, Editor(s)

© SPIE. Terms of Use