
Proceedings Paper

Data quality issues in visualization
Author(s): Alex Pang; Jeff J. Furman; Wendell Nuss

Paper Abstract

Recent efforts in visualization have concentrated on high-volume data sets from numerical simulations and medical imaging. There is another large class of data, characterized by spatial sparsity with noisy and possibly missing data points, that also needs to be visualized. Two fields where these types of data sets are found are oceanographic and atmospheric science studies. In such cases, it is not uncommon to have on the order of one percent of sampled data available within a space volume. Techniques that attempt to deal with the problem of filling in the holes range in complexity from simple linear interpolation to more sophisticated multiquadric and optimal interpolation techniques. These techniques generally produce results that do not fully agree with each other. To avoid misleading users, it is important to highlight these differences and to ensure that users are aware of the idiosyncrasies of the different methods. This paper compares some of these interpolation techniques on sparse data sets and also discusses how other parameters, such as confidence levels and drop-off rates, may be incorporated into the visual display.
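As context for the multiquadric interpolation the abstract mentions, the following is a minimal sketch of Hardy's multiquadric method for filling holes in sparse scattered data. It is not the authors' implementation; the function name, the shape parameter `c`, and the test data are illustrative assumptions.

```python
import numpy as np

def multiquadric_interpolate(points, values, queries, c=1.0):
    """Interpolate sparse scattered samples with multiquadric basis
    functions phi(r) = sqrt(r^2 + c^2); c is a shape parameter
    (an assumed default, normally tuned to the data spacing)."""
    # Pairwise distances between the sample points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.sqrt(d**2 + c**2)
    # Solve for basis weights so the surface honors every sample exactly
    w = np.linalg.solve(phi, values)
    # Evaluate the weighted basis sum at the query locations
    dq = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    return np.sqrt(dq**2 + c**2) @ w
```

Because the weights are solved exactly, the surface passes through all samples; different choices of `c` (or switching to linear or optimal interpolation) yield different values in the gaps between samples, which is the disagreement the paper proposes to visualize.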

Paper Details

Date Published: 4 April 1994
PDF: 12 pages
Proc. SPIE 2178, Visual Data Exploration and Analysis, (4 April 1994); doi: 10.1117/12.172069
Author Affiliations
Alex Pang, Univ. of California/Santa Cruz (United States)
Jeff J. Furman, Univ. of California/Santa Cruz (United States)
Wendell Nuss, Naval Postgraduate School (United States)

Published in SPIE Proceedings Vol. 2178:
Visual Data Exploration and Analysis
Robert J. Moorhead; Deborah E. Silver; Samuel P. Uselton, Editor(s)

© SPIE.