Proceedings Paper

Statistical methodology for massive datasets and model selection
Author(s): G. Jogesh Babu; James P. McDermott

Paper Abstract

Astronomy is facing a revolution in the collection, storage, analysis, and interpretation of large datasets. The data volumes are several orders of magnitude larger than those astronomers and statisticians are accustomed to, and the old methods simply do not scale. The National Virtual Observatory (NVO) initiative has recently emerged to meet this need: to federate numerous large digital sky archives, both ground-based and space-based, and to develop tools for exploring and understanding these vast volumes of data. In this paper, we address some of the critically important statistical challenges raised by the NVO. In particular, we present a low-storage, single-pass, sequential method for the simultaneous estimation of multiple quantiles of massive datasets. Density estimation based on this procedure and a multivariate extension are also discussed. The NVO also requires statistical tools for analyzing moderate-size databases, where model selection is an important issue. We present a simple likelihood-based 'leave one out' method for selecting the best among several candidate models, and compare its performance to that of the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
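The paper itself supplies the algorithmic details of its quantile procedure. Purely to illustrate what a low-storage, single-pass, sequential quantile estimator can look like, the sketch below uses a generic stochastic-approximation (Robbins-Monro) update; this is an assumed stand-in for illustration, not the authors' method. Each quantile estimate is nudged by a shrinking step after every observation, so the memory cost is proportional to the number of quantiles tracked, independent of the stream length.

```python
import random

def sequential_quantiles(stream, probs, step_size=lambda n: 1.0 / n):
    """Single-pass, low-storage quantile estimates via stochastic approximation.

    Generic Robbins-Monro recursion (not the paper's algorithm): for target
    probability p, the estimate q is updated as
        q <- q + a_n * (p - I(x <= q)),
    which drifts upward when observations tend to exceed q and downward
    otherwise. Memory use is O(len(probs)); the data are never stored.
    """
    estimates = None
    for n, x in enumerate(stream, start=1):
        if estimates is None:
            # Initialize every estimate at the first observation.
            estimates = [x] * len(probs)
            continue
        a_n = step_size(n)
        for i, p in enumerate(probs):
            indicator = 1.0 if x <= estimates[i] else 0.0
            estimates[i] += a_n * (p - indicator)
    return estimates

# Illustrative run on a simulated Uniform(0,1) stream, whose true
# quartiles are 0.25, 0.5, and 0.75.
random.seed(1)
stream = (random.random() for _ in range(100000))
q25, q50, q75 = sequential_quantiles(stream, [0.25, 0.5, 0.75])
```

A single generator pass like this mirrors the massive-dataset setting the abstract describes: the stream is consumed once and only the running estimates are retained.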

Paper Details

Date Published: 19 December 2002
PDF: 10 pages
Proc. SPIE 4847, Astronomical Data Analysis II, (19 December 2002); doi: 10.1117/12.460339
Author Affiliations:
G. Jogesh Babu, The Pennsylvania State Univ. (United States)
James P. McDermott, The Pennsylvania State Univ. (United States)


Published in SPIE Proceedings Vol. 4847:
Astronomical Data Analysis II
Jean-Luc Starck; Fionn D. Murtagh, Editor(s)

© SPIE. Terms of Use