
p-Values for High-Dimensional Regression


11 pages


Publisher: Taylor & Francis
Copyright: © 2009 American Statistical Association
ISSN: 0162-1459
eISSN: 1537-274X
DOI: 10.1198/jasa.2009.tm08647

Abstract

Assigning significance in high-dimensional regression is challenging. Most computationally efficient selection algorithms cannot guard against the inclusion of noise variables, and asymptotically valid p-values are not available. An exception is a recent proposal by Wasserman and Roeder that splits the data into two parts: the number of variables is reduced to a manageable size using the first split, and classical variable selection techniques are then applied to the retained variables on the data from the second split. This yields asymptotic error control under minimal conditions, but it hinges on a one-time random split of the data. Results are sensitive to this arbitrary choice, which amounts to a “p-value lottery” and makes results difficult to reproduce. Here we show that inference across multiple random splits can be aggregated while maintaining asymptotic control over the inclusion of noise variables. The resulting p-values can be used to control both the family-wise error rate and the false discovery rate. In addition, the proposed aggregation is shown to improve power while substantially reducing the number of falsely selected variables.
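The multi-split recipe described in the abstract is simple enough to illustrate in code. The following Python sketch is a simplified illustration, not the authors' implementation: it assumes the screening step is a lasso fit (the method allows any variable selection procedure), it aggregates across splits with a single fixed quantile level gamma rather than the paper's adaptive search over gamma, and the function name multi_split_pvalues together with all default values is hypothetical. It relies only on numpy, scipy, and scikit-learn.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV


def multi_split_pvalues(X, y, n_splits=50, gamma=0.5, seed=None):
    """Simplified multi-split p-values (fixed-gamma variant).

    Per split: lasso screening on one half of the data, classical
    OLS t-tests on the other half, Bonferroni-adjusted by the
    number of variables that survived screening.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pvals = np.ones((n_splits, p))  # unselected variables keep p = 1

    for b in range(n_splits):
        idx = rng.permutation(n)
        half1, half2 = idx[: n // 2], idx[n // 2:]

        # Screening step: lasso on the first half of the data.
        lasso = LassoCV(cv=5).fit(X[half1], y[half1])
        selected = np.flatnonzero(lasso.coef_ != 0)
        # Skip the split if nothing was selected or OLS on the
        # second half would have no residual degrees of freedom.
        if selected.size == 0 or selected.size >= half2.size - 1:
            continue

        # Inference step: OLS t-test p-values on the second half,
        # restricted to the screened variables.
        Xd = np.column_stack([np.ones(half2.size), X[half2][:, selected]])
        beta, _, _, _ = np.linalg.lstsq(Xd, y[half2], rcond=None)
        resid = y[half2] - Xd @ beta
        df = half2.size - Xd.shape[1]
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
        p_raw = 2 * stats.t.sf(np.abs(beta / se), df)[1:]  # drop intercept

        # Bonferroni adjustment by the number of screened variables.
        pvals[b, selected] = np.minimum(p_raw * selected.size, 1.0)

    # Aggregate across splits: gamma-quantile scaled by 1/gamma
    # (a fixed-gamma version of the paper's aggregation rule).
    return np.minimum(np.quantile(pvals, gamma, axis=0) / gamma, 1.0)
```

Because each per-split p-value is Bonferroni-adjusted by the number of screened variables before aggregation, comparing the aggregated values against a significance level alpha gives family-wise error control in the spirit of the paper; the published method sharpens this by optimizing over the quantile level, at the cost of an additional correction factor.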

Journal

Journal of the American Statistical Association (Taylor & Francis)

Published: Dec 1, 2009

Keywords: Data splitting; False discovery rate; Family-wise error rate; High-dimensional variable selection; Multiple comparisons
