Our purpose in this paper is to provide a general approach to model selection via penalization for Gaussian regression and to develop our point of view on this subject. The advantage and importance of model selection stem from the fact that it offers a unified approach to many different problems: from model selection per se (given a family of parametric models, which one best suits the data at hand), which includes, for instance, variable selection in regression models, to nonparametric estimation, for which it provides a powerful tool enabling adaptation under quite general circumstances. Our approach also draws a natural connection between the parametric and nonparametric points of view and copes naturally with the fact that a model need not be true. The method is based on penalizing a least squares criterion and can be viewed as a generalization of Mallows' Cp. A large part of our effort is devoted to choosing the list of models and the penalty function properly for various estimation problems, such as classical variable selection or adaptive estimation over various types of lp-bodies.
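To make the penalized least squares idea concrete, here is a minimal sketch of Mallows' Cp-style model selection on a toy regression problem. Everything in it (the sinusoidal signal, the nested polynomial models, the known noise level `sigma`) is a hypothetical illustration, not the authors' construction; in particular, the paper's contribution concerns more general penalties and model lists than the plain 2σ²D term used below.

```python
# Penalized least squares model selection in the spirit of Mallows' Cp:
# over a list of candidate models, minimize  RSS(model) + 2 * sigma^2 * D,
# where D is the model dimension.  Illustrative toy example only.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 0.5
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)  # signal + Gaussian noise

def cp_score(y, X, sigma):
    """Residual sum of squares of the least squares fit, plus the Cp penalty."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return rss + 2.0 * sigma**2 * X.shape[1]

# Candidate models: nested polynomial spaces of dimension D = 1, ..., 10.
scores = {D: cp_score(y, np.vander(x, D), sigma) for D in range(1, 11)}
best_D = min(scores, key=scores.get)
print("selected dimension:", best_D)
```

The penalty 2σ²D trades the decrease in residual sum of squares against model complexity; a constant model (D = 1) leaves all the signal in the residuals, while very large D fits noise, so the minimizer lands at an intermediate dimension.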
Journal of the European Mathematical Society – Springer Journals
Published: Aug 1, 2001