
L1 penalty and shrinkage estimation in partially linear models with random coefficient autoregressive errors



Publisher
Wiley
Copyright
Copyright © 2012 John Wiley & Sons, Ltd.
ISSN
1524-1904
eISSN
1526-4025
DOI
10.1002/asmb.933

Abstract

In partially linear models, we consider methodology for simultaneous model selection and parameter estimation with random coefficient autoregressive errors by using lasso and shrinkage strategies. We provide natural adaptive estimators that significantly improve upon the classical procedures in the situation where some of the predictors are nuisance variables that may or may not affect the association between the response and the main predictors. In the context of two competing partially linear regression models (full model and submodel), we consider an adaptive shrinkage estimation strategy and propose the shrinkage estimator and the positive‐rule shrinkage estimator. We develop the properties of these estimators by using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. For the lasso‐type estimation strategy, we devise efficient algorithms to obtain numerical results. We compare the relative performance of lasso with the shrinkage estimator and the other estimators. Monte Carlo simulation experiments are conducted for various combinations of the nuisance parameters and sample size, and the performance of each method is evaluated in terms of simulated mean squared error. The comparison reveals that the lasso and shrinkage strategies outperform the classical procedure, and that their relative performance is comparable. The shrinkage estimators perform better than the lasso strategy in the effective part of the parameter space when, and only when, there are many nuisance variables in the model. A data example is showcased to illustrate the usefulness of the suggested methods. Copyright © 2011 John Wiley & Sons, Ltd.
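The Stein-type shrinkage idea described in the abstract — moving the full-model estimate toward a submodel estimate by a data-driven factor, with a positive-rule variant that truncates the factor at zero — can be sketched for a plain linear model with NumPy. This is an illustrative simplification, not the paper's estimator: it omits the nonparametric component and the random coefficient autoregressive error structure, and the variable names (`beta_full`, `beta_sub`, `T_n`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: p1 main predictors, p2 nuisance predictors
n, p1, p2 = 200, 3, 8
X = rng.normal(size=(n, p1 + p2))
beta_true = np.concatenate([np.array([2.0, -1.0, 0.5]), np.zeros(p2)])
y = X @ beta_true + rng.normal(size=n)

# Full-model least-squares estimate (all p1 + p2 coefficients)
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Submodel estimate: nuisance coefficients restricted to zero
beta_sub = np.zeros(p1 + p2)
beta_sub[:p1], *_ = np.linalg.lstsq(X[:, :p1], y, rcond=None)

# Wald-type statistic T_n testing whether the nuisance block is zero
resid = y - X @ beta_full
sigma2 = resid @ resid / (n - p1 - p2)
cov_full = sigma2 * np.linalg.inv(X.T @ X)
b2 = beta_full[p1:]
T_n = b2 @ np.linalg.solve(cov_full[p1:, p1:], b2)

# Stein-type shrinkage estimator and its positive-rule version:
# large T_n (evidence against the submodel) keeps the full estimate,
# small T_n pulls it toward the submodel
shrink = 1.0 - (p2 - 2) / T_n
beta_S = beta_sub + shrink * (beta_full - beta_sub)
beta_PS = beta_sub + max(shrink, 0.0) * (beta_full - beta_sub)
```

The positive-rule estimator `beta_PS` avoids the sign reversal that `beta_S` suffers when `T_n < p2 - 2`, which is the standard motivation for the positive-part correction.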

Journal

Applied Stochastic Models in Business and Industry, Wiley

Published: May 1, 2012
