
A Gradient-Based Optimization Algorithm for LASSO


Journal of Computational and Graphical Statistics, Volume 17 (4): 16 – Dec 1, 2008

Abstract

LASSO is a useful method for achieving both shrinkage and variable selection simultaneously. The main idea of LASSO is to use the L1 constraint in the regularization step, and this constraint has been applied to various models such as wavelets, kernel machines, smoothing splines, and multiclass logistic models. We call such models with the L1 constraint generalized LASSO models. In this article, we propose a new algorithm, called the gradient LASSO algorithm, for generalized LASSO. The gradient LASSO algorithm is computationally more stable than QP-based algorithms because it does not require matrix inversions, and thus it can be applied more easily to high-dimensional data. Simulation results show that the proposed algorithm is fast enough for practical purposes and provides reliable results. To illustrate its computing power with high-dimensional data, we analyze multiclass microarray data using the proposed algorithm.
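The abstract's central computational claim is that an L1-constrained fit can be computed using gradient information alone, avoiding the matrix inversions that QP-based solvers require. As a rough illustration of that idea (not the authors' exact gradient LASSO algorithm), the sketch below applies a conditional-gradient (Frank-Wolfe) update to the L1-constrained least-squares problem; the function name, the step-size rule, and the parameters t and n_iters are assumptions made for this example.

```python
import numpy as np

def l1_constrained_lasso_sketch(X, y, t, n_iters=500):
    """Illustrative conditional-gradient (Frank-Wolfe) sketch for
    min_beta ||y - X beta||^2  subject to  ||beta||_1 <= t.

    Each step uses only a gradient evaluation and a single-coordinate
    update -- no matrix inversions -- which is the property highlighted
    in the abstract for high-dimensional data. This is not the paper's
    exact gradient LASSO algorithm.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for k in range(n_iters):
        residual = y - X @ beta
        grad = -X.T @ residual / n           # gradient of the squared-error loss
        j = np.argmax(np.abs(grad))          # coordinate of steepest descent
        vertex = np.zeros(p)
        vertex[j] = -t * np.sign(grad[j])    # best vertex of the L1 ball of radius t
        step = 2.0 / (k + 2)                 # standard diminishing step size
        beta = (1 - step) * beta + step * vertex
    return beta

# Toy usage with a sparse ground truth and p > n (hypothetical data).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
true_beta = np.zeros(200)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.1 * rng.standard_normal(50)
beta_hat = l1_constrained_lasso_sketch(X, y, t=5.0)
print("coefficients above 1e-3 in magnitude:", np.sum(np.abs(beta_hat) > 1e-3))
```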


References (34)

Publisher
Taylor & Francis
Copyright
© American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America
ISSN
1061-8600
eISSN
1537-2715
DOI
10.1198/106186008X386210


Journal

Journal of Computational and Graphical Statistics, Taylor & Francis

Published: Dec 1, 2008

Keywords: Gradient descent; LASSO; Multiclass logistic model; Variable selection
