Alternating DCA for reduced-rank multitask linear regression with covariance matrix estimation



Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Nature Switzerland AG part of Springer Nature 2021
ISSN
1012-2443
eISSN
1573-7470
DOI
10.1007/s10472-021-09732-8

Abstract

We study a challenging machine learning problem: reduced-rank multitask linear regression with covariance matrix estimation. The objective is to build a linear relationship between the multiple output variables and the input variables of a multitask learning process, taking into account a general covariance structure for the errors of the regression model on the one hand, and a reduced-rank regression model on the other hand. The problem is formulated as minimizing a nonconvex function in two joint matrix variables (X, Θ) under a low-rank constraint on X and a positive definiteness constraint on Θ. It is doubly difficult due to the nonconvexity of the objective function as well as the low-rank constraint. We investigate a nonconvex, nonsmooth optimization approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for this hard problem. A penalty reformulation is considered, which takes the form of a partial DC program. An alternating DCA and its inexact version are developed; both algorithms converge to a weak critical point of the considered problem. Numerical experiments are performed on several synthetic and benchmark real multitask linear regression datasets. The results demonstrate the performance of the proposed algorithms and their superiority over three classical alternating/joint methods.
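The model described in the abstract can be illustrated with a plain alternating-minimization sketch, not the paper's DCA: with the error-precision matrix Θ fixed, the coefficient matrix is obtained by classical reduced-rank regression on Θ-whitened outputs; with the coefficients fixed, Θ is re-estimated as the inverse of the residual covariance. The function name `alternating_rrr` and the ridge term are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def alternating_rrr(X, Y, rank, n_iter=20, ridge=1e-6):
    """Alternating-minimization sketch for reduced-rank multitask
    regression with error-precision estimation (illustrative only;
    this is plain alternating minimization, not the paper's DCA)."""
    n, p = X.shape
    q = Y.shape[1]
    Theta = np.eye(q)                      # start from identity precision
    XtX = X.T @ X + ridge * np.eye(p)      # ridge keeps the solve well-posed
    for _ in range(n_iter):
        # B-step: with Theta = L L^T fixed, whitening the outputs turns
        # the weighted loss into an ordinary reduced-rank regression.
        L = np.linalg.cholesky(Theta)
        Bw = np.linalg.solve(XtX, X.T @ (Y @ L))   # OLS in whitened space
        U, s, Vt = np.linalg.svd(X @ Bw, full_matrices=False)
        P = Vt[:rank].T @ Vt[:rank]        # projector onto top-r right singular space
        B = Bw @ P @ np.linalg.inv(L)      # rank-r coefficients, back in original scale
        # Theta-step: precision = inverse of the residual covariance.
        R = Y - X @ B
        S = R.T @ R / n + ridge * np.eye(q)
        Theta = np.linalg.inv(S)
    return B, Theta
```

Each alternation solves one block exactly while holding the other fixed; the paper's alternating DCA instead applies DC decompositions to handle the nonconvexity and the low-rank constraint with convergence guarantees to a weak critical point.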

Journal

Annals of Mathematics and Artificial Intelligence, Springer Journals

Published: Sep 1, 2022

Keywords: Reduced-rank multitask linear regression; Covariance matrix estimation; DC programming; DCA; Partial DC program; Alternating DCA; 90C26; 90C90; 62J05

There are no references for this article.