Multiple-source adaptation theory and algorithms


Publisher: Springer Journals
Copyright: © Springer Nature Switzerland AG 2020
ISSN: 1012-2443
eISSN: 1573-7470
DOI: 10.1007/s10472-020-09716-0

Abstract

We present a general theoretical and algorithmic analysis of the problem of multiple-source adaptation, a key learning problem in applications. We derive new normalized solutions with strong theoretical guarantees for the cross-entropy loss and other similar losses. We also provide new guarantees that hold in the case where the conditional probabilities for the source domains are distinct. We further present a novel analysis of the convergence properties of density estimation used in distribution-weighted combinations, and study their effects on the learning guarantees. Moreover, we give new algorithms for determining the distribution-weighted combination solution for the cross-entropy loss and other losses. We report the results of a series of experiments with real-world datasets. We find that our algorithm outperforms competing approaches by producing a single robust predictor that performs well on any target mixture distribution. Altogether, our theory, algorithms, and empirical results provide a full solution for the multiple-source adaptation problem with very practical benefits.
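The abstract centers on the distribution-weighted combination of source predictors. The sketch below illustrates the general form of such a combination, in which each source predictor is weighted in proportion to the estimated density of its domain at the input point. It is a minimal illustration under assumed per-domain density estimates and predictors, not the paper's normalized solution for the cross-entropy loss; all function names, densities, and weights are hypothetical.

import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Simple Gaussian density, used here as a stand-in for whatever density
    # estimator is fit on each source domain.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def distribution_weighted_predictor(x, densities, predictors, weights, eps=1e-12):
    # Generic distribution-weighted combination: each source predictor h_k is
    # weighted in proportion to lambda_k * D_k(x), then the weights are
    # normalized. Illustration of the general idea only, not the paper's
    # normalized cross-entropy solution.
    w = np.array([lam * d(x) for lam, d in zip(weights, densities)])
    w = w / (w.sum() + eps)          # eps guards against all densities being ~0
    preds = np.array([h(x) for h in predictors])
    return w @ preds                 # weighted combination of source predictions

# Hypothetical example: two source domains with Gaussian density estimates and
# constant-probability predictors; all numbers are made up for illustration.
densities = [lambda x: gaussian_pdf(x, 0.0, 1.0),
             lambda x: gaussian_pdf(x, 3.0, 1.0)]
predictors = [lambda x: np.array([0.9, 0.1]),    # predictor trained on domain 1
              lambda x: np.array([0.2, 0.8])]    # predictor trained on domain 2
weights = [0.5, 0.5]                             # assumed target mixture weights

print(distribution_weighted_predictor(0.0, densities, predictors, weights))  # close to domain 1's output
print(distribution_weighted_predictor(3.0, densities, predictors, weights))  # close to domain 2's output

Near points where one source density dominates, the combined predictor follows that source's predictor; in overlap regions it blends them, which is consistent with the robust behavior over target mixture distributions described in the abstract.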

Journal

Annals of Mathematics and Artificial Intelligence (Springer Journals)

Published: Nov 5, 2020
