Unsupervised and weakly supervised approaches for answer selection tasks with scarce annotations


Publisher: de Gruyter
Copyright: © 2019 Emmanuel Vallee et al., published by De Gruyter Open
eISSN: 2299-1093
DOI: 10.1515/comp-2019-0008

Abstract

Addressing Answer Selection (AS) tasks with complex neural networks typically requires a large amount of annotated data to increase the accuracy of the models. In this work, we are interested in simple models that can potentially give good performance on datasets with no or few annotations. First, we propose new unsupervised baselines that leverage distributed word and sentence representations. Second, we compare the ability of our neural architectures to learn from few annotated examples in a weakly supervised scheme, and we demonstrate how these methods can benefit from pre-training on an external dataset. With an emphasis on reproducibility of results, we show that our simple methods can reach or approach state-of-the-art performance on four common AS datasets.
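The unsupervised baselines described above rely on distributed word and sentence representations. As a rough, hypothetical illustration of that general idea (not the authors' exact method), the Python sketch below ranks candidate answers by cosine similarity between mean word-vector representations of the question and each candidate; the tiny random embedding table is a stand-in for pretrained vectors such as word2vec or GloVe.

```python
# Minimal sketch of an unsupervised answer-selection baseline:
# rank candidate answers by cosine similarity between the question's
# and each answer's sentence vector (here, the mean of its word vectors).
# The toy embedding table below is a hypothetical stand-in; in practice
# you would load pretrained vectors (e.g., word2vec or GloVe).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["what", "is", "the", "capital", "of", "france", "paris",
         "berlin", "a", "city", "in", "germany"]
EMB = {w: rng.normal(size=50) for w in VOCAB}  # toy 50-d word vectors


def sentence_vector(sentence: str) -> np.ndarray:
    """Mean of the word vectors of all in-vocabulary tokens."""
    vecs = [EMB[t] for t in sentence.lower().split() if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)


def rank_answers(question: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Score each candidate answer by cosine similarity to the question."""
    q = sentence_vector(question)
    scored = []
    for c in candidates:
        a = sentence_vector(c)
        denom = np.linalg.norm(q) * np.linalg.norm(a)
        score = float(q @ a / denom) if denom else 0.0
        scored.append((c, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    ranking = rank_answers(
        "What is the capital of France",
        ["Paris is the capital of France", "Berlin is a city in Germany"],
    )
    for answer, score in ranking:
        print(f"{score:+.3f}  {answer}")
```

With real pretrained embeddings, the highest-scoring candidate would be the baseline's selected answer; note that no annotated question-answer pairs are needed at any point.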

Journal

Open Computer Science, de Gruyter

Published: Jan 1, 2019
