Abstract

Addressing Answer Selection (AS) tasks with complex neural networks typically requires a large amount of annotated data to reach high accuracy. In this work, we are interested in simple models that can potentially perform well on datasets with no or few annotations. First, we propose new unsupervised baselines that leverage distributed word and sentence representations. Second, we compare the ability of our neural architectures to learn from a few annotated examples in a weakly supervised scheme, and we demonstrate how these methods benefit from pre-training on an external dataset. With an emphasis on reproducibility of results, we show that our simple methods can reach or approach state-of-the-art performance on four common AS datasets.
Open Computer Science – de Gruyter
Published: Jan 1, 2019
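The abstract does not specify the authors' exact scoring function, but a minimal sketch of an unsupervised AS baseline built on distributed representations could look like the following: represent the question and each candidate answer as the average of pre-trained word vectors, then rank candidates by cosine similarity. The `word_vectors` lookup table and the averaging scheme are illustrative assumptions, not the paper's confirmed method.

```python
# Sketch of an unsupervised Answer Selection baseline (assumed method,
# not necessarily the authors'): rank candidate answers by the cosine
# similarity between averaged pre-trained word vectors.

import numpy as np

def sentence_embedding(tokens, word_vectors, dim=300):
    """Average the vectors of in-vocabulary tokens; zero vector if none match."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def rank_answers(question, candidates, word_vectors):
    """Return (similarity, answer) pairs sorted from best to worst match."""
    q = sentence_embedding(question.lower().split(), word_vectors)
    scored = []
    for cand in candidates:
        c = sentence_embedding(cand.lower().split(), word_vectors)
        denom = np.linalg.norm(q) * np.linalg.norm(c)
        sim = float(q @ c / denom) if denom else 0.0
        scored.append((sim, cand))
    return sorted(scored, reverse=True)
```

Here `word_vectors` would be any dict-like embedding table (e.g. loaded GloVe or word2vec vectors). Averaging could be swapped for a stronger sentence encoder without changing the ranking logic, which is what makes this kind of baseline attractive when no annotated data is available.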