The Query-Focused Text Summarization (QFTS) task aims at building systems that generate a summary of the text document(s) based on a given query. A key challenge in addressing this task is the lack of large labeled datasets for training the summarization model. In this article, we address this challenge by exploring a series of domain adaptation techniques. Given the recent success of pre-trained transformer models in a wide range of natural language processing tasks, we utilize such models to generate abstractive summaries for the QFTS task in both single-document and multi-document scenarios. For domain adaptation, we apply a variety of techniques to pre-trained transformer-based summarization models, including transfer learning, weakly supervised learning, and distant supervision. Extensive experiments on six datasets show that our proposed approach is very effective at generating abstractive summaries for the QFTS task, setting new state-of-the-art results on several datasets across a set of automatic and human evaluation metrics.
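The abstract mentions distant supervision as one way to adapt generic summarizers to the query-focused setting when no query-focused labels exist. As a rough, hypothetical sketch (not the authors' actual pipeline — the scoring function and separator convention here are illustrative assumptions), one can build pseudo query-focused training targets from generic reference summaries by ranking summary sentences by lexical overlap with the query, and condition a seq2seq model by prepending the query to the input document:

```python
def overlap_score(query: str, sentence: str) -> float:
    """Fraction of query terms that appear in the sentence (toy relevance signal)."""
    q_terms = set(query.lower().split())
    s_terms = set(sentence.lower().split())
    return len(q_terms & s_terms) / len(q_terms) if q_terms else 0.0

def pseudo_target(query: str, summary_sentences: list, k: int = 2) -> list:
    """Keep the k sentences most relevant to the query, preserving their
    original order, as a weak query-focused training target."""
    ranked = sorted(summary_sentences,
                    key=lambda s: overlap_score(query, s),
                    reverse=True)
    selected = set(ranked[:k])
    return [s for s in summary_sentences if s in selected]

def format_input(query: str, document: str, sep: str = " </s> ") -> str:
    """One common way to condition a seq2seq summarizer on the query:
    prepend it to the document text with a separator token."""
    return query + sep + document
```

In a real setup, the pseudo targets would supervise fine-tuning of a pre-trained encoder-decoder summarizer, with overlap scoring replaced by a learned relevance model.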
Computational Linguistics – MIT Press
Published: Jun 9, 2022