Purpose
Previous knowledge base question answering (KBQA) models consider only the monolingual scenario and cannot be directly extended to the cross-lingual scenario, in which the language of the questions differs from that of the knowledge base (KB). Although a machine translation (MT) model can bridge the gap by translating questions into the language of the KB, noise in the translated questions can accumulate and sharply impair final performance. The authors therefore propose a method to improve the robustness of KBQA models in the cross-lingual scenario.

Design/methodology/approach
The authors propose a knowledge distillation-based robustness enhancement (KDRE) method. First, a monolingual model (the teacher) is trained on ground truth (GT) data. Then, to imitate practical noise, a noise-generating model injects two types of noise into the questions: general noise and translation-aware noise. Finally, the noisy questions are fed to the student model, which is jointly trained on the GT data and on distilled data derived from the teacher when it is fed the GT questions.

Findings
The experimental results demonstrate that KDRE improves the performance of models in the cross-lingual scenario and that every module of the KBQA model benefits from it. The knowledge distillation (KD) and the noise-generating model complement each other in boosting model robustness.

Originality/value
The authors are the first to extend KBQA models from the monolingual to the cross-lingual scenario, and the first to apply KD to KBQA to build robust cross-lingual models.
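The teacher–student training described above can be sketched in a few lines. This is a minimal, self-contained illustration of the general idea, not the authors' implementation: the loss combines hard cross-entropy on the GT label with soft cross-entropy against the teacher's temperature-scaled distribution, and "general noise" is imitated here by simple random token dropout (the function names, `alpha` and `temperature` values are illustrative assumptions).

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, gt_index, alpha=0.5, temperature=2.0):
    """Joint training objective (illustrative): hard cross-entropy on the
    ground-truth label plus soft cross-entropy against the teacher's
    distilled (temperature-smoothed) distribution."""
    hard_probs = softmax(student_logits)
    hard_loss = -math.log(hard_probs[gt_index])
    teacher_soft = softmax(teacher_logits, temperature)
    student_soft = softmax(student_logits, temperature)
    soft_loss = -sum(t * math.log(s) for t, s in zip(teacher_soft, student_soft))
    return alpha * hard_loss + (1 - alpha) * soft_loss

def inject_general_noise(tokens, drop_prob=0.1, rng=None):
    """Imitate 'general noise' by randomly dropping question tokens,
    a crude stand-in for MT errors (illustrative only)."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() >= drop_prob]
    return kept or tokens  # never return an empty question
```

In this sketch the teacher sees the clean GT question while the student sees the noised one, so the distillation term pulls the student's predictions on noisy input toward the teacher's predictions on clean input.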
Data Technologies and Applications – Emerald Publishing
Published: Oct 11, 2021
Keywords: Knowledge base question answering; Cross-lingual; Knowledge distillation; Noise-generating model; Robustness; Machine translation