Capturing Greater Context for Question Generation
Author(s)
Tuan, Luu Anh; Shah, Darsh J. (Darsh Jaidip); Barzilay, Regina
Terms of use
Open Access Policy; Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Automatic question generation can benefit many applications, ranging from dialogue systems to reading comprehension. While questions are often asked about long documents, modeling such documents poses many challenges. Many existing techniques generate questions by effectively looking at one sentence at a time, producing questions that are easy to answer and do not reflect how humans ask questions. Our goal is to incorporate interactions across multiple sentences to generate realistic questions for long documents. To link broad document context to the target answer, we represent the relevant context via a multi-stage attention mechanism, which forms the foundation of a sequence-to-sequence model. We outperform state-of-the-art question generation methods on three question-answering datasets: SQuAD, MS MARCO, and NewsQA.
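For readers who want a concrete picture of the idea named in the abstract, the sketch below shows one plausible reading of a multi-stage attention, sequence-to-sequence question generator, assuming a PyTorch backbone: a first attention stage makes each document state answer-aware, and a second stage lets the decoder attend over those fused states while generating the question. All names (MultiStageAttentionQG, fuse, etc.) are hypothetical illustrations, not the authors' implementation; the paper itself details the actual mechanism.

```python
# Minimal illustrative sketch (assumption: PyTorch seq2seq backbone).
# Not the authors' code; a rough instance of "multi-stage attention".
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageAttentionQG(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional encoders over the full document and the answer span.
        self.doc_enc = nn.LSTM(emb_dim, hid_dim // 2, batch_first=True,
                               bidirectional=True)
        self.ans_enc = nn.LSTM(emb_dim, hid_dim // 2, batch_first=True,
                               bidirectional=True)
        self.fuse = nn.Linear(2 * hid_dim, hid_dim)
        self.decoder = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, doc, ans, question):
        doc_h, _ = self.doc_enc(self.embed(doc))   # (B, Ld, H)
        ans_h, _ = self.ans_enc(self.embed(ans))   # (B, La, H)

        # Stage 1: document-to-answer attention. Every document position
        # gathers answer information, linking broad document context to
        # the target answer rather than to a single source sentence.
        scores = torch.bmm(doc_h, ans_h.transpose(1, 2))       # (B, Ld, La)
        ans_ctx = torch.bmm(F.softmax(scores, dim=-1), ans_h)  # (B, Ld, H)
        fused = torch.tanh(self.fuse(torch.cat([doc_h, ans_ctx], dim=-1)))

        # Stage 2: decoder attention over the answer-aware document states
        # while generating the question token by token (teacher forcing).
        B, H = fused.size(0), fused.size(2)
        h = fused.new_zeros(B, H)
        c = fused.new_zeros(B, H)
        logits = []
        for t in range(question.size(1)):
            emb = self.embed(question[:, t])
            attn = F.softmax(
                torch.bmm(fused, h.unsqueeze(2)).squeeze(2), dim=-1)  # (B, Ld)
            ctx = torch.bmm(attn.unsqueeze(1), fused).squeeze(1)      # (B, H)
            h, c = self.decoder(torch.cat([emb, ctx], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (B, Lq, vocab_size)
```

A forward pass takes token-id tensors for the document, answer, and (gold) question, e.g. `MultiStageAttentionQG(vocab_size=30000)(doc_ids, ans_ids, q_ids)`, and returns per-step vocabulary logits suitable for a cross-entropy training loss.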
Date issued
2020-04
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Citation
Tuan, Luu Anh et al. "Capturing Greater Context for Question Generation." Proceedings of the AAAI Conference on Artificial Intelligence 34, 5 (April 2020): 9065-9072. © 2020 Association for the Advancement of Artificial Intelligence.
Version: Original manuscript
ISSN
2374-3468
2159-5399