CS Weekly Seminar Series
January 26, 2022

Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs

Yeganeh Nasiri, a BYU Computer Science graduate student


Large transformer-based language models have achieved incredible success at tasks that require narrative comprehension, including story completion, answering questions about stories, and generating stories ex nihilo. However, due to the limitations of finite context windows, these language models struggle to produce or understand stories longer than several thousand tokens. To mitigate the document length limitations that come with finite context windows, we introduce a novel architecture that augments story processing with an external dynamic knowledge graph. In contrast to static commonsense knowledge graphs, which hold information about the real world, these dynamic knowledge graphs reflect facts extracted from the story being processed. Our architecture uses these knowledge graphs to create information-rich prompts that better facilitate story comprehension than prompts composed only of story text. We apply our architecture to the tasks of question answering and story completion. To complement this line of research, we introduce two long-form question answering tasks, LF-SQuAD and LF-QUOREF, in which the document length exceeds the size of the language model's context window, and we introduce a story completion evaluation method that bypasses the stochastic nature of language model generation. We demonstrate broad improvement over typical prompt formulation methods for both question answering and story completion using GPT-2, GPT-3, and XLNet.
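To give a flavor of the idea, the sketch below shows one possible way a dynamic knowledge graph of story facts could feed an information-rich prompt. This is an illustrative assumption, not the presenter's implementation: the triple store, the keyword-based relevance heuristic, and the prompt template are all hypothetical stand-ins.

```python
# Hypothetical sketch (NOT the authors' code): a dynamic knowledge graph of
# (subject, relation, object) triples extracted from a story, used to build
# a prompt that carries story-wide facts past the context-window limit.
from collections import defaultdict


class StoryKnowledgeGraph:
    """Stores (subject, relation, object) triples extracted from a story."""

    def __init__(self):
        self.triples = []
        self.by_entity = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        triple = (subject, relation, obj)
        self.triples.append(triple)
        self.by_entity[subject].append(triple)
        self.by_entity[obj].append(triple)

    def relevant_facts(self, question, limit=5):
        # Naive relevance: keep triples whose entities appear in the question.
        words = set(question.lower().split())
        hits = [t for t in self.triples
                if t[0].lower() in words or t[2].lower() in words]
        return hits[:limit]


def build_prompt(graph, question, recent_text):
    # Prepend retrieved facts so the model sees earlier story context even
    # when the full story no longer fits in its context window.
    facts = "\n".join(f"{s} {r} {o}." for s, r, o in graph.relevant_facts(question))
    return (f"Known facts:\n{facts}\n\n"
            f"Story excerpt:\n{recent_text}\n\n"
            f"Question: {question}\nAnswer:")


# Example: two facts from earlier in a (made-up) story inform the prompt.
kg = StoryKnowledgeGraph()
kg.add_fact("Alice", "lives in", "London")
kg.add_fact("Alice", "owns", "dog")
prompt = build_prompt(kg, "where does alice live", "Alice walked her dog.")
```

In the talk's setting the facts would presumably be extracted automatically as the story is processed, and the retrieval step would be far more sophisticated than the keyword match used here.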

Thursday, January 27th at 11am in 1170 TMCB

Donuts served before and after presentation!