[2019 NLPCC] How Question Generation can Help Question Answering over Knowledge Base

Sen Hu’s paper “How Question Generation can Help Question Answering over Knowledge Base” was accepted by NLPCC 2019.

The task of question generation (QG) is to generate a natural language question for a given answer, while question answering (QA) is the reverse task: finding the proper answer to a given question. In KBQA, an answer can be regarded as a fact from the knowledge base containing a predicate and two entities. Training an effective KBQA system requires a large amount of labeled data, which is hard to acquire, and even a trained KBQA system performs poorly on questions whose predicates were unseen during training.
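As a minimal illustration of this setup (the triple and question below are hypothetical Freebase-style examples, not taken from the paper), a KB fact can be represented as a (subject, predicate, object) triple, with QA and QG acting as inverse mappings between questions and facts:

```python
# A KB fact is a triple: (subject entity, predicate, object entity).
# Illustrative Freebase-style naming; not tied to the paper's code.
fact = ("Barack Obama", "people.person.place_of_birth", "Honolulu")

# QA maps a question to the fact that answers it; QG maps a fact back
# to a natural-language question -- the two tasks are mutual inverses.
qa_pairs = {"where was barack obama born?": fact}  # question -> fact
qg_pairs = {fact: "where was barack obama born?"}  # fact -> question

# Generating a question from the fact and answering that question
# round-trips back to the same fact.
assert qa_pairs[qg_pairs[fact]] == fact
```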

To address these challenges, the paper proposes a unified framework that combines QG and QA with the help of a knowledge base and a text corpus. The QA and QG models are first trained jointly on the gold dataset; the QA model is then fine-tuned on a supplemental dataset constructed by the QG model with the help of text evidence. Experiments on two datasets, SimpleQuestions and WebQSP, with the Freebase knowledge base show that the framework improves KBQA performance and performs comparably with or even better than the state of the art.
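The pipeline above can be sketched as follows. This is a toy stand-in, not the authors' implementation: the "QA model" simply memorizes (question, fact) pairs, the "QG model" fills a question template from the predicate name, and the evidence filter just checks that a corpus sentence mentions both entities; all class and function names are assumptions for illustration.

```python
from collections import namedtuple

Fact = namedtuple("Fact", "subject predicate obj")

class ToyQA:
    """Toy QA model: memorizes question -> fact pairs."""
    def __init__(self):
        self.memory = {}
    def update(self, question, fact):
        self.memory[question] = fact
    def answer(self, question):
        return self.memory.get(question)

class ToyQG:
    """Toy QG model: templated question from the predicate's last segment."""
    def generate(self, fact):
        rel = fact.predicate.split(".")[-1].replace("_", " ")
        return f"what is the {rel} of {fact.subject}?"

def has_text_evidence(fact, corpus):
    # Keep a generated pair only if some corpus sentence mentions
    # both entities of the fact (a crude stand-in for text evidence).
    return any(fact.subject in s and fact.obj in s for s in corpus)

def train_framework(gold_data, kb_facts, corpus):
    qa, qg = ToyQA(), ToyQG()
    # Step 1: joint training on the gold dataset
    # (here reduced to the QA model memorizing the gold pairs).
    for question, fact in gold_data:
        qa.update(question, fact)
    # Step 2: QG builds a supplemental dataset for predicates unseen
    # in the gold data, filtered by text evidence.
    seen = {fact.predicate for _, fact in gold_data}
    for fact in kb_facts:
        if fact.predicate not in seen and has_text_evidence(fact, corpus):
            # Step 3: fine-tune QA on the supplemental pairs.
            qa.update(qg.generate(fact), fact)
    return qa

gold = [("where was barack obama born?",
         Fact("Barack Obama", "people.person.place_of_birth", "Honolulu"))]
kb = [Fact("Honolulu", "location.location.containedby", "Hawaii")]
corpus = ["Honolulu is a city contained by the state of Hawaii."]
qa = train_framework(gold, kb, corpus)
```

After training, the toy QA model can answer a question about the predicate `location.location.containedby`, which never appeared in the gold data, because the QG model generated a supplemental pair for it.

```python
qa.answer("what is the containedby of Honolulu?")
# -> Fact(subject='Honolulu', predicate='location.location.containedby', obj='Hawaii')
```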