Funding: The National Natural Science Foundation of China (No. 61502095).
Abstract: Aiming at the relation linking task for question answering over knowledge bases, especially the multi-relation linking task for complex questions, a relation linking approach based on a multi-attention recurrent neural network (RNN) model is proposed, which works for both simple and complex questions. First, vector representations of questions are learned by a bidirectional long short-term memory (Bi-LSTM) model at the word and character levels, and named entities in questions are labeled by a conditional random field (CRF) model. Candidate entities are generated from a dictionary, disambiguated by predefined rules, and the named entities mentioned in questions are linked to entities in the knowledge base. Next, questions are classified as simple or complex by a machine learning method. Starting from the identified entities, one-hop relations in the knowledge base are collected as candidate relations for simple questions, and two-hop relations are collected for complex questions. Finally, the multi-attention Bi-LSTM model encodes the questions and candidate relations, compares their similarity, and returns the candidate relation with the highest similarity as the relation linking result. Notably, a Bi-LSTM model with one attention is adopted for simple questions, and a Bi-LSTM model with two attentions is adopted for complex questions. The experimental results show that, built on the effective entity linking method, the Bi-LSTM model with the attention mechanism improves relation linking for both simple and complex questions, outperforming existing relation linking methods based on graph algorithms or linguistic understanding.
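The final step above (attention-pool the question and each candidate relation, then return the most similar candidate) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the attention query vectors `w_q` and `w_r` and the pre-computed Bi-LSTM hidden states are assumed inputs, and cosine similarity stands in for whatever scoring function the paper uses.

```python
import numpy as np

def attention_pool(H, w):
    """Attention-pool a sequence of hidden states H (T x d) with query vector w (d,)."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H  # weighted sum over time steps, shape (d,)

def link_relation(question_states, candidate_states, w_q, w_r):
    """Return the index of the candidate relation most similar to the question."""
    q = attention_pool(question_states, w_q)
    best, best_sim = -1, -np.inf
    for i, R in enumerate(candidate_states):
        r = attention_pool(R, w_r)
        sim = q @ r / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-9)
        if sim > best_sim:
            best, best_sim = i, sim
    return best
```

In the two-attention variant for complex questions, one would apply `attention_pool` twice with separate query vectors, once per hop of the candidate relation path.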
Abstract: Knowledge base question answering (KBQA) is a challenging and popular research direction. The main difficulty of multi-hop KBQA is the mismatch between unstructured natural language questions and structured reasoning paths in the knowledge base. Graph-retrieval-based multi-hop KBQA models capture the topological structure of the graph well but ignore the textual information carried by its nodes and edges. To fully exploit the textual information of knowledge base triples, textual forms of the triples are constructed, and three non-graph-retrieval feature enhancement models, RBERT, CBERT, and GBERT, are proposed; they enhance features with a feed-forward neural network, a deep pyramid convolutional network, and a graph attention network, respectively. All three models significantly improve feature representation and question answering accuracy: RBERT has the simplest structure, CBERT trains the fastest, and GBERT performs best. Comparative experiments on the MetaQA, WebQSP, and CWQ datasets show that the three models clearly outperform current mainstream models, as well as other improved BERT models, on both Hits@1 and F1.
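The two ingredients described above, rendering a triple as text and enhancing an encoder feature with a feed-forward network (the RBERT-style variant), can be sketched as below. The sentence template and the residual feed-forward layer are illustrative assumptions; the actual templates and architecture are defined in the paper.

```python
import numpy as np

def triple_to_text(head, relation, tail):
    """Render a KB triple as a natural-language-like sentence (hypothetical template)."""
    return f"{head} {relation.replace('_', ' ')} {tail}."

def ffn_enhance(x, W1, b1, W2, b2):
    """Feed-forward feature enhancement with a residual connection (illustrative).

    x: encoder feature vector (d,); W1: (d, h); W2: (h, d).
    """
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return x + h @ W2 + b2            # residual add back onto the input feature
```

The enhanced vector would then feed the answer-scoring head in place of the raw BERT feature.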
Abstract: Question answering (QA) over knowledge base (KB) aims to provide a structured answer from a knowledge base to a natural language question. In this task, a key step is how to represent and understand the natural language query. In this paper, we propose to use tree-structured neural networks constructed from the constituency tree to model natural language queries. We identify an interesting observation in the constituency tree: different constituents have their own semantic characteristics and might be suited to different subtasks in a QA system. Based on this observation, we incorporate the type information as an auxiliary supervision signal to improve QA performance. We call our approach type-aware QA. We jointly characterize both the answer and its answer type in a unified neural network model with an attention mechanism. Instead of simply using the root representation, we represent the query by combining the representations of different constituents using task-specific attention weights. Extensive experiments on public datasets demonstrate the effectiveness of the proposed model. More specifically, the learned attention weights are quite useful for understanding the query, and the representations produced for intermediate nodes can be used to analyze the effectiveness of components in a QA system.
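The key idea above, combining constituent representations with task-specific attention weights rather than using only the root node, can be sketched as follows. The task query vector and the pre-computed constituent embeddings are assumed inputs; this is a schematic of the attention combination, not the paper's full model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_representation(constituents, task_query):
    """Combine constituent vectors (n x d) into one query vector using
    task-specific attention weights over the constituency-tree nodes."""
    weights = softmax(constituents @ task_query)
    return weights @ constituents, weights
```

A separate `task_query` per subtask (e.g. answer prediction vs. answer-type prediction) yields different attention distributions over the same tree.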
Abstract: Current knowledge base question answering (KBQA) techniques cannot effectively handle complex questions and struggle to understand their complex semantics. Decomposing a complex question and then integrating the results is an effective way to parse such semantics. However, entity misidentification or missing topic entities often occur during question decomposition, so the resulting sub-questions do not match the original complex question. To address this problem, a question-decomposition semantic parsing method that incorporates fact text is proposed. Complex questions are processed in three stages: decomposition, extraction, and parsing. First, the complex question is decomposed into simple sub-questions; then key information is extracted from each sub-question; finally, structured query statements are generated. In addition, a fact text corpus is constructed by converting triples into sentences described in natural language, and an attention mechanism is used to acquire richer knowledge. Experiments on the ComplexWebQuestions dataset show that the proposed model outperforms other baseline models.
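The three-stage decompose-extract-parse pipeline described above can be outlined as a simple skeleton. The stage functions here are placeholders passed in by the caller, not the models from the paper, so this shows only the control flow under that assumption.

```python
def answer_complex_question(question, decompose, extract, parse, execute):
    """Decompose-extract-parse pipeline skeleton (stage functions are assumed).

    decompose: complex question -> list of simple sub-questions
    extract:   sub-question -> key information
    parse:     key information -> structured query
    execute:   structured query -> answer from the knowledge base
    """
    results = []
    for sub_question in decompose(question):
        key_info = extract(sub_question)
        query = parse(key_info)
        results.append(execute(query))
    return results
```

In the paper's setting, the per-stage results would additionally be enriched by attention over the fact text corpus before the final queries are executed.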