With the advent of the information age, searching through large volumes of relevant material to find the information one needs has become increasingly cumbersome. Text reasoning is a basic and important component of multi-hop question answering tasks. This paper studies the completeness, uniformity, and speed of computational-intelligence-based reasoning over data. Multi-hop reasoning arose in response to these needs, but it is still in its infancy: it falls well short of what multi-hop question answering demands in terms of search breadth, process complexity, response speed, and comprehensiveness of information. This paper compares traditional information retrieval with computational intelligence on text, using corpus relevancy and other computational measures. The study finds that, on multi-hop question-answering reasoning, traditional retrieval methods trail computational intelligence by roughly 35% on the reasoning metrics studied, indicating that computational intelligence is more complete, more unified, and faster than traditional retrieval. The paper also introduces the relevant points of text reasoning, describes the workflow of a multi-hop question answering system, and closes with discussion and outlook.
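The abstract above does not specify how corpus relevancy is computed; a common baseline for that kind of text comparison is TF-IDF weighting with cosine similarity. The sketch below illustrates the idea on a toy corpus (the documents and query are illustrative, not from the paper):

```python
import math
from collections import Counter

docs = [
    "multi hop reasoning chains facts across documents",
    "traditional retrieval ranks documents by keyword overlap",
    "question answering needs reasoning over retrieved text",
]
query = "reasoning over documents"

def tfidf_vectors(texts):
    """Plain TF-IDF: term frequency times inverse document frequency."""
    tokenized = [t.split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(texts)
    return [{w: c / len(toks) * math.log(n / df[w])
             for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Vectorize query together with the corpus so IDF covers all texts,
# then rank documents by relevancy to the query.
vecs = tfidf_vectors(docs + [query])
doc_vecs, q_vec = vecs[:-1], vecs[-1]
scores = [cosine(q_vec, d) for d in doc_vecs]
best = max(range(len(docs)), key=scores.__getitem__)
print(docs[best])
```

A single-hop relevancy score like this is exactly what the abstract argues is insufficient for multi-hop questions, where evidence must be chained across documents.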
Aiming at the relation linking task for question answering over knowledge bases, especially the multi-relation linking task for complex questions, a relation linking approach based on the multi-attention recurrent neural network (RNN) model is proposed, which works for both simple and complex questions. First, the vector representations of questions are learned by the bidirectional long short-term memory (Bi-LSTM) model at the word and character levels, and named entities in questions are labeled by the conditional random field (CRF) model. Candidate entities are generated from a dictionary, disambiguated by predefined rules, and the named entities mentioned in questions are linked to entities in the knowledge base. Next, questions are classified as simple or complex by a machine learning method. Starting from the identified entities, one-hop relations in the knowledge base are collected as candidate relations for simple questions, and two-hop relations for complex questions. Finally, the multi-attention Bi-LSTM model is used to encode questions and candidate relations, compare their similarity, and return the candidate relation with the highest similarity as the result of relation linking. Notably, a Bi-LSTM model with one attention mechanism is adopted for simple questions, and one with two attention mechanisms for complex questions. The experimental results show that, built on an effective entity linking method, the Bi-LSTM model with the attention mechanism improves relation linking for both simple and complex questions, outperforming existing relation linking methods based on graph algorithms or linguistic understanding.
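The final ranking step described above (encode the question, attend over its tokens conditioned on a candidate relation, and return the most similar candidate) can be sketched in miniature. This is not the paper's architecture: hash-seeded toy embeddings stand in for Bi-LSTM hidden states, and the relation vocabulary below is hypothetical.

```python
import hashlib
import math

def embed(token, dim=8):
    """Deterministic toy embedding, a stand-in for the Bi-LSTM hidden
    state of a token (hash-seeded so the sketch is reproducible)."""
    h = hashlib.md5(token.encode()).digest()
    return [(h[i] - 128) / 128.0 for i in range(dim)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_pool(question_tokens, relation_vec):
    """Attend over question tokens conditioned on the candidate relation:
    softmax over dot-product scores, then a weighted sum of token vectors."""
    H = [embed(t) for t in question_tokens]
    scores = [dot(h, relation_vec) for h in H]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(relation_vec)
    return [sum(w * h[i] for w, h in zip(weights, H)) for i in range(dim)]

def rank_relations(question_tokens, candidates):
    """Score candidates by cosine similarity between the attention-pooled
    question vector and a word-averaged relation vector; best first."""
    out = []
    for rel in candidates:
        parts = [embed(w) for w in rel.split("_")]
        r = [sum(p[i] for p in parts) / len(parts) for i in range(len(parts[0]))]
        q = attention_pool(question_tokens, r)
        norm = math.sqrt(dot(q, q)) * math.sqrt(dot(r, r))
        out.append((rel, dot(q, r) / norm if norm else 0.0))
    return sorted(out, key=lambda x: -x[1])

ranking = rank_relations("where was the author born".split(),
                         ["place_of_birth", "spouse", "profession"])
```

For complex questions the paper applies a second attention pass over two-hop candidates; the pooling function above would simply be applied once per hop.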
Current knowledge base question answering (KBQA) techniques cannot handle complex questions effectively and struggle to understand their complex semantics. Decomposing a complex question and then integrating the results is an effective way to parse such semantics. However, during decomposition, entity-recognition errors or missing topic entities often arise, so the resulting sub-questions do not match the original complex question. To address this, a decomposition-based semantic parsing method that incorporates fact text is proposed. Complex questions are processed in three stages: decompose, extract, and parse. The complex question is first decomposed into simple sub-questions, key information is then extracted from the question, and finally a structured query is generated. In addition, a fact text base is constructed by converting knowledge-base triples into sentences described in natural language, and an attention mechanism is used to acquire richer knowledge. Experiments on the ComplexWebQuestions dataset show that the proposed model outperforms the baseline models.
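Two of the ideas above, answering a decomposed complex question hop by hop and converting triples into natural-language fact sentences, can be shown on a toy knowledge base. The triples and the sentence template are illustrative assumptions, not the paper's data:

```python
# Toy knowledge base of (subject, predicate, object) triples.
triples = [
    ("Star Wars", "created_by", "George Lucas"),
    ("George Lucas", "directed", "THX 1138"),
]

def triple_to_sentence(s, p, o):
    """Render a triple as a natural-language fact sentence
    (the 'fact text base' idea; the template is an assumption)."""
    return f"{s} {p.replace('_', ' ')} {o}."

def one_hop(subject=None, predicate=None, obj=None):
    """Match triples against a partial pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Decompose-then-integrate: "What films were directed by the person who
# created Star Wars?" splits into two sub-questions; the answer to the
# inner one is substituted into the outer one.
creator = one_hop("Star Wars", "created_by")[0][2]
films = [o for _, _, o in one_hop(creator, "directed")]
print(films)  # ['THX 1138']
```

If the first hop links the wrong entity, the substituted sub-question no longer matches the original question, which is exactly the failure mode the fact-text attention mechanism is meant to mitigate.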
Funding: The National Natural Science Foundation of China (No. 61502095).