Abstract
Answer selection is a key subtask of question answering (QA), and its performance underpins the development of QA systems. The dynamic word vectors produced by a parameter-frozen BERT model suffer from a lack of sentence-level semantic features and an absence of word-level interaction between question and answer. Multilayer perceptrons (MLPs) offer several advantages: they support deep feature mining at low computational cost. Building on dynamic text vectors, this paper proposes an answer selection model based on MLPs and semantic matrices, in which the MLP reconstructs the sentence-level semantic dimension of the text vectors, while semantic matrices generated by different computation methods mine different kinds of textual feature information. An MLP combined with a semantic understanding matrix generated by a linear model forms a semantic understanding module, which mines the sentence-level semantic features of the question sentence and the answer sentence separately; an MLP combined with a semantic interaction matrix generated by a bidirectional attention mechanism forms a semantic interaction module, which builds word-level interaction relationships between question-answer pairs. Experimental results show that the proposed model achieves MAP and MRR of 0.789 and 0.806, respectively, on the WikiQA dataset, a consistent performance improvement over the baseline models, and MAP and MRR of 0.903 and 0.911, respectively, on the SelQA dataset, which is also a strong result.
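The two modules described in the abstract can be sketched minimally. The NumPy fragment below is an illustrative assumption, not the authors' implementation: it shows (a) a small MLP that re-projects a frozen sentence vector (the "semantic dimension reconstruction") and (b) a bidirectional-attention interaction matrix between question and answer token vectors; all function names, dimensions, and the dot-product similarity are hypothetical choices for illustration.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU: re-projects (reconstructs) the
    # semantic dimension of a frozen text vector.
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

def softmax(z, axis):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_attention(Q, A):
    # Q: (m, d) question token vectors; A: (n, d) answer token vectors.
    S = Q @ A.T                        # (m, n) word-level interaction matrix
    q2a = softmax(S, axis=1) @ A       # answer-aware question tokens, (m, d)
    a2q = softmax(S, axis=0).T @ Q     # question-aware answer tokens, (n, d)
    return S, q2a, a2q

rng = np.random.default_rng(0)
d = 8
Q = rng.normal(size=(5, d))            # 5 question tokens (toy data)
A = rng.normal(size=(7, d))            # 7 answer tokens (toy data)
S, q2a, a2q = bidirectional_attention(Q, A)

# Hypothetical MLP weights; re-project a mean-pooled sentence vector.
w1, b1 = rng.normal(size=(d, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, d)), np.zeros(d)
sent = mlp(Q.mean(axis=0, keepdims=True), w1, b1, w2, b2)  # (1, d)
```

The dot-product similarity and ReLU MLP stand in for whatever linear model and attention formulation the paper actually uses; the point is only the data flow: one matrix per question-answer pair for word-level interaction, one MLP pass per sentence for sentence-level features.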
Authors
LUO Liang, CHENG Chunling, LIU Qian, GUI Yaocheng
(School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; School of Modern Posts, Nanjing University of Posts and Telecommunications, Nanjing 210023, China)
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2023, Issue 5, pp. 270-276 (7 pages in total)
Funding
Jiangsu Provincial "Shuangchuang" (Entrepreneurship and Innovation) Doctoral Program (JSSCBS20210507);
Scientific Research Start-up Fund for Introduced Talents of Nanjing University of Posts and Telecommunications (NY220176).