
An Answer Selection Method Integrating Question Classification and RoBERTa (Cited by: 1)
Abstract: [Objective] To address the insufficient use of semantic interaction information between question and answer sentences in existing pre-trained models and the unstable accuracy of these models during fine-tuning, this paper proposes an answer selection method that integrates question classification with the RoBERTa model. [Methods] We propose an EAT (expected answer type) annotation scheme that preserves the original entity semantics and combine it with a multi-sentence joint-modeling RoBERTa architecture to construct the answer selection model. In addition, a two-stage fine-tuning procedure is used for transfer learning, improving the accuracy stability of the fine-tuning process. [Results] On the WikiQA dataset, the proposed method reaches 0.843, 0.896, and 0.903 in P@1, MAP, and MRR, respectively; on the TrecQA dataset, the corresponding scores are 0.955, 0.944, and 0.974. The method also improves the stability of the model's accuracy convergence. [Limitations] For complex questions of the "abbreviation (ABBR)" and "description (DESC)" types, the named entity recognition tool cannot extract the key entities from the answer sentences, so these two categories cannot be used to enhance the modeling of semantic interaction between question and answer sentences. [Conclusions] Introducing the entity-semantics-preserving question classification information and the transfer-adaptive strategy into the multi-sentence modeling RoBERTa model effectively improves model performance and robustness.
Authors: He Li, Liu Lanqing, Liu Jie, Duan Jianyong, Wang Hao (School of Information Science and Technology, North China University of Technology, Beijing 100144, China; China CNONIX National Standard Application and Promotion Lab, Beijing 100144, China; Research Center for Language Intelligence of China, Capital Normal University, Beijing 100089, China)
Source: Data Analysis and Knowledge Discovery (《数据分析与知识发现》), 2024, No. 8, pp. 157-167 (11 pages). Indexed by EI, CSSCI, CSCD, and the Peking University Core Journal List.
Funding: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (Grant No. 2020AAA0109703); Beijing Urban Governance Research Base, North China University of Technology (Grant No. 2023CSZL16).
Keywords: Answer Sentence Selection; Expected Answer Type; Question Classification; RoBERTa; Fine-Tuning; Transfer Learning
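
Code sketch (illustrative): The abstract describes the method only at a high level. The minimal Python sketch below shows how an expected-answer-type (EAT) tag could be combined with a RoBERTa cross-encoder to score question-answer pairs using the Hugging Face transformers library. The tag string "[HUM]" and its placement in front of the question are illustrative assumptions rather than the paper's exact EAT annotation scheme (which preserves the original entity semantics), and the classification head is randomly initialized until the model is fine-tuned on an answer selection dataset such as WikiQA or TrecQA.

import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load a generic RoBERTa cross-encoder; the 2-way classification head is
# untrained here and would need fine-tuning (the paper uses a two-stage
# transfer procedure, which this sketch does not reproduce).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def score(question: str, answer: str, eat_tag: str) -> float:
    """Return P(answer is correct) for a question augmented with an EAT tag."""
    tagged_question = f"{eat_tag} {question}"  # hypothetical tag placement
    inputs = tokenizer(tagged_question, answer,
                       truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(score("Who wrote Hamlet?",
            "Hamlet was written by William Shakespeare.", "[HUM]"))

The reported P@1, MAP, and MRR metrics can be computed from binary relevance labels over each question's ranked candidate list; the sketch below shows one standard formulation (the data at the bottom is a toy example, not the paper's results).

from typing import List

def p_at_1(ranked: List[int]) -> float:
    # 1 if the top-ranked candidate is a correct answer, else 0.
    return float(ranked[0] == 1)

def average_precision(ranked: List[int]) -> float:
    hits, total = 0, 0.0
    for i, rel in enumerate(ranked, start=1):
        if rel:
            hits += 1
            total += hits / i      # precision at each correct answer's rank
    return total / hits if hits else 0.0

def reciprocal_rank(ranked: List[int]) -> float:
    for i, rel in enumerate(ranked, start=1):
        if rel:
            return 1.0 / i         # inverse rank of the first correct answer
    return 0.0

# Toy example: two questions, each with a ranked list of 0/1 relevance labels.
questions = [[0, 1, 0, 1], [1, 0, 0]]
print(sum(map(p_at_1, questions)) / len(questions))             # P@1
print(sum(map(average_precision, questions)) / len(questions))  # MAP
print(sum(map(reciprocal_rank, questions)) / len(questions))    # MRR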