Abstract
To address the difficulty of capturing both the correlations among information extraction tasks and the contextual information of doctor-patient dialogues during electronic medical record construction, this paper proposes a joint information extraction method guided by Transformer-based interaction, called CT-JIE (collaborative Transformer for joint information extraction). First, the method uses a sliding window combined with a Bi-LSTM to acquire historical information from the dialogue, and employs a label-aware module to capture label-related information in the dialogue context. Second, a global attention module improves the model's contextual awareness of symptom entities and their status. Finally, an interactive guidance module explicitly models the interactions among the three tasks of intent recognition, slot filling, and status recognition, so as to capture the complex contexts and relationships across tasks. Experiments show that the method outperforms baseline and ablation models on both the IMCS21 and CMDD datasets, demonstrating strong generalization ability and performance advantages on joint information extraction tasks.
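To make the pipeline described in the abstract concrete, the following is a minimal sketch of how the three tasks (intent recognition, slot filling, and status recognition) can share a Bi-LSTM dialogue-history encoder and an attention-based interaction layer. It assumes a PyTorch implementation; the class name JointExtractionSketch, all dimensions, and the single Transformer layer standing in for the label-aware, global attention, and interactive guidance modules are illustrative assumptions, not the authors' CT-JIE code.

import torch
import torch.nn as nn

class JointExtractionSketch(nn.Module):
    """Illustrative three-task joint extraction skeleton (not the CT-JIE implementation)."""
    def __init__(self, vocab_size, hidden=256, n_intents=16, n_slots=40, n_status=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Bi-LSTM over a sliding window of dialogue history (utterances concatenated).
        self.history_encoder = nn.LSTM(hidden, hidden // 2, batch_first=True,
                                       bidirectional=True)
        # A generic Transformer layer standing in for the attention/interaction modules.
        self.interaction = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                                      batch_first=True)
        # Task heads: utterance-level intent, token-level slots and symptom status.
        self.intent_head = nn.Linear(hidden, n_intents)
        self.slot_head = nn.Linear(hidden, n_slots)
        self.status_head = nn.Linear(hidden, n_status)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        h, _ = self.history_encoder(x)          # contextualize tokens with the Bi-LSTM
        h = self.interaction(h)                 # shared attention over the window
        intent_logits = self.intent_head(h.mean(dim=1))  # pooled utterance-level intent
        slot_logits = self.slot_head(h)                   # per-token slot labels
        status_logits = self.status_head(h)                # per-token symptom status
        return intent_logits, slot_logits, status_logits

In CT-JIE itself, the cross-task dependencies are modeled explicitly by the interactive guidance module rather than by the generic shared Transformer layer used in this sketch.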
Authors
Lin Zhizhong; Wang Huazhen (School of Computer Science & Technology, Huaqiao University, Xiamen, Fujian 361000, China)
Source
Application Research of Computers (《计算机应用研究》), 2024, No. 8, pp. 2315-2321 (7 pages)
Indexed in: CSCD; Peking University Core Journals (北大核心)
Funding
Joint Fund of the Ministry of Education for Equipment Pre-research (8091B022150)
Fujian Provincial Social Science Foundation Basic Research Project (FJ2021B110)
Xiamen Major Science and Technology Program (3502Z20221021)
Xiamen General Science and Technology Program (3502Z20226037)
Keywords
joint information extraction
medical dialogues
electronic medical record
multi-task learning