Abstract
To address the problems that open source intelligence information extraction depends on multiple specialized models and is strongly limited in the attributes it can extract, a GLM-based large language model was adopted as the extraction tool, and extraction accuracy was improved through instruction fine-tuning and in-context learning. An automated instruction generation method was used to generalize the original questions and construct an SFT dataset. Unified multi-task fine-tuning was conducted to learn common extraction patterns, and prompts were expanded with automatic chain-of-thought to strengthen the model's reasoning ability. Experimental results show that, on open source intelligence named entity recognition, relation extraction, and event extraction tasks, the fine-tuned model meets the extraction requirements of different scenarios and achieves good extraction performance.
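The abstract outlines a pipeline of automated instruction generation, multi-task SFT data construction, and auto-CoT prompting. The following minimal Python sketch illustrates one way such a pipeline could be wired up; it is not the authors' code, and the template strings, the build_sft_record helper, the prompt layout, and the sft_ie.jsonl output schema are all illustrative assumptions.

# A minimal sketch (not the authors' implementation) of generalizing seed
# instructions for three IE tasks into an SFT dataset with an auto-CoT cue.
import json
import random

# Seed instruction templates per task; the paraphrase variants stand in for
# the paper's "automated instruction generation" step (assumed wording).
SEED_TEMPLATES = {
    "NER": [
        "Identify all named entities of types {schema} in the text.",
        "List every entity mentioned in the text, labelled with one of {schema}.",
    ],
    "RE": [
        "Extract all relation triples (head, relation, tail) of types {schema}.",
        "Find which of the relations {schema} hold between entities in the text.",
    ],
    "EE": [
        "Extract events of types {schema} with their trigger words and arguments.",
        "Detect every {schema} event in the text and fill in its argument roles.",
    ],
}

AUTO_COT_TRIGGER = "Let's think step by step."  # auto chain-of-thought cue


def build_sft_record(task: str, schema: str, text: str, answer: str) -> dict:
    """Compose one SFT record: a randomly paraphrased instruction plus an
    auto-CoT cue, the source text, and the gold extraction as the response."""
    instruction = random.choice(SEED_TEMPLATES[task]).format(schema=schema)
    prompt = f"{instruction}\n{AUTO_COT_TRIGGER}\nText: {text}"
    return {"task": task, "prompt": prompt, "response": answer}


if __name__ == "__main__":
    # Toy example: one record, written as JSONL for later fine-tuning.
    samples = [
        ("NER", "[person, organization, location]",
         "OSINT analyst Jane Doe joined Example Corp in Berlin.",
         "person: Jane Doe; organization: Example Corp; location: Berlin"),
    ]
    with open("sft_ie.jsonl", "w", encoding="utf-8") as f:
        for task, schema, text, answer in samples:
            record = build_sft_record(task, schema, text, answer)
            f.write(json.dumps(record, ensure_ascii=False) + "\n")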
Authors
赵勤博
王又辰
陈荣
宋颖毅
栾真
田夫兰
ZHAO Qin-bo; WANG You-chen; CHEN Rong; SONG Ying-yi; LUAN Zhen; TIAN Fu-lan (Institute 706, Second Academy of China Aerospace Science and Industry Corporation, Beijing 100854, China; Information Technology Center, General Office of Yunnan Provincial Committee of the Communist Party of China, Kunming 650228, China)
Source
《计算机工程与设计》
Peking University Core Journal (北大核心)
2024, No. 12, pp. 3772-3778 (7 pages)
Computer Engineering and Design
Keywords
open source intelligence
large language model
information extraction
automatic instruction generation
instruction tuning
in-context learning
automatic chain-of-thought