
Knowledge Augmentation on Traditional Chinese Medicine Language Model
Abstract: Recently, large language models (LLMs) have made significant achievements in various fields. However, due to the lack of specialized knowledge and the gap between modern medicine and traditional Chinese medicine (TCM), deploying LLMs in the TCM domain remains a challenge. Existing knowledge augmentation methods fail to preserve the inherent structure of TCM prescriptions. To address these problems, a new knowledge augmentation method is proposed, consisting of three parts: model training, knowledge graph construction, and knowledge augmentation. In the training phase, a TCM language model is obtained by training a base model on a TCM corpus in two stages, pre-training followed by fine-tuning. In the knowledge graph construction phase, a prescription knowledge graph is built from a cleaned dataset of nearly 100,000 classical TCM prescriptions and prescriptions from ancient books. In the knowledge augmentation phase, outputs are generated from computation over the knowledge graph: the retrieved domain knowledge is combined with the graph structure of the retrieval results, so that the structural properties of TCM prescriptions are preserved. For the prescription compatibility task, a set of task-specific evaluation criteria is proposed, including subjective and objective indicators, to assess model performance. Experiments show that the method substantially outperforms baseline models on both subjective and objective indicators, improving BLEU-1 by up to 0.09 and ROUGE-1 by up to 0.21. An ablation study shows that knowledge augmentation is essential to performance on this task: without it, BLEU-1 drops by about 37% compared with the augmented model.
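The knowledge-augmentation phase described above (retrieve prescription knowledge from the graph, then generate while keeping the prescription structure explicit) can be illustrated with a minimal sketch. The toy graph, role names, and `build_augmented_prompt` function below are hypothetical stand-ins, not the authors' pipeline; they only show the general retrieval-then-prompt pattern under the assumption that prescriptions are stored with their herb roles.

```python
# Hedged sketch of graph-based knowledge augmentation for prompting.
# TOY_GRAPH and build_augmented_prompt are illustrative assumptions,
# not the paper's actual schema or code.

# Toy stand-in for the prescription knowledge graph: formula -> role -> herbs.
# The sovereign/minister/assistant/courier (君/臣/佐/使) roles are the kind of
# prescription structure the method aims to preserve.
TOY_GRAPH = {
    "麻黄汤": {"君": ["麻黄"], "臣": ["桂枝"], "佐": ["杏仁"], "使": ["甘草"]},
}

def build_augmented_prompt(formula: str, question: str) -> str:
    """Retrieve a formula's role-structured herbs from the toy graph and
    prepend them to the user question as structured context for the LLM."""
    roles = TOY_GRAPH.get(formula, {})
    context = "; ".join(
        f"{role}: {', '.join(herbs)}" for role, herbs in roles.items()
    )
    return f"[Retrieved knowledge] {formula} — {context}\n[Question] {question}"

print(build_augmented_prompt("麻黄汤", "如何加减治疗风寒咳嗽?"))
```

The point of the pattern is that the retrieved context keeps the prescription's internal roles explicit, rather than flattening the formula into an unstructured herb list.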
Authors: JI Xiangyu, WANG Xin, ZHANG Heyi, MENG Zhaopeng, ZHANG Junhua, ZHUANG Pengwei, JIA Yongzhe, XU Dawei (College of Intelligence and Computing, Tianjin University, Tianjin 300350, China; Tianjin University of Traditional Chinese Medicine, Tianjin 300193, China; National Clinical Research Center for Chinese Medicine Acupuncture and Moxibustion, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin 300193, China; Tiandazhitu (Tianjin) Technology Co., Ltd., Tianjin 300192, China)
Source: Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》, CSCD, Peking University Core), 2024, No. 10, pp. 2616-2629 (14 pages)
Funding: National Natural Science Foundation of China General Program (61972275)
Keywords: large language model (LLM); traditional Chinese medicine; prescription optimization; retrieval-augmented generation
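The BLEU-1 and ROUGE-1 figures reported in the abstract are standard n-gram overlap metrics. As a hedged illustration (not the authors' evaluation code), sentence-level BLEU-1 is clipped unigram precision scaled by a brevity penalty:

```python
import math
from collections import Counter

def bleu_1(candidate: str, reference: str) -> float:
    """Sentence-level BLEU-1: clipped unigram precision times a brevity
    penalty. Simplified single-reference sketch; real evaluations (e.g.
    sacreBLEU) aggregate at corpus level and handle smoothing."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Each candidate token counts only up to its frequency in the reference.
    clipped = sum(min(n, ref_counts[tok]) for tok, n in cand_counts.items())
    precision = clipped / len(cand) if cand else 0.0
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * precision

# Toy herb sequences (space-separated tokens): 3 of 4 unigrams match.
print(bleu_1("麻黄 桂枝 杏仁 甘草", "麻黄 桂枝 甘草 生姜"))  # → 0.75
```

ROUGE-1 is the recall-oriented counterpart, dividing the clipped overlap by the reference length instead of the candidate length.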