Abstract
This paper presents an in-depth experimental study of fine-tuning techniques for large language models (LLMs) applied to anti-fraud information identification. Three LLM base models of different scales are selected, and two advanced fine-tuning techniques, LoRA and P-Tuning v2, are employed to adapt them to the specific task of anti-fraud information identification. Experimental evaluations across multiple dimensions show that the fine-tuning strategies not only significantly improve the models' performance on anti-fraud information identification, but also preserve the models' general capabilities to a certain extent. In addition, the paper explores the few-shot learning ability of LLMs and analyzes the resource consumption of the different fine-tuning strategies.
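As a rough illustration of the kind of setup the abstract describes, the sketch below attaches either a LoRA adapter or a P-Tuning v2-style deep prefix to an LLM with a sequence-classification head, using the Hugging Face transformers and peft libraries. The checkpoint path, label set, LoRA rank, target modules, and prefix length are all hypothetical placeholders, since the paper's concrete base models and hyperparameters are not given in this record.

```python
# Minimal sketch (not the paper's implementation): parameter-efficient
# fine-tuning of an LLM for binary anti-fraud message classification.
# All names and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import (
    LoraConfig,
    PrefixTuningConfig,  # deep prompt/prefix tuning, close in spirit to P-Tuning v2
    TaskType,
    get_peft_model,
)

BASE_MODEL = "path/to/llm-checkpoint"  # hypothetical; the paper's three base models are not named here

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL, num_labels=2  # two labels: fraud / non-fraud
)

USE_LORA = True  # switch between the two fine-tuning strategies

if USE_LORA:
    peft_config = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,                  # low-rank dimension (assumed)
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # depends on the base architecture
    )
else:
    peft_config = PrefixTuningConfig(
        task_type=TaskType.SEQ_CLS,
        num_virtual_tokens=20,  # length of the learned prefix (assumed)
    )

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the adapter/prefix parameters are trainable
```

In both branches the base model's weights stay frozen and only a small set of added parameters is updated, which is what allows the fine-tuned models to gain task-specific accuracy while largely retaining their general capabilities, as the abstract reports.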
Authors
彭成智
谢园园
吕光旭
Peng Chengzhi; Xie Yuanyuan; Lu Guangxu (China Information Technology Designing & Consulting Institute Co., Ltd., Beijing 100048, China)
Source
《邮电设计技术》
2024, No. 8, pp. 53-57 (5 pages)
Designing Techniques of Posts and Telecommunications