LLaMA-LoRA Neural Prompt Engineering: A Deep Tuning Framework for Automatically Generating Chinese Text Logical Reasoning Thinking Chains

Abstract: The expansion of Chinese natural language processing (NLP) has stimulated research in the broader NLP domain. However, existing large language models have limitations in comprehending and reasoning in Chinese. This paper addresses these limitations by enhancing Chinese language models' comprehension and reasoning capabilities while minimizing resource requirements. We propose LLaMA-LoRA, a neural prompt engineering framework that builds upon the LLaMA-13B model and incorporates the Low-Rank Adaptation (LoRA) of Large Language Models technique for refinement. Chain-of-Thought (CoT) prompts are crucial for generating intermediate reasoning chains in language models, but their effectiveness can be limited by isolated language patterns. Erroneous reasoning resulting from conventional prompts negatively impacts model performance. Automatic prompts are introduced to encourage reasoning-chain generation and accurate answer inference. Training the model with an extensive corpus of Chinese CoT data enhances its comprehension and reasoning abilities. The LLaMA-LoRA model demonstrates exceptional performance across numerous Chinese language tasks, surpassing the benchmark performance achieved by related language models such as GPT-3.5, ChatGLM, and OpenAssistant, and delivering accurate, comprehensive, and professional answers. The availability of our open-source model code facilitates further research in the field of Chinese text logical reasoning thinking chains.
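The abstract describes the method only at a high level; the sketch below illustrates the general recipe it names, i.e., attaching LoRA adapters to a LLaMA-13B base model and fine-tuning on Chinese chain-of-thought examples. LoRA freezes the pretrained weight matrix W0 and trains only a low-rank update dW = B·A, so the adapted forward pass computes h = W0·x + (alpha/r)·B·A·x with a small fraction of the base model's parameters trainable. This is a minimal illustration using the Hugging Face transformers and peft libraries; the checkpoint name, LoRA hyperparameters (r, lora_alpha, target modules), and the CoT sample format are assumptions for demonstration, not the authors' released configuration.

# Minimal sketch, assuming Hugging Face transformers/peft APIs; hyperparameters
# and data format are illustrative, not the paper's released training code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-13b"  # assumed checkpoint name for a LLaMA-13B base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA replaces a full weight update dW with a low-rank product B @ A, so only
# r * (d_in + d_out) parameters per adapted matrix are trained.
lora_cfg = LoraConfig(
    r=8,                                  # rank of the update (assumed value)
    lora_alpha=16,                        # scaling; effective scale is alpha / r
    target_modules=["q_proj", "v_proj"],  # attention projections, per the LoRA paper
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the 13B weights

# A Chinese CoT training example pairs a question with intermediate reasoning
# steps before the final answer, so the model learns to emit the chain itself.
sample = (
    "问题：小明有3个苹果，又买了5个，吃掉2个，还剩几个？\n"
    "推理：3 + 5 = 8 个苹果；吃掉 2 个后剩 8 - 2 = 6 个。\n"
    "答案：6"
)
inputs = tokenizer(sample, return_tensors="pt")
loss = model(**inputs, labels=inputs["input_ids"]).loss  # standard causal-LM loss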
Source: Data Intelligence (EI-indexed), 2024, No. 2, pp. 375-408 (34 pages).
Funding: Supported by the Science and Technology Program of Sichuan Province (Grant No. 2023YFS0424), the "Open Bidding for Selecting the Best Candidates" Science and Technology Project of Chengdu (Grant No. 2023-JB00-00020-GX), and the National Natural Science Foundation of China (Grant Nos. 61902324, 11426179, and 61872298).