Journal Articles: 3 results found
1. LLaMA-LoRA Neural Prompt Engineering: A Deep Tuning Framework for Automatically Generating Chinese Text Logical Reasoning Thinking Chains
Authors: Songlin Chen, Weicheng Wang, Xiaoliang Chen, Peng Lu, Zaiyan Yang, Yajun Du. Data Intelligence (EI), 2024, Issue 2, pp. 375-408 (34 pages)
The expansion of Chinese natural language processing (NLP) has stimulated research in the broader NLP domain. However, existing large language models have limitations in comprehending and reasoning in Chinese. This paper addresses these limitations by enhancing the comprehension and reasoning capabilities of Chinese language models while minimizing resource requirements. We propose LLaMA-LoRA, a neural prompt engineering framework that builds upon the LLaMA-13B model and incorporates the Low-Rank Adaptation (LoRA) of Large Language Models technique for refinement. Chains of Thought (CoT) are crucial for generating intermediate reasoning chains in language models, but their effectiveness can be limited by isolated language patterns. Erroneous reasoning resulting from conventional prompts negatively impacts model performance. Automatic prompts are introduced to encourage reasoning chain generation and accurate answer inference. Training the model with an extensive corpus of Chinese CoT data enhances its comprehension and reasoning abilities. The LLaMA-LoRA model demonstrates exceptional performance across numerous Chinese language tasks, surpassing the benchmark performance of related language models such as GPT-3.5, ChatGLM, and OpenAssistant, and delivering accurate, comprehensive, and professional answers. The availability of our open-source model code facilitates further research in the field of Chinese text logical reasoning thinking chains.
Keywords: Chinese natural language processing; neural prompt engineering; large language models; low-rank adaptation; chain-of-thought
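For readers unfamiliar with the LoRA refinement step this abstract describes, the following is a minimal sketch of how low-rank adaptation of a LLaMA-13B checkpoint might look with Hugging Face's transformers and peft libraries. The checkpoint name, rank, and hyperparameters are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch, assuming Hugging Face transformers + peft; the checkpoint
# name and LoRA hyperparameters below are illustrative, not the paper's.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-13b"  # assumed LLaMA-13B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the 13B base weights and injects small trainable low-rank
# matrices into the attention projections, so only a fraction of a percent
# of the parameters are updated during CoT fine-tuning.
lora_cfg = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```

Fine-tuning would then proceed on the Chinese chain-of-thought corpus with a standard causal-language-modeling objective, saving only the small adapter weights rather than the full 13B model.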
2. Prompt Engineering Importance and Applicability with Generative AI
Author: Prashant Bansal. Journal of Computer and Communications, 2024, Issue 10, pp. 14-23 (10 pages)
Prompt engineering, the art of crafting effective prompts for artificial intelligence (AI) models, has emerged as a pivotal factor in determining the quality and usefulness of AI-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks once thought to be an exclusively human domain. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model's capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model's training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
Keywords: prompt engineering; AI; ML; prompt; zero-shot; few-shot; generative AI; chatbots; AI models
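As a concrete illustration of the zero-shot and few-shot prompting styles the keywords refer to, here is a hedged sketch; the task, wording, and examples are assumptions for demonstration, not drawn from the article.

```python
# Illustrative only: the task, wording, and examples below are assumptions
# for demonstration and are not taken from the article.

# Zero-shot: the model gets instructions but no demonstrations.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)

# Few-shot: a handful of worked examples precede the real query, steering
# the model toward the desired output format and label set.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: I love the crisp display.\n"
    "Sentiment: positive\n"
    "Review: Shipping took three weeks and the box arrived crushed.\n"
    "Sentiment: negative\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)
```

In practice, few-shot prompts of this shape tend to yield more consistently formatted answers than the zero-shot variant, at the cost of a longer context.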
3. Improving Low-Resource Machine Translation Using Reinforcement Learning from Human Feedback
Authors: Liqing Wang, Yiheng Xiao. Intelligent Automation & Soft Computing, 2024, Issue 4, pp. 619-631 (13 pages)
Neural Machine Translation (NMT) is one of the key research directions in Natural Language Processing. However, limited by the scale and quality of parallel corpora, the translation quality of low-resource NMT has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, commonly encountered issues include substandard preference data quality and the high cost of manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed: first, the quality of preference data is optimized through prompt engineering of a Large Language Model (LLM); human feedback is then combined to complete the evaluation. In this way, the reward model can acquire more semantic information and human preferences during the training phase, thereby improving feedback efficiency and result quality. Experimental results demonstrate that, compared with the traditional RLHF method, our method is effective on multiple datasets and exhibits a notable improvement of 1.07 in BLEU. It is also more favorably received in assessments conducted by human evaluators and GPT-4o.
Keywords: low-resource neural machine translation; RLHF; prompt engineering; LLM
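A minimal sketch of the feedback-collection idea the abstract outlines, under stated assumptions: a prompted LLM compares candidate translations to produce preference pairs for reward-model training, and translation quality is scored with BLEU via the sacrebleu package. The llm_judge callable and the prompt wording are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: llm_judge stands in for
# whatever LLM API is used to rank candidate translations.
import sacrebleu

JUDGE_PROMPT = (
    "You are a translation quality judge.\n"
    "Source sentence: {src}\n"
    "Candidate A: {a}\n"
    "Candidate B: {b}\n"
    "Reply with the single letter (A or B) of the better translation."
)

def build_preference_pair(src, cand_a, cand_b, llm_judge):
    """Ask the LLM which candidate is better; return (chosen, rejected)
    for reward-model training."""
    verdict = llm_judge(JUDGE_PROMPT.format(src=src, a=cand_a, b=cand_b))
    if verdict.strip().upper().startswith("A"):
        return cand_a, cand_b
    return cand_b, cand_a

# BLEU scoring with sacrebleu, the metric behind the reported +1.07 gain.
hypotheses = ["the cat sits on the mat"]
references = [["the cat sat on the mat"]]  # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```

Preference pairs collected this way would feed a standard RLHF reward model, with human feedback reserved for the final evaluation pass, which is where the cost saving comes from.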