Abstract: Prompt engineering, the art of crafting effective prompts for artificial intelligence (AI) models, has emerged as a pivotal factor in determining the quality and usefulness of AI-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks once thought to be the exclusive domain of humans. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer questions informatively. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model's capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model's training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
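To make the "strategically designing and structuring prompts" claim concrete, the sketch below contrasts a vague prompt with one that states the role, task, context, and output constraints explicitly. The template, helper function, and field names are illustrative assumptions for this listing, not a technique taken from any of the abstracted papers.

```python
# Illustrative prompt template (hypothetical helper; not from the cited papers).
def build_prompt(task: str, context: str, constraints: str) -> str:
    """Assemble a structured prompt: role, task, context, and output constraints."""
    return (
        "You are a precise technical assistant.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        "Answer:"
    )

# A vague prompt leaves the model to guess scope, audience, and format...
vague = "Tell me about solar panels."
# ...while a structured prompt pins all three down.
structured = build_prompt(
    task="Summarize the main trade-offs of rooftop solar panels",
    context="Audience: homeowners with no engineering background",
    constraints="Three bullet points, plain language, no jargon",
)
print(vague)
print(structured)
```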
Funding: supported by the Science and Technology Program of Sichuan Province (Grant No. 2023YFS0424), the "Open bidding for selecting the best candidates" Science and Technology Project of Chengdu (Grant No. 2023-JB00-00020-GX), and the National Natural Science Foundation (Grant Nos. 61902324, 11426179, and 61872298).
Abstract: The expansion of Chinese natural language processing (NLP) has stimulated research in the broader NLP domain. However, existing large language models have limitations in comprehending and reasoning in Chinese. This paper addresses these limitations by enhancing Chinese language models' comprehension and reasoning capabilities while minimizing resource requirements. We propose LLaMA-LoRA, a neural prompt engineering framework that builds upon the LLaMA-13B model and incorporates the Low-Rank Adaptation (LoRA) of Large Language Models technique for refinement. Chain-of-Thought (CoT) prompts are crucial for generating intermediate reasoning chains in language models, but their effectiveness can be limited by isolated language patterns. Erroneous reasoning resulting from conventional prompts negatively impacts model performance. Automatic prompts are introduced to encourage reasoning-chain generation and accurate answer inference. Training the model with an extensive corpus of Chinese CoT data enhances its comprehension and reasoning abilities. The LLaMA-LoRA model demonstrates exceptional performance across numerous Chinese language tasks, surpassing the benchmark performance achieved by related language models such as GPT-3.5, ChatGLM, and OpenAssistant, and delivering accurate, comprehensive, and professional answers. The availability of our open-source model code facilitates further research on chain-of-thought reasoning for Chinese text.
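A minimal sketch of the LoRA refinement step this abstract describes, assuming the Hugging Face transformers and peft libraries and a locally available LLaMA-13B checkpoint; the checkpoint path and the hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the paper's reported settings.

```python
# Minimal LoRA fine-tuning setup (illustrative; not the paper's exact configuration).
# Assumes: pip install transformers peft, plus a local LLaMA-13B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "path/to/llama-13b"  # placeholder path: any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-Rank Adaptation: freeze the base weights and learn small rank-r update
# matrices on the attention projections, i.e. W' = W + (alpha / r) * B @ A.
config = LoraConfig(
    r=8,                                  # rank of the update matrices (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```

Because only the adapter matrices are updated, fine-tuning on the Chinese CoT corpus touches a small fraction of the 13B parameters, which is what keeps the resource requirements low.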
Funding: supported by the National Natural Science Foundation of China under Grant No. 61862064.
Abstract: Neural Machine Translation is one of the key research directions in Natural Language Processing. However, limited by the scale and quality of parallel corpora, the translation quality of low-resource Neural Machine Translation has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, commonly encountered issues are substandard preference-data quality and the high cost of manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed: first, the quality of preference data is optimized through prompt engineering of a Large Language Model (LLM); then, human feedback is combined to complete the evaluation. In this way, the reward model can acquire more semantic information and human preferences during the training phase, thereby enhancing feedback efficiency and the quality of results. Experimental results demonstrate that, compared with the traditional RLHF method, our method is effective on multiple datasets and exhibits a notable improvement of 1.07 in BLEU. It is also rated more favorably in assessments conducted by human evaluators and GPT-4o.
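To make the reward-modeling step concrete, here is a minimal sketch of the standard pairwise preference objective such RLHF pipelines train on, assuming pooled sentence embeddings of (source, translation) pairs as input; the architecture, embedding dimension, and the way LLM-scored pairs are produced are illustrative assumptions rather than this paper's specification.

```python
# Pairwise reward-model objective used in RLHF pipelines (a standard
# Bradley-Terry ranking loss; the paper's exact formulation may differ).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (source, translation) pair; higher score = more preferred."""
    def __init__(self, dim: int = 768):  # dim is an assumed embedding size
        super().__init__()
        self.score = nn.Linear(dim, 1)   # scalar head over pooled embeddings

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, dim) pooled encoder output for one candidate pair
        return self.score(emb).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): pushes the preferred translation
    # (here, pairs ranked via LLM-prompted scoring plus human checks)
    # above the rejected one.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

rm = RewardModel()
chosen, rejected = torch.randn(4, 768), torch.randn(4, 768)  # stand-in embeddings
loss = preference_loss(rm(chosen), rm(rejected))
loss.backward()  # gradients flow only into the reward model's parameters
```

The trained reward model then supplies the scalar signal for the reinforcement-learning stage, so better-ranked preference pairs translate directly into a more informative reward.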