A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models
Authors: Yuan-Feng Song, Yuan-Qin He, Xue-Fang Zhao, Han-Lin Gu, Di Jiang, Hai-Jun Yang, Li-Xin Fan. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, Issue 4, pp. 984-1004 (21 pages).
Abstract: The springing up of large language models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm. Along this line of research endeavors in the area, LLM-based prompting methods have attracted much attention, partially due to the technological advantages brought by prompt engineering (PE) as well as the underlying NLP principles disclosed by various prompting methods. Traditional supervised learning usually requires training a model based on labeled data and then making predictions. In contrast, PE methods directly use the powerful capabilities of existing LLMs (e.g., GPT-3 and GPT-4) via composing appropriate prompts, especially under few-shot or zero-shot scenarios. Facing the abundance of studies related to prompting and the ever-evolving nature of this field, this article aims to 1) illustrate a novel perspective to review existing PE methods within the well-established communication theory framework, 2) facilitate a better and deeper understanding of the developing trends of existing PE methods used in three typical tasks, and 3) shed light on promising research directions for future PE methods.
Keywords: prompting method; large language model; communication theory
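As the abstract notes, PE methods compose prompts for an existing LLM instead of training a task-specific model on labeled data. The sketch below illustrates what zero-shot versus few-shot prompt composition might look like; the `call_llm` stub and the sentiment-classification task are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of zero-shot vs. few-shot prompt composition.
# `call_llm` is a hypothetical placeholder for any LLM completion API
# (e.g., a GPT-3/GPT-4 client); the sentiment task is illustrative only.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("Wire this to an LLM provider of your choice.")

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: the task is described in natural language, with no examples.
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: a handful of labeled demonstrations precede the query,
    # letting the frozen LLM infer the task format from context.
    demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as Positive or Negative.\n"
        f"{demos}\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

if __name__ == "__main__":
    demos = [("Great battery life.", "Positive"),
             ("Screen died in a week.", "Negative")]
    print(zero_shot_prompt("The camera is superb."))
    print(few_shot_prompt("The camera is superb.", demos))
    # answer = call_llm(few_shot_prompt("The camera is superb.", demos))
```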