
When Robots Hallucinate: What Does It Tell Us about Human Thinking?
Abstract: Chatbots like OpenAI's ChatGPT rely on a form of artificial intelligence called large language models to generate conversation. When an AI gives confident responses that do not appear to follow from its training data, the AI research community calls this a "hallucination." The term was chosen for its resemblance to hallucination in human psychology, but another psychological term, "confabulation," describes the phenomenon more accurately. Just as the brain convincingly fills in the gaps when memory fails, language models excel at fabricating facts with no basis in reality, making it difficult to distinguish true statements from false ones. More dangerous still is the "Eliza effect," the attribution of human-level intelligence and understanding to AI systems; its negative consequences cannot be ignored, and they compel everyone today to find ways of ensuring that AI is used responsibly and ethically.
Author: HU Yong (胡泳), School of Journalism and Communication, Peking University, Beijing 100871
Source: Studies in Culture and Art (《文化艺术研究》), 2023, No. 3, pp. 15-26, 112 (13 pages)
Keywords: ChatGPT; large language model; hallucination; confabulation; Eliza effect