人工智能大模型价值对齐的人文主义思考

Humanistic Considerations in Value Alignment for Large-Scale Artificial Intelligence Models
Abstract: Artificial intelligence (AI) technologies continue to iterate, with models such as ChatGPT and Sora advancing towards general AI. To better calibrate large AI models, countries are beginning to explore technical pathways for value alignment. However, value alignment is no easy task, given the significant differences in value standards across human societies. How can we achieve alignment? What should the standard be? Current approaches to value alignment mostly rely on technocratic "cybernetics", but this may introduce new ethical risks and hasten the approach of the "singularity". Therefore, it is necessary for large model alignment algorithms to break away from the traditional "subject-object dichotomy" and transcend the technocratic mindset, in order to address the unique challenges posed by the alignment of moral values. This pursuit seeks a dynamic balance between technology and the humanities, and aims to construct a "symbiotic relationship" between humans and machines from a techno-humanist perspective, achieving harmonious development between humans and AI technology.
Authors: Lin Aijun (林爱珺); Chang Yunfan (常云帆) — School of Journalism & Communication, Jinan University
Source: 《新闻界》 (CSSCI; Peking University Core Journal), 2024, No. 8, pp. 24-33 (10 pages)
Funding: National Social Science Fund of China key project "Journalism Ethics and Regulations in the Age of Artificial Intelligence" (18ZDA308)
Keywords: artificial intelligence; ethical risks; value alignment; singularity society; humanism