
Unlearning Descartes: Sentient AI is a Political Problem

Abstract: The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A "Cartesian" approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative "Hobbesian" approach treats consciousness as a sociopolitical issue and is concerned with the implications of labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding whether something "really is" sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question, "Is AI sentient?", I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?
Author: Gordon Hull
Source: Journal of Social Computing (EI-indexed), 2023, Issue 3, pp. 193-204 (12 pages)