Abstract
The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A “Cartesian” approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative “Hobbesian” approach treats consciousness as a sociopolitical issue and is concerned with the implications of labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding whether something “really is” sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts to hold someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question (is AI sentient?), I propose that we confront the more Hobbesian one: does it make sense to regulate developments in which AI systems behave as if they were sentient?