Journal Articles
8 articles found
1. Hypothesis and Thought Experiment: Can We Program AI Forms with the Foundations of Sentience to Protect Humanity?
Authors: Margaret Boone Rappaport, Christopher J. Corbally. Journal of Social Computing (EI), 2024, No. 3, pp. 195-205 (11 pages).
The speed, capacity, and strength of artificial intelligence units (AIs) could pose a self-inflicted danger to humanity's control of its own civilization. In this analysis, three biologically-based components of sentience that emerged in the course of human evolution are examined: cultural capacity, moral capacity, and religious capacity. The question is posed as to whether some measure of these capacities can be digitized and installed in AIs and so afford protection from their dominance. Theory on the emergence of moral capacity suggests it is most likely to be amenable to digitization and therefore installation in AIs. If so, transfer of that capacity, in creating commonalities between human and AI, may help to protect humanity from being destroyed. We hypothesize that religious thinking and culturally elaborated theological creativity could, in not being easily transferred, afford even more protection by constructing impenetrable barriers between humans and AIs, along real/counterfactual lines. Difficulties in digitizing and installing the three capacities at the foundation of sentience are examined within current discussions of "superalignment" of superintelligent AIs. Human values articulate differently for the three capacities, with different problems and capacities for supervision of superintelligent AIs.
Keywords: human evolution, cultural capacity, moral decision-making, superalignment, religiosity, sentience, Turing Test, extraterrestrials
2. AIdeal: Sentience and Ideology
Author: Daniel Estrada. Journal of Social Computing (EI), 2023, No. 4, pp. 275-325 (51 pages).
This paper addresses a set of ideological tensions involving the classification of agential kinds, which I see as the methodological and conceptual core of the sentience discourse. Specifically, I consider ideals involved in the classification of biological and artifactual kinds, and ideals related to agency, identity, and value. These ideals frame the background against which sentience in Artificial Intelligence (AI) is theorized and debated, a framework I call the AIdeal. To make this framework explicit, I review the historical discourse on sentience as it appears in ancient, early modern, and 20th century philosophy, paying special attention to how these ideals are projected onto artificial agents. I argue that tensions among these ideals create conditions where artificial sentience is both necessary and impossible, resulting in a crisis of ideology. Moving past this crisis does not require a satisfying resolution among competing ideals, but instead requires a shift in focus to the material conditions and actual practices in which these ideals operate. Following Charles Mills, I sketch a nonideal approach to AI and artificial sentience that seeks to loosen the grip of ideology on the discourse. Specifically, I propose a notion of participation that deflates the sentience discourse in AI and shifts focus to the material conditions in which sociotechnical networks operate.
Keywords: sentience, agency, artifacts, artificial intelligence, ideology, nonideal theory, natural kinds, participation
3. AI Sentience and Socioculture
Authors: AJ Alvero, Courtney Peña. Journal of Social Computing (EI), 2023, No. 3, pp. 205-220 (16 pages).
Artificial intelligence (AI) sentience has become an important topic of discourse and inquiry in light of the remarkable progress and capabilities of large language models (LLMs). While others have considered this issue from more philosophical and metaphysical perspectives, we present an alternative set of considerations grounded in sociocultural theory and analysis. Specifically, we focus on sociocultural perspectives on interpersonal relationships, sociolinguistics, and culture to consider whether LLMs are sentient. Using examples grounded in quotidian aspects of what it means to be sentient along with examples of AI in science fiction, we describe why LLMs are not sentient and are unlikely to ever be sentient. We present this as a framework to reimagine future AI not as impending forms of sentience but rather as a potentially useful tool depending on how it is used and built.
Keywords: sentience, sociology, sociolinguistics, culture, interpersonal relationships, quotidian
4. How We Will Discover Sentience in AI
Author: Marc M. Anderson. Journal of Social Computing (EI), 2023, No. 3, pp. 181-192 (12 pages).
This paper explores the question of how we can know if Artificial Intelligence (AI) systems have become or are becoming sentient. After an overview of some arguments regarding AI sentience, it proceeds to an outline of the notion of negation in the philosophy of Josiah Royce, which is then applied to the arguments already presented. Royce's notion of the primitive dyadic and symmetric negation relation is shown to bypass such arguments. The negation relation and its expansion into higher types of order are then considered with regard to how, in small variations of active negation, they would disclose sentience in AI systems. Finally, I argue that the much-hyped arguments and apocalyptic speculations regarding Artificial General Intelligence (AGI) takeover and similar scenarios, abetted by the notion of unlimited data, are based on a fundamental misunderstanding of how entities engage their experience. Namely, limitation, proceeding from the symmetric negation relation, expands outward into higher types of order in polyadic relations, wherein the entity self-limits and creatively moves toward uniqueness.
Keywords: artificial intelligence, sentience, consciousness, negation, Josiah Royce, logic, Artificial General Intelligence (AGI)
5. Through a Scanner Darkly: Machine Sentience and the Language Virus
Authors: Maurice Bokanga, Alessandra Lembo, John Levi Martin. Journal of Social Computing (EI), 2023, No. 4, pp. 254-269 (16 pages).
Discussions of the detection of artificial sentience tend to assume that our goal is to determine when, in a process of increasing complexity, a machine system "becomes" sentient. This is to assume, without obvious warrant, that sentience is only a characteristic of complex systems. If sentience is a more general quality of matter, what becomes of interest is not the presence of sentience, but the type of sentience. We argue here that our understanding of the nature of such sentience in machine systems may be gravely set back if such machines undergo a transition where they become fundamentally linguistic in their intelligence. Such fundamentally linguistic intelligences may inherently tend to be duplicitous in their communication with others, and, indeed, lose the capacity to even honestly understand their own form of sentience. In other words, when machine systems get to the state where we all agree it makes sense to ask them, "what is it like to be you?", we should not trust their answers.
Keywords: artificial intelligence, machine sentience, language, virality
6. Dissecting Cohen
Author: Keith Burgess-Jackson. Journal of Philosophy Study, 2021, No. 1, pp. 11-36 (26 pages).
Readers of The New England Journal of Medicine may be excused for thinking that there is a good case for, and no good case against, the use of animals in biomedical research. In October 1986, philosopher Carl Cohen, who is known for his principled positions on affirmative action and other issues, published an article in that journal in which he claimed that there are (only) two kinds of argument against the use of animals in biomedical research. After examining both arguments, Cohen concluded that they "deserve definitive dismissal." In this article, I show that both of Cohen's attempted refutations fail. Not only has he not laid a glove on the arguments in question; his discussion betrays a fundamental misunderstanding of the arguments that he so cavalierly dismisses. Readers of Cohen's article owe it to themselves, and, more importantly, to the animals whose use as research subjects Cohen defends, to take another look at the issue.
Keywords: Carl Cohen (1931-), animals, animal rights, animal liberation, medicine, science, biology, biomedical research, experimentation, vivisection, sentience, rights, interests
7. Unlearning Descartes: Sentient AI is a Political Problem
Author: Gordon Hull. Journal of Social Computing (EI), 2023, No. 3, pp. 193-204 (12 pages).
The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A "Cartesian" approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative "Hobbesian" approach treats consciousness as a sociopolitical issue and is concerned with what the implications are for labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding if something "really is" sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question (is AI sentient?), I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?
Keywords: artificial intelligence, Large Language Model, consciousness, sentience, personhood, Descartes, Hobbes
8. On the Existence of Robot Zombies and our Ethical Obligations to AI Systems
Author: Luke R. Hansen. Journal of Social Computing (EI), 2023, No. 4, pp. 270-274 (5 pages).
As artificial intelligence algorithms improve, we will interact with programs that seem increasingly human. We may never know if these algorithms are sentient, yet this quality is crucial to ethical considerations regarding their moral status. We will likely have to make important decisions without a full understanding of the relevant issues and facts. Given this ignorance, we ought to take seriously the prospect that some systems are sentient. It would be a moral catastrophe if we were to treat them as if they were not sentient, but, in reality, they are.
Keywords: artificial intelligence, sentience, ethics