人工智能决策的道德缺失效应及其机制

Reactions to immoral AI decisions:The moral deficit effect and its underlying mechanism
Abstract: With continuous technological innovation and development, artificial intelligence (AI) plays an increasingly important role in major decisions in human life. However, the widespread application of AI also brings a series of moral challenges, the most prominent of which is that people react more weakly (in moral evaluation, moral outrage, and moral punishment) to immoral decisions made by AI than to those made by humans, showing a different behavioral pattern. This phenomenon can lead to serious social problems. Mind perception theory and related research suggest that the main cause is that people perceive AI as having less agency and experience than humans, while anthropomorphism and raised expectations are effective intervention strategies that can enhance people's moral sensitivity to AI decisions. Unlike the "algorithmic ethics" research in other disciplines, which mainly explores the principles and methods of fair algorithms at the design level, research from a psychological perspective focuses on the differences in people's psychological reactions to AI and human decisions. This offers new ideas for effectively addressing the social problems caused by algorithmic bias and for building fair algorithms, and it provides a new perspective for "algorithmic ethics" research. Because AI develops and iterates rapidly, the mechanisms and intervention strategies in this field await further clarification by future research.

Artificial intelligence (AI) is a branch of computer science in which systems are created with intelligence that enables them to perform cognitive functions, such as perception, reasoning, learning, and decision-making. AI has formidable problem-solving abilities in various domains, such as surveillance, health care, and finance. However, its societal applications raise ethical issues, such as gender and racial discrimination. Moreover, psychological research on AI ethics has revealed that people react less morally to unethical decisions made by AI than to those made by humans, showing deficiencies in moral evaluations, punishments, and behavioral responses to AI. These moral deficits of AI decisions have serious negative impacts on individuals, organizations, and society. For example, research has confirmed that this effect leads companies to use AI in their recruitment processes to discriminate against job applicants, reduces people's awareness of unethical behavior, enables companies to evade accountability, exacerbates the challenges of advocacy and justice for affected groups, and even undermines societal moral standards as the use of AI becomes widespread. These negative impacts will damage trust in justice and fairness, causing lasting harm to societal ethics.

How can we explain the AI moral deficit effect? We draw on mind perception theory, which suggests that people's moral reactions depend on how they perceive the mental attributes of an entity. These attributes can be divided into two dimensions: agency and experience. Agency refers to the belief that the entity can act, plan, self-regulate, remember, communicate, and think like a typical adult human. Experience refers to the belief that the entity can feel emotional states, such as hunger, fear, and pain. Mind perception is the process of attributing mental capabilities to an entity, while moral judgment is the process of evaluating the entity as good or bad. Entities with high agency are held responsible for their moral actions, and entities with high experience are expected to behave safely and ethically. People tend to view AI as having less agency and experience than adult humans, which leads to the AI moral deficit effect.

We explore the psychological reasons behind people's lack of moral concern about immoral decisions made by AI. We contrast people's moral responses to AI and human decision-makers and the serious social implications of this difference. Based on mind perception theory and moral dualism, and supported by empirical evidence, we claim that two parallel factors underlie the moral deficit effect of AI decision-making: agency and experience. Additionally, we show that anthropomorphism and expectation violation can influence this effect. Thus, we propose a theoretical model of the inherent psychological mechanism of the moral deficit effect of AI decision-making from the perspective of a moral agent. This model extends mind perception theory and deepens the understanding of moral dualism.

In sum, we investigate how people react differently to biased AI and biased humans. Unlike other disciplines (e.g., computer science, philosophy, law, sociology) that focus on the design of fair algorithms in their "algorithmic ethics" research, we examine the human side of the problem. We contend that comprehending these differences is crucial for tackling the social challenges of biased AI and for proposing a novel approach to building fair algorithms, as well as a fresh outlook on "algorithmic ethics" that shifts the attention from algorithmic design to human responses. This approach is vital for ethics and AI researchers devising ethical frameworks and guidelines for AI systems. Furthermore, the findings can guide policy-making, legal frameworks, and social interventions to reduce the adverse effects of AI and foster social equity and justice. However, due to the dynamic and fast-changing nature of AI, this field still needs additional research on the underlying mechanisms and intervention strategies involved.
Authors: Xiaoyong Hu (胡小勇), Mufeng Li (李穆峰), Dixin Wang (王笛新), Feng Yu (喻丰) (Key Laboratory of Cognition and Personality (SWU), Ministry of Education, and Faculty of Psychology, Southwest University, Chongqing 400715, China; Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China)
Source: Chinese Science Bulletin (《科学通报》), 2024, No. 11, pp. 1406-1416 (11 pages). Indexed by EI, CAS, CSCD, and PKU Core.
Funding: Supported by the Western Project of the National Social Science Fund of China (23XSH003).
Keywords: artificial intelligence; moral deficit effect; mind perception theory; anthropomorphism