Abstract
As an architecture for future artificial intelligence (AI) or strong AI, artificial general intelligence (AGI) inevitably raises a series of ethical issues. These include the subjectivity, safety, ethical responsibility and ethical enhancement, controllability, and socialization of AGI as an artificial autonomous moral system. All of these issues are important and bear directly on the future survival and development of human beings. At a time when AGI has not yet arrived, humans have a responsibility to reflect philosophically and ethically on the problems it may give rise to.
Author
魏屹东
WEI Yidong (School of Philosophy, Shanxi University, Taiyuan 030006, China)
Source
《中国医学伦理学》
Peking University Core Journal (Beida Core)
2024, No. 1, pp. 1-9 (9 pages)
Chinese Medical Ethics
Funding
Major Project of the National Social Science Fund of China, "Philosophical Research on the Challenge of Artificial Cognition to Natural Cognition" (21&ZD061).
Keywords
artificial intelligence
artificial general intelligence
subjectivity
safety
ethical responsibility