Abstract
Despite worldwide consensus on the necessity of developing Artificial Intelligence (AI) ethics, serious disagreements remain regarding its target, major tasks, and paths of implementation, which this article presents and clarifies as six issues. The article first introduces the two major families of AI techniques, the strong method (强力法) and the training method (训练法), and on that basis identifies three characteristics of existing AI technology, which serve as the technical grounds for AI ethics; the globally recognized principle of well-being serves as its fundamental ground. Building on these two grounds, we argue that the target of AI ethics is twofold: to answer both what AI should do and what it should not do. We then investigate the five remaining issues: the AI safety bottom line, evaluation principles for AI functions, the implementation of responsibility for AI governance, the possibility of change in AI's status as a subject, and a wholly new model of innovation, namely public-justice innovation (公义创新).
Source
《哲学研究》 (Philosophical Research)
CSSCI
Peking University Core Journals (北大核心)
2020, No. 9, pp. 79-87, 107, F0003 (11 pages in total)