Abstract
In order to minimise the potential risks arising from the decisions and actions of AI systems, humans should take responsibility for those decisions and actions. There is therefore a need to construct a pathway for the development of AI systems that is accountable to all. This means that AI systems should be designed with basic human values and ethical principles as their core guidelines, following the principles of accountability, responsibility and transparency; AI systems should have the ability to reason ethically according to basic human values and ethical principles; and the development of AI systems should remain diverse and inclusive.
Author
WANG Gang (王刚), College of Philosophy, Nankai University, Tianjin 300350, China
Source
Science Economy Society (《科学.经济.社会》), 2021, No. 1, pp. 55-61 (7 pages)
Funding
Major Project of the National Social Science Fund of China, "New Developments, Theoretical Frontiers and Applied Research of Modern Inductive Logic" (15ZDB018)
Keywords
AI
responsibility
design principles
ethical reasoning
diversity