Abstract
Research on "trustworthiness" in modern artificial intelligence (AI) usually focuses on human trust in intelligent algorithms. Yet as the field of risk control increasingly demands that AI be capable of "reasonable doubt," an algorithm's "trust" in humans has also become an important issue in AI ethics. Risk control is, at its core, the question of how to trust a person. Under modern futurological thinking, humans hand the planning of both trust and doubt over to technology in order to reduce risk. AI risk control takes data and algorithms as its way of trusting people, an inevitable and deep form of social disembedding characteristic of the "post-human" in the intelligent age. AI risk control therefore deserves distinct philosophical attention compared with other intelligent technologies. But risk prevention and control dominated by algorithmic trust must inevitably face considerations of justice. As artificial intelligence artifacts come to be regarded as active agents, realizing their "reasonable doubt" of humans in risk-control scenarios has become a basic ethical requirement of intelligent risk control.
Authors
LI Yanhua (Department of Philosophy, East China Normal University, Shanghai 201100, China)
DU Haitao
Source
Journal of Hohai University (Philosophy and Social Sciences) 《河海大学学报(哲学社会科学版)》
CSSCI
Peking University Core Journal (北大核心)
2022, No. 4, pp. 40-46 and 135 (8 pages)
Funding
Gansu Provincial Social Science Planning Project of 2020 (20YB041).
Keywords
risk technology
AI risk control
technical justice
algorithmic trust