Abstract
The engineering application of artificial intelligence (AI) is inseparable from the complex algorithms that support its operation: the continuous development of AI has transformed it from a simple tool into a complex engineering technology, and its underlying algorithms have grown correspondingly complex. However, the internal structures of complex algorithms are like opaque black boxes, which makes it hard for them to win people's trust and makes explaining their outputs a thorny issue. There are two main approaches to enhancing the transparency of algorithms: one pursues technological breakthroughs in order to resolve the black-box problem completely; the other makes opaque algorithms trustworthy through human interpretation. Since the current level of technology cannot yet solve the black-box problem, the more promising route is research on the explainability of algorithms. On this basis, people can identify a spokesperson for a specific algorithm in accordance with the extended mind thesis, so that the spokesperson can draw on his or her dual understanding of both humans and the algorithm to interpret the algorithm's outputs in flexible and diverse ways. This solution demonstrates the explainability of algorithms while acknowledging the existence of the black-box problem, helping people better evaluate algorithms, guard against their potential pitfalls, and selectively build trust in them.
Authors
LIAO Xin-yuan (School of Marxism, Wuhan University, Wuhan 430072, China)
ZHOU Cheng (Department of Philosophy, Peking University, Beijing 100871, China)
Source
Studies in Dialectics of Nature (自然辩证法研究)
CSSCI
Peking University Core Journal (北大核心)
2024, No. 9, pp. 20-26 (7 pages)
Funding
Major Program of Philosophy and Social Science Research of the Ministry of Education, "Research on Basic Theoretical Issues of the Philosophy of Engineering Science" (23JZD006).
Keywords
artificial intelligence
algorithmic black box
deep learning
extended mind
algorithmic spokespersons