Abstract
The "black box" nature of artificial intelligence makes it difficult to give causal explanations of its outputs, which poses a serious challenge to the philosophy of science. Specifically, scientific explanation faces several difficulties: its factivity requirement is too strong, it cannot satisfy the needs of different stakeholders, and it does little to build user trust. To resolve these difficulties, one may weaken the demand for causality and redirect scientific explanation toward an understanding-based approach. This new approach, however, faces the difficulty of being misunderstood, which can be dissolved by weakening the causal demand further and adopting a reliabilist line of justification. Yet repeatedly weakening the causal demand leaves reliabilism unable to advance our cognition of the model. A genuinely new way out is therefore to construct a scheme of pluralistic explanation scenarios centered on user demands, which can resolve all of the above difficulties.
Source
《福建师范大学学报(哲学社会科学版)》
CSSCI
Peking University Core Journals (PKU Core)
2024, No. 1, pp. 66-74, 169 (10 pages in total)
Journal of Fujian Normal University: Philosophy and Social Sciences Edition
Funding
Major Project of the National Social Science Fund of China, "Philosophical Research on Responsible Artificial Intelligence and Its Practice" (21&ZD063)
Fujian Provincial Social Science Fund Project, "Research on the Explainability of Artificial Intelligence Governance" (FJ2021C012)
University-level Key Project, "Research on Artificial Intelligence Governance" (2021A05)