Abstract
In recent years, medical artificial intelligence (AI) has been widely applied, and at the same time its lack of explainability has drawn growing scholarly attention. Some scholars argue that explainability in medical AI is neither necessary nor achievable: medical knowledge does not depend on causal explanation, explainability cannot fully dissolve the ethical risks of the algorithmic black box, and demanding it would impair the operational efficiency of medical AI. Other scholars counter that explainability facilitates medical research and clinical practice and meets the communication needs between doctors and patients; accuracy alone is not what all doctors and patients demand, and the limitations of explanation techniques are overstated in actual medical settings. The debate over the explainability of medical AI is rooted in the complex and interdisciplinary nature of the concept, which leads different scholars to understand the goals and content of explanation in medical AI differently. The focus of the debate is therefore not whether explainability is necessary, but how its goals and content should be refined to better meet the explanation needs of different groups and to answer the objections raised against it.
Author
AN Ran (安然), Faculty of Public Administration, Hunan Normal University
Source
Science and Society (《科学与社会》), CSSCI, 2024, No. 3, pp. 156-168 (13 pages)
Keywords
medical artificial intelligence; explainability; causality; doctor-patient relationship