Abstract
Explainable AI (XAI) is an important component of trusted AI. Although the industry has conducted in-depth research on individual XAI techniques, systematic research on their engineering implementation is still lacking. This paper proposes a general XAI technical architecture built around four aspects: atomic explanation generation, core capability enhancement, business component embedding, and trusted explanation application. Accordingly, four layers are designed: the XAI foundation layer, the XAI core capability layer, the XAI business component layer, and the XAI application layer. Through the division of labor and cooperation among these layers, the engineering implementation of XAI is guaranteed throughout the whole process. Based on this architecture, new technical modules can be introduced flexibly to support the industrial application of XAI, providing a reference for the promotion of XAI in the industry.
Authors
夏正勋
唐剑飞
杨一帆
罗圣美
张燕
谭锋镭
谭圣儒
XIA Zhengxun; TANG Jianfei; YANG Yifan; LUO Shengmei; ZHANG Yan; TAN Fenglei; TAN Shengru (Transwarp Information Technology (Shanghai) Co., Ltd., Shanghai 200233, China; Zhongfu Information Inc., Nanjing 211899, China)
Source
《大数据》
2024, No. 1, pp. 86-109 (24 pages)
Big Data Research