
Main Development Trends of eXplainable Artificial Intelligence in 2023
Abstract: eXplainable Artificial Intelligence (XAI) is closely related to the trustworthiness and robustness issues that are currently the focus of AI research and application. This paper presents a comprehensive survey of the main development trends in the field of XAI in 2023. It first introduces the concept and implementation approaches of XAI, then summarizes the latest advances in XAI techniques, and finally outlines future development directions. The survey shows that interpretation of today's popular generative AI models has become an important research hotspot in XAI; interpretation research on traditional AI models focuses mainly on exploring multi-perspective fusion of explanations and on strengthening the interpretability of unsupervised learning frameworks; in addition, the construction of open-source XAI communities is accelerating, and open-source tools that help open the "black box" of models are steadily growing richer. The survey concludes that breaking through the interpretability bottleneck at the level of microscopic neuron behavior, and building a "mind system" that combines intrinsic common sense with the ability to interact with the external environment, are becoming important development trends for future XAI research.
Authors: WANG Ziyi (王梓屹), JIAN Meng (简萌), LI Bin (李彬), SUN Xin (孙新). Affiliations: School of Computer, Beijing Institute of Technology, Beijing 100081, China; Department of Information and Communication Engineering, Beijing University of Technology, Beijing 100124, China; Beijing Finefurther Digital Technology Co., Ltd, Beijing 100083, China.
Source: Unmanned Systems Technology (《无人系统技术》), 2024, No. 2, pp. 113-120 (8 pages).
Funding: National Natural Science Foundation of China (U22B2061, 62106243).
Keywords: eXplainable Artificial Intelligence; Deep Learning; Generative Artificial Intelligence; Large Language Model; Black-box Model; Unsupervised Learning; Open-source
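
As a concrete illustration of the kind of open-source "black-box" explanation tooling the abstract refers to, the following minimal sketch (not taken from the surveyed paper) uses the widely adopted SHAP library to attribute a tree-ensemble classifier's predictions to its input features. The dataset, model, and explainer choices here are assumptions made purely for demonstration.

# Minimal illustrative sketch (assumed example, not from the paper): feature
# attribution with the open-source SHAP library on a gradient-boosted classifier.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Load a small tabular dataset and fit a tree-ensemble "black-box" model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which input features contribute most to the model's predictions.
shap.summary_plot(shap_values, X)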