
Interpretability of Neural Networks Based on Game-theoretic Interactions

Abstract: This paper introduces the system of game-theoretic interactions, which connects the explanation of knowledge encoded in a deep neural network (DNN) with the explanation of the representation power of a DNN. In this system, we define two game-theoretic interaction indexes, namely the multi-order interaction and the multivariate interaction. More crucially, we use these interaction indexes to explain the feature representations encoded in a DNN from the following four aspects: (1) quantifying the knowledge concepts encoded by a DNN; (2) exploring how a DNN encodes visual concepts, and extracting the prototypical concepts encoded in the DNN; (3) learning optimal baseline values for the Shapley value, and providing a unified perspective for comparing fourteen different attribution methods; (4) theoretically explaining the representation bottleneck of DNNs. Furthermore, we prove the relationship between the interactions encoded in a DNN and the representation power of the DNN (e.g., generalization power, adversarial transferability, and adversarial robustness). In this way, game-theoretic interactions bridge the gap between "the explanation of knowledge concepts encoded in a DNN" and "the explanation of the representation capacity of a DNN", yielding a unified explanation.
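
The abstract does not restate the interaction definitions, so the following is a minimal sketch of the multi-order interaction in the form commonly used in this line of work, assuming f(S) denotes the network output when only the input variables in S ⊆ N are present (the symbols f, N, S, i, j, and m are notational assumptions, not taken from this page):

% Marginal interaction effect of variables i and j under a context S:
\[
  \Delta f(S, i, j) = f(S \cup \{i, j\}) - f(S \cup \{i\}) - f(S \cup \{j\}) + f(S)
\]
% Multi-order interaction of order m: the average interaction effect
% over all contexts S consisting of exactly m other variables.
\[
  I^{(m)}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\},\ |S| = m} \bigl[ \Delta f(S, i, j) \bigr]
\]

Read this way, a small order m captures collaborations among a few variables in a nearly empty context, while a large m captures interactions that depend on most of the input; the representation bottleneck in aspect (4) is typically stated in terms of how interaction strength is distributed over the order m.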
Source: Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 718-739 (22 pages).
Funding: Supported by the National Science and Technology Major Project (No. 2021ZD0111602), the National Natural Science Foundation of China (Nos. 62276165 and U19B2043), and the Shanghai Natural Science Foundation, China (Nos. 21JC1403800 and 21ZR1434600).