
Probing Filters to Interpret CNN Semantic Configurations by Occlusion

Abstract: Deep neural networks have been widely used in many fields, but there are growing concerns about their black-box nature. Previous interpretability studies provide four types of explanations: logical rules, revealing hidden semantics, sensitivity analysis, and providing examples as prototypes. In this paper, an interpretability method is proposed for revealing semantic representations at the hidden layers of CNNs through lightweight annotation by occlusion. First, visual semantic configurations are defined for a given class. Then, candidate filters whose activations are related to these specified visual semantics are probed by occlusion. Finally, lightweight occlusion annotation and a scoring mechanism are used to screen out the filters that recognize these semantics. The method is applied to datasets of mechanical equipment, animal, and clothing images, and it performs well in experiments assessing interpretability both qualitatively and quantitatively.
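The probe-and-score pipeline described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical reconstruction, not the authors' released implementation: it hooks one convolutional layer of a pretrained VGG16, occludes an annotated semantic region of the input, and ranks filters by how much their mean activation drops. The choice of network, layer index, zero-valued occlusion patch, and activation-drop score are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of probing filters by occlusion: filters whose activation
# drops most when a semantic region is occluded are candidates for
# recognizing that semantic. All specifics here (VGG16, layer 28, zero
# patch, drop-based score) are illustrative assumptions.
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).eval()

activations = {}

def hook(module, inputs, output):
    # Store per-filter mean activation over batch and spatial dims: shape (C,)
    activations["feat"] = output.detach().mean(dim=(0, 2, 3))

# Probe a late convolutional layer (index chosen for illustration).
handle = model.features[28].register_forward_hook(hook)

def filter_scores(image, box):
    """Score each filter by its activation drop when `box` is occluded.

    image: (1, 3, H, W) preprocessed tensor.
    box:   (x0, y0, x1, y1) region covering one annotated visual
           semantic (e.g., an animal's head).
    """
    with torch.no_grad():
        model(image)
        base = activations["feat"].clone()
        occluded = image.clone()
        x0, y0, x1, y1 = box
        occluded[:, :, y0:y1, x0:x1] = 0.0  # zero-valued occlusion patch
        model(occluded)
        drop = base - activations["feat"]
    # Larger drop => filter more likely tied to the occluded semantic.
    return drop

# Example: rank candidate filters for one image and one semantic region.
img = torch.randn(1, 3, 224, 224)  # placeholder for a real preprocessed image
scores = filter_scores(img, box=(80, 60, 160, 140))
top_filters = torch.topk(scores, k=10).indices
print(top_filters.tolist())
```

Repeating this scoring over a handful of occlusion-annotated images and aggregating the per-filter drops would correspond to the screening step the abstract describes; the aggregation rule is left open here since the paper's exact scoring mechanism is not given in this record.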
Institution: School of Information
Source: Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators (ICPCSEE), 2021, Issue 2, pp. 103-115 (13 pages).
Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).