
Analyzing Interpretability Semantically via CNN Visualization

Abstract: Deep convolutional neural networks (CNNs) are widely used in image recognition, but their black-box property remains perplexing. In this paper, a method is proposed that uses visual annotation to interpret the internal structure of a CNN from a semantic perspective. First, filters in the high layers of the CNN are screened: for a given category, the important filters are selected by their activation values, activation frequencies, and contribution to classification. Then, deconvolution is used to visualize these filters, and semantic interpretations are assigned to them by referring to the visualized activation regions in the original images. The CNN model is thus interpreted and analyzed through these filters. Finally, the visualization results of some important filters are shown, and the semantic accuracy of the filters is verified against expert feature image sets. In addition, the results verify the semantic consistency of the same important filters across similar categories, which indicates the stability of the semantic annotations of these filters.
Institution: School of Information
Source: Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators (ICPCSEE), 2021, No. 2, pp. 88-102 (15 pages)
Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04)