Abstract
Although deep learning has been partially applied to radar image processing, the lack of interpretability analysis and comprehensive performance evaluation of black-box models limits the application performance, credibility, and generality of this technology in the radar image field. Starting from interpretability, this paper proposes an analysis approach for deep learning black-box models in the radar image field and verifies it experimentally on the open-source MSTAR radar image dataset. By analyzing the transfer mechanism and the cognitive mechanism of deep learning models, it draws conclusions on transfer learning, the application of attribution methods, and model robustness evaluation, filling gaps in the existing literature.
Authors
Weijie LI; Wei YANG; Yongxiang LIU; Xiang LI (College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China)
Source
Scientia Sinica (Informationis) (《中国科学:信息科学》)
Indexed in CSCD; Peking University Core Journal (北大核心)
2022, No. 6, pp. 1114-1134 (21 pages)
Funding
National Natural Science Foundation of China (Grant Nos. 61871384, 61401486)
Science Fund for Creative Research Groups of the National Natural Science Foundation of China (Grant No. 61921001)
Natural Science Foundation of Hunan Province (Grant No. 2017JJ3367)
Keywords
deep learning
radar image
interpretability
transfer learning
attribution method
robustness