Abstract
Target recognition in synthetic aperture radar (SAR) images is of great significance for acquiring targets on the ground and at sea. Realizing automatic interpretation of SAR image targets and improving recognition accuracy have become hot issues in SAR image research. To accurately obtain target information from SAR images, and to address the severe loss of detailed features and the tendency to overfit when deep neural networks are trained on small-sample SAR data, this study proposes an RCF (ResNet101-CBAM-FPN) neural network model for extracting SAR image features. ResNet101 serves as the backbone network for feature extraction, and a convolutional block attention module (CBAM) is inserted into the backbone to guide the network toward the key feature information of SAR images. A feature pyramid network (FPN) is then combined with the backbone to fuse high-level and low-level features and enrich the feature representation. Finally, the idea of transfer learning is incorporated: the RCF model is pre-trained on simulated SAR images, for which data are relatively abundant, the pre-trained parameters are transferred to the target network as its initialization, and the target network is then trained iteratively on real SAR images. Experimental results show that the proposed method effectively improves recognition accuracy on small-sample SAR data, achieving a recognition rate of 99.60% on the MSTAR dataset.
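The CBAM component mentioned in the abstract combines channel attention (a shared MLP over average- and max-pooled channel descriptors) with spatial attention (a convolution over channel-pooled maps). The sketch below, written in plain NumPy, illustrates that two-stage computation only; the function names, the reduction ratio `r`, and the box filter standing in for CBAM's learned 7x7 convolution are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W) feature map; w1 (C//r, C) and w2 (C, C//r) are the
    # shared-MLP weights applied to both the avg- and max-pooled descriptors.
    avg = x.mean(axis=(1, 2))                                  # (C,)
    mx = x.max(axis=(1, 2))                                    # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))             # (C,) in (0, 1)
    return x * att[:, None, None]

def spatial_attention(x, k=7):
    # Pool along the channel axis, then smooth with a k-by-k box filter
    # (a hypothetical stand-in for CBAM's learned 7x7 convolution).
    m = (x.mean(axis=0) + x.max(axis=0)) / 2.0                 # (H, W)
    pad = k // 2
    mp = np.pad(m, pad, mode="edge")
    conv = np.zeros_like(m)
    H, W = m.shape
    for i in range(H):
        for j in range(W):
            conv[i, j] = mp[i:i + k, j:j + k].mean()
    return x * sigmoid(conv)[None, :, :]                       # gate in (0, 1)

# Toy feature map and randomly initialized MLP weights.
rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1

# CBAM order: channel attention first, then spatial attention.
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # → (8, 16, 16)
```

Because both attention maps are sigmoid gates in (0, 1), the output keeps the input's shape while every activation is scaled down, which is how CBAM re-weights features without altering the backbone's tensor dimensions.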
Authors
CUI Ya-nan; WU Jian-ping; ZHU Chen-long; YAN Xiang-ru (School of Information Science & Engineering, Yunnan University, Kunming 650504, China; Yunnan Provincial Electronic Computing Center, Kunming 650223, China; Digital Media Technology Key Laboratory of Universities and Colleges in Yunnan Province, Kunming 650223, China)
Source
Computer Technology and Development (《计算机技术与发展》)
2022, No. 5, pp. 1-6 (6 pages)
Funding
Yunnan Province Major Science and Technology Special Plan Project (202002AD080001)
Key Project of the Applied Basic Research Program of the Yunnan Provincial Department of Science and Technology (2019FA044)