Abstract
To address the problems of small target size and susceptibility to interference from complex background information in UAV video tracking, a UAV target tracking algorithm based on an adaptive fusion network is proposed. First, a deep network model is constructed from a receptive field block and a residual network, which effectively extracts target features and enlarges their effective receptive field. Second, a multi-scale adaptive fusion network is proposed that adaptively fuses the semantic features of the deep layers with the detailed features of the shallow layers, enhancing the representational capability of the features. Finally, the fused target features are fed into a correlation filtering model, and the maximum confidence score of the response map is computed to determine the location of the tracked target. Simulation results show that the algorithm achieves a high tracking success rate and precision, effectively improving the performance of UAV target tracking.
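The abstract outlines three steps: feature extraction with a receptive field block and residual network, multi-scale adaptive feature fusion, and target localization at the peak of a correlation-filter response map. The sketch below is a minimal, hypothetical illustration of the last two steps in PyTorch; the AdaptiveFusion module, its learned softmax weights, the channel sizes, and locate_target are assumptions made for illustration and are not taken from the paper.

# Hypothetical sketch (not the authors' code): adaptive fusion of shallow
# detail features with deep semantic features, plus peak localization on a
# correlation-filter-style response map. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFusion(nn.Module):
    """Fuse two feature scales with channel-aligned, softmax-normalized weights."""

    def __init__(self, shallow_channels: int, deep_channels: int, out_channels: int = 256):
        super().__init__()
        # 1x1 convolutions align both branches to a common channel dimension.
        self.align_shallow = nn.Conv2d(shallow_channels, out_channels, kernel_size=1)
        self.align_deep = nn.Conv2d(deep_channels, out_channels, kernel_size=1)
        # Learnable scalar logits decide how much each branch contributes.
        self.weight_logits = nn.Parameter(torch.zeros(2))

    def forward(self, shallow_feat: torch.Tensor, deep_feat: torch.Tensor) -> torch.Tensor:
        shallow = self.align_shallow(shallow_feat)
        # Upsample the coarse deep feature map to the shallow (detailed) resolution.
        deep = F.interpolate(self.align_deep(deep_feat),
                             size=shallow.shape[-2:], mode="bilinear", align_corners=False)
        w = torch.softmax(self.weight_logits, dim=0)  # adaptive fusion weights
        return w[0] * shallow + w[1] * deep


def locate_target(response_map: torch.Tensor) -> tuple:
    """Return the (row, col) of the maximum confidence score in a 2-D response map."""
    idx = torch.argmax(response_map)
    h, w = response_map.shape
    return int(idx // w), int(idx % w)


if __name__ == "__main__":
    fusion = AdaptiveFusion(shallow_channels=64, deep_channels=512)
    shallow_feat = torch.randn(1, 64, 64, 64)   # fine-grained detail features
    deep_feat = torch.randn(1, 512, 16, 16)     # coarse semantic features
    fused = fusion(shallow_feat, deep_feat)     # -> (1, 256, 64, 64)
    response = torch.randn(64, 64)              # stand-in for a response map
    print(fused.shape, locate_target(response))

The softmax over two learned logits keeps the fusion weights positive and summing to one, which is one common way to realize adaptive weighting between a shallow detail branch and a deep semantic branch; the paper's actual fusion scheme may differ.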
Authors
LIU Fang (刘芳)
SUN Yanan (孙亚楠)*
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Source
Acta Aeronautica et Astronautica Sinica (《航空学报》)
2022, No. 7, pp. 359-369 (11 pages)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
Funding
National Natural Science Foundation of China (61171119).
Keywords
machine vision
unmanned aerial vehicle (UAV)
target tracking
feature fusion
correlation filtering