Abstract
To address the poor tracking and recognition accuracy of unmanned aerial vehicle (UAV) visual-tracking target-recognition algorithms in complex environments, caused by rapid target motion, short-term illumination changes, occlusion, and target shape deformation, a target recognition and tracking algorithm based on a deep fully convolutional Siamese network is proposed. The width of the convolutional neural network is enlarged and the feature-extraction stage of the fully convolutional Siamese tracking algorithm is expanded, while a regression branch is added to sharpen the accuracy of the recognized target contour; features are extracted from both the template frame and the detection frame. An online deep-network tracking algorithm with an edge-feature deep-convolution module is then adopted and combined with attention-based information fusion: the target in the initial frame serves as a fixed template, while a high-confidence update algorithm computes a dynamic template. During online real-time tracking, the fast Fourier transform is used to measure the target's change between the fixed and dynamic templates, and the maximum-likelihood probability map of the target, obtained from color features, is further fused with the deep features extracted from the two templates, suppressing the background while achieving target tracking. Experiments show that, when applied to complex UAV tracking scenes, the proposed algorithm reaches a precision of 79.9% and a success rate of 59.7%, a substantial improvement over the compared algorithms, fully demonstrating its advantage.
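The template-matching core described in the abstract follows the standard fully convolutional Siamese formulation: the template frame's feature map is cross-correlated with the detection (search) frame's feature map, and the peak of the resulting response map localizes the target. A minimal sketch of that scoring step, assuming generic NumPy feature maps rather than the paper's actual network (the function name and shapes are illustrative, not from the source):

```python
import numpy as np

def cross_correlation_score(template_feat, search_feat):
    """Slide the template feature map over the search feature map and
    return a response map of similarity scores (Siamese-style matching).

    template_feat: (th, tw, c) feature map of the template frame.
    search_feat:   (sh, sw, c) feature map of the detection frame.
    """
    th, tw, _ = template_feat.shape
    sh, sw, _ = search_feat.shape
    out_h, out_w = sh - th + 1, sw - tw + 1
    response = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Inner product between the template and one search window.
            window = search_feat[i:i + th, j:j + tw, :]
            response[i, j] = np.sum(window * template_feat)
    return response

# Toy example: embed a 2x2 "template" inside a 6x6 "search region".
rng = np.random.default_rng(0)
search = rng.standard_normal((6, 6, 4))
template = search[2:4, 1:3, :].copy()  # true location: row 2, col 1
resp = cross_correlation_score(template, search)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(resp.shape, peak)
```

At the true location the score equals the template's squared norm, so the response peak is expected to align with where the target actually lies; a real tracker would compute the feature maps with the shared convolutional backbone instead of using raw arrays.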
Authors
FANG Chuanbao, LIU Guangzhu (School of Intelligent Manufacturing, Anhui Vocational and Technical College, Hefei 230011, China; School of Microelectronics, Hefei University of Technology, Hefei 230041, China)
Source
Journal of Taiyuan University (Natural Science Edition) (《太原学院学报(自然科学版)》)
2024, No. 4, pp. 70-77 (8 pages)
Funding
2023 Key Natural Science Research Project of the Anhui Provincial Department of Education (2023AH051439).
Keywords
unmanned aerial vehicle (UAV)
visual tracking
deep network
fusion algorithm
Siamese network
model update