Funding: Supported by the National Key Research and Development Program of China (Nos. 2021YFC2009200 and 2023YFC3606100) and the Special Project of Technological Innovation and Application Development of Chongqing, China (No. cstc2019jscx-msxmX0167).
Abstract: Due to factors such as motion blur, video defocus, and occlusion, multi-frame human pose estimation is a challenging task. Exploiting the temporal consistency between consecutive frames is an effective approach to this problem. Currently, most methods explore temporal consistency by refining the final heatmaps. The heatmaps carry the semantic information of keypoints and can improve detection quality to a certain extent; however, they are generated from features, and feature-level refinement is rarely considered. In this paper, we propose a human pose estimation framework with refinements at both the feature and semantic levels. We align auxiliary features with the features of the current frame to reduce the loss caused by differing feature distributions, and then use an attention mechanism to fuse the auxiliary features with the current features. At the semantic level, we use the difference information between adjacent heatmaps as an auxiliary cue to refine the current heatmaps. The method is validated on the large-scale benchmark datasets PoseTrack2017 and PoseTrack2018, and the results demonstrate its effectiveness.
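The two refinement ideas in this abstract (attention-weighted fusion of aligned auxiliary features, and heatmap refinement from adjacent-frame differences) can be illustrated with a minimal NumPy sketch. All function names, the sigmoid gating, and the `alpha` blending weight below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(current, auxiliary):
    """Fuse an (already aligned) auxiliary frame's features into the
    current frame's features via a simple per-location attention gate.

    current, auxiliary: arrays of shape (C, H, W).
    """
    c, h, w = current.shape
    cur = current.reshape(c, -1)                 # (C, H*W)
    aux = auxiliary.reshape(c, -1)
    # Per-location similarity between current and auxiliary features.
    sim = (cur * aux).sum(axis=0) / np.sqrt(c)   # (H*W,)
    # Attention gate in (0, 1): how much auxiliary signal to mix in.
    gate = sigmoid(sim)
    fused = cur + gate * aux                     # weighted residual fusion
    return fused.reshape(c, h, w)

def refine_heatmaps(h_cur, h_prev, h_next, alpha=0.5):
    """Refine the current heatmaps using difference information from
    the adjacent frames' heatmaps (hypothetical sketch)."""
    diff = 0.5 * ((h_cur - h_prev) + (h_next - h_cur))
    return h_cur + alpha * diff
```

In this sketch, when the adjacent heatmaps agree with the current one, the difference term vanishes and the current prediction is kept unchanged; disagreement nudges the prediction toward the temporal trend.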
Funding: This work was supported by the National Program on Key Basic Research Project (No. 2014CB744903), the National Natural Science Foundation of China (Nos. 61673270 and 61973212), and the Key Technology Research Program of Sichuan Provincial Department of Science and Technology (No. 2020YFSY0027).
Abstract: Multiple object tracking (MOT) in unmanned aerial vehicle (UAV) videos has attracted increasing attention. Because of the UAV's observation perspective, object scale changes dramatically and objects are relatively small. Moreover, most MOT algorithms for UAV videos cannot run in real time due to the tracking-by-detection paradigm. We propose a feature-aligned attention network (FAANet), which mainly consists of a channel and spatial attention module and a feature-aligned aggregation module. We also improve real-time performance using the joint-detection-and-embedding paradigm and a structural re-parameterization technique. We validate the effectiveness of FAANet with extensive experiments on the UAV detection and tracking benchmark, achieving new state-of-the-art results of 44.0 MOTA and 64.6 IDF1 at 38.24 frames per second on a single 1080Ti graphics processing unit.
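The channel and spatial attention module named in this abstract can be sketched as two sequential gating steps, in the spirit of common channel-then-spatial attention designs. This is a minimal NumPy illustration under assumed pooling and gating choices, not FAANet's actual module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Apply channel attention, then spatial attention, to a feature
    map of shape (C, H, W) -- a hypothetical sketch.

    Channel step: global average pooling gives one gate per channel,
    emphasizing informative channels.
    Spatial step: channel-wise averaging gives one gate per location,
    emphasizing informative regions (useful for small objects).
    """
    # Channel attention: per-channel gate from global average pooling.
    ch_gate = sigmoid(feat.mean(axis=(1, 2)))    # (C,)
    feat = feat * ch_gate[:, None, None]
    # Spatial attention: per-location gate from channel averaging.
    sp_gate = sigmoid(feat.mean(axis=0))         # (H, W)
    return feat * sp_gate[None, :, :]
```

Because both gates are sigmoids, the module only re-weights the input features (each output value is an attenuated copy of the input), so it can be dropped into an existing backbone without changing tensor shapes.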