Abstract
[Purpose] In the field of video behavior recognition, how to effectively focus on important regions in video frames and make full use of spatiotemporal information is a significant research issue. [Methods] This paper proposes an Active Perception Mechanism (APM) that actively perceives crucial regions in videos. Specifically, the method employs a novel network model based on a spatiotemporal multi-scale attention mechanism and establishes a "scrutinizing-browsing" network. The scrutinizing branch and the browsing branch each embed a Multiscale Vision Transformer structure, giving the model self-attention-driven initiative in perceiving important regions and spatiotemporal multi-scale initiative at every stage of data processing. To maintain the consistency of inter-frame information while augmenting the data to improve robustness, a multiple dual-random data augmentation method is further introduced for sample expansion and data augmentation. [Results] The proposed method achieves competitive results on the large-scale human behavior recognition benchmarks Kinetics-400 and Kinetics-600.
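The abstract does not include an implementation, so the following is a minimal PyTorch sketch of the two-branch "scrutinizing-browsing" idea it describes: two encoders view the same clip at a fine and a coarse spatial scale, and their clip-level features are fused for classification. The plain nn.TransformerEncoder, the patch sizes, layer counts, and concatenation-based fusion are illustrative assumptions standing in for the Multiscale Vision Transformer branches described above, not the authors' code.

import torch
import torch.nn as nn


class BranchEncoder(nn.Module):
    """Tokenize a clip into space-time patches and encode them (stand-in for an MViT branch)."""

    def __init__(self, patch=16, dim=256, depth=4, heads=4):
        super().__init__()
        # A 3D convolution turns (B, C, T, H, W) into space-time patch tokens.
        self.to_tokens = nn.Conv3d(3, dim, kernel_size=(2, patch, patch),
                                   stride=(2, patch, patch))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, clip):                                      # clip: (B, 3, T, H, W)
        tokens = self.to_tokens(clip).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens).mean(dim=1)                   # clip-level feature (B, dim)


class ScrutinizeBrowseNet(nn.Module):
    """Two branches attend to the same clip at fine and coarse spatial scales."""

    def __init__(self, num_classes=400, dim=256):
        super().__init__()
        self.scrutinize = BranchEncoder(patch=16, dim=dim)  # fine detail
        self.browse = BranchEncoder(patch=32, dim=dim)      # coarse overview
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, clip):
        feat = torch.cat([self.scrutinize(clip), self.browse(clip)], dim=-1)
        return self.head(feat)


if __name__ == "__main__":
    model = ScrutinizeBrowseNet(num_classes=400)
    clip = torch.randn(2, 3, 8, 224, 224)   # batch of 2 eight-frame clips
    print(model(clip).shape)                # torch.Size([2, 400])

In the same hedged spirit, the multiple dual-random augmentation mentioned in the abstract could be approximated by drawing several augmented copies of each clip while applying one randomly sampled transform consistently to every frame within a copy, which preserves inter-frame consistency while expanding the training data.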
Authors
YAN Zhiyu, RU Yiwei, SUN Fupeng, SUN Zhenan
(School of Computer Science, Beijing Institute of Technology, Beijing 102488, China; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 102488, China)
Funding
General Program of the National Natural Science Foundation of China, "Research and Application on the Interpretability of Deep Feature Models for Face Recognition" (62276263).