Abstract
To address the problem that existing human action recognition methods cannot effectively fuse the local features of a video, and to improve recognition accuracy, the general characteristics of video were analyzed and a temporal excitation mechanism was proposed. BN-Inception was used as the base model to extract features from RGB image sequences and optical flow image sequences, and a temporal excitation module was embedded in the model to dynamically weight the sequence of local video features, highlighting the features in the local feature sequence that are useful for action recognition, so that the fused global feature becomes more discriminative. Experiments were carried out on the HMDB51 dataset and on OilField-7, a self-built oilfield production-site action recognition dataset; the recognition accuracies reach 71.6% and 92.8% respectively, verifying the effectiveness of the proposed method.
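As a rough illustration of the kind of dynamic weighting described in the abstract, the PyTorch sketch below applies a squeeze-and-excitation-style gate over the temporal (segment) dimension: segment-level features from a backbone such as BN-Inception are squeezed to one scalar per segment, a small bottleneck MLP produces per-segment weights, and the re-weighted segments are averaged into a global video descriptor. The class name TemporalExcitation, the 1024-dimensional features, the 3 segments, and the reduction ratio are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch (not the authors' code): squeeze-and-excitation-style
# gating over the temporal dimension to re-weight segment-level (local) features
# before fusing them into a single global video feature.
import torch
import torch.nn as nn


class TemporalExcitation(nn.Module):
    """Dynamically weights a sequence of T segment-level feature vectors."""

    def __init__(self, feature_dim: int = 1024, num_segments: int = 3, reduction: int = 4):
        super().__init__()
        hidden = max(num_segments // reduction, 1)
        # Bottleneck MLP that maps T squeezed scalars to T excitation weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(num_segments, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_segments),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, feature_dim) segment features from a backbone such as BN-Inception.
        squeezed = x.mean(dim=2)              # (batch, T): one scalar per segment
        weights = self.fc(squeezed)           # (batch, T): per-segment excitation weights
        weighted = x * weights.unsqueeze(-1)  # emphasize segments useful for recognition
        return weighted.mean(dim=1)           # (batch, feature_dim): fused global feature


if __name__ == "__main__":
    te = TemporalExcitation(feature_dim=1024, num_segments=3)
    segment_feats = torch.randn(2, 3, 1024)   # e.g. 3 sampled RGB or optical-flow segments
    global_feat = te(segment_feats)
    print(global_feat.shape)                  # torch.Size([2, 1024])
```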
Authors
LIANG Hong; ZHANG Zhao-lei; LI Chuan-xiu; ZHONG Min (College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China)
Source
《计算机工程与设计》
Peking University Core Journal (北大核心)
2020, No. 10, pp. 2907-2912 (6 pages)
Computer Engineering and Design
Funding
Supported by the Fundamental Research Funds for the Central Universities (18CX02138A).
Keywords
action recognition
local feature
temporal excitation mechanism
weighting
fusion