Abstract
Inspired by the mechanism of human visual perception, an action recognition approach that integrates a dual spatio-temporal network stream with visual attention is proposed within a deep learning framework. First, optical-flow features of human motion are extracted frame by frame from the video with a coarse-to-fine Lucas-Kanade estimator. Then, a GoogLeNet network, fine-tuned from a pre-trained model, convolves layer by layer and aggregates the appearance images and the corresponding optical-flow features within a given time window. Next, multi-layer Long Short-Term Memory (LSTM) recurrent networks cross-perceive these features to yield spatio-temporal semantic feature sequences with high-level salient structure; they decode the inter-dependent hidden states within the time window and output the spatial-stream visual feature descriptors together with a label probability distribution for each frame of the window. The attention confidence of each frame along the temporal dimension is then computed with relative entropy and fused with the label probability distributions perceived by the spatial network stream. Finally, a softmax classifier identifies the action category in the video. Experimental results show that, compared with existing methods, the proposed approach achieves significantly higher classification accuracy.
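The fusion step described above — weighting per-frame temporal-stream label distributions by a relative-entropy confidence and combining them with the spatial-stream distribution before a softmax — can be illustrated with a minimal sketch. This is not the paper's implementation: the choice of a uniform distribution as the KL reference, softmax-normalized weights, and simple averaging as the fusion rule are all assumptions made for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """Relative entropy D(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def attention_fusion(frame_probs, spatial_probs):
    """Fuse per-frame temporal-stream label distributions with the
    spatial-stream distribution.

    Each frame's attention confidence is its relative entropy against a
    uniform prior (an assumed saliency measure): frames whose label
    distributions are more peaked receive larger weights. The weighted
    temporal distribution is then averaged with the spatial-stream
    distribution and renormalized with a softmax.
    """
    k = len(spatial_probs)
    uniform = [1.0 / k] * k
    # Confidence of each frame: distance of its distribution from uniform.
    conf = [kl_divergence(p, uniform) for p in frame_probs]
    weights = softmax(conf)
    # Attention-weighted temporal-stream distribution over the k classes.
    temporal = [sum(w * p[c] for w, p in zip(weights, frame_probs))
                for c in range(k)]
    # Fuse the two streams by averaging, then renormalize.
    return softmax([0.5 * (t + s) for t, s in zip(temporal, spatial_probs)])
```

With this scheme a near-uniform (uninformative) frame contributes little to the fused decision, while a confidently peaked frame dominates — the intended effect of the temporal attention confidence.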
Authors
LIU Tianliang (1), QIAO Qingwei (1), WAN Junwei (1), DAI Xiubin (1), LUO Jiebo (2)
1. Jiangsu Provincial Key Laboratory of Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
2. Department of Computer Science, University of Rochester, Rochester 14627, USA
Source
Journal of Electronics & Information Technology (《电子与信息学报》)
Indexed in: EI, CSCD, Peking University Core Journals
2018, No. 10, pp. 2395-2401 (7 pages)
Funding
National Natural Science Foundation of China (61001152, 31200747, 61071091, 61071166, 61172118)
Natural Science Foundation of Jiangsu Province (BK2012437)
Scientific Research Foundation of Nanjing University of Posts and Telecommunications (NY214037)
China Scholarship Council
Keywords
Human action recognition
Optical flow
Spatio-temporal dual network flow
Visual attention
Convolutional Neural Network (CNN)
Long Short-Term Memory (LSTM)