Abstract
With the development of artificial intelligence and the popularity of wearable sensor devices, sensor-based human activity recognition (HAR) has attracted wide attention and has great application value. Extracting highly discriminative features is the key to improving HAR accuracy. Exploiting the ability of convolutional neural networks (CNN) to extract good features from raw data without domain knowledge, and addressing the fact that existing sensor-based HAR ignores the spatial dependence of single-axis, multi-location data from tri-axial sensors, this paper proposes two activity-image construction methods, T-2D and M-2D, which build multi-location single-axis activity images and an activity image for the non-tri-axial sensors. It then proposes the convolutional network models T-2DCNN and M-2DCNN, which extract the spatio-temporal dependence of the three single-axis activity-image groups and the temporal dependence of the non-tri-axial sensors, and concatenate the convolved features into high-level features for classification. To optimize the network structure and reduce the number of trainable parameters in the convolutional layers, weight-sharing convolutional network models are further proposed. In comparison experiments with existing work on public datasets under default parameters, the proposed methods improve the F1-value by up to 6.68% on OPPORTUNITY and 1.09% on SKODA. The effectiveness of the models is further verified with respect to varying numbers of sensors and per-class recognition accuracy, and the weight-sharing models reduce the number of trainable parameters while preserving recognition performance.
Wearable sensor-based human activity recognition (HAR) plays a significant role in current smart applications, driven by advances in artificial intelligence and the popularity of wearable sensors. Salient and discriminative features improve the performance of HAR. To capture the local dependence over time and space on the same axis of multi-location sensor data with convolutional neural networks (CNN), which is ignored by existing methods with 1D and 2D kernels, this study proposes two activity-image construction methods, T-2D and M-2D. They construct three activity images from the individual axes of multi-location 3D accelerometers and one activity image from the other sensors. Two CNN models, T-2DCNN and M-2DCNN, are then built on T-2D and M-2D respectively; they fuse the features of the four activity images for the classifier. To reduce the number of CNN weights, the weight-shared models TS-2DCNN and MS-2DCNN are further proposed. Under the default experimental settings on public datasets, the proposed methods outperform existing work, with the F1-value improved by up to 6.68% on OPPORTUNITY and 1.09% on SKODA. Both the naïve and weight-shared models also achieve better F1-values in most cases across different numbers of sensors and in the per-class F1-value differences.
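The abstract describes building one 2D "activity image" per accelerometer axis by grouping the same axis across all sensor locations over a time window (the T-2D idea). The sketch below illustrates this construction under assumptions: the column ordering, window shape, function name, and sizes are illustrative, not the authors' exact preprocessing.

```python
import numpy as np

def build_axis_images(window, n_locations):
    """Split a multi-location accelerometer window into per-axis activity images.

    window: array of shape (time_steps, n_locations * 3), columns assumed
    ordered as (loc0_x, loc0_y, loc0_z, loc1_x, ...).
    Returns three images of shape (n_locations, time_steps), one per axis,
    so a 2D convolution can span sensor locations and time on a single axis.
    """
    t, c = window.shape
    assert c == n_locations * 3
    data = window.reshape(t, n_locations, 3)        # (time, location, axis)
    return [data[:, :, a].T for a in range(3)]      # (location, time) per axis

# Illustrative usage: 64 time steps from 5 hypothetical sensor locations.
rng = np.random.default_rng(0)
win = rng.normal(size=(64, 5 * 3))
imgs = build_axis_images(win, 5)
print([im.shape for im in imgs])  # three (5, 64) single-axis images
```

Each of the three resulting images can then be fed to its own convolutional branch, and the branch outputs concatenated into the high-level feature vector mentioned in the abstract.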
Authors
邓诗卓
王波涛
杨传贵
王国仁
DENG Shi-Zhuo;WANG Bo-Tao;YANG Chuan-Gui;WANG Guo-Ren(School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China;School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China)
Source
《软件学报》
EI
CSCD
PKU Core (北大核心)
2019, No. 3, pp. 718-737 (20 pages)
Journal of Software
Funding
National Natural Science Foundation of China (61872072, U1401256, 61173030, 61732003)
Keywords
human activity recognition
convolutional neural network
wearable sensor
feature extraction
activity image