Abstract
Facial expression recognition methods struggle to achieve a low parameter count and high accuracy at the same time. To address this, a facial expression recognition method based on the ShuffleNetV2 network combined with an attention mechanism is proposed. Building on the ShuffleNetV2 architecture, the model is fine-tuned and its ReLU activation functions are replaced with PReLU, improving its feature-capture and classification ability. In addition, an ultra-lightweight dual attention module, LDAM, is introduced: it combines the DCAM attention mechanism with a spatial attention mechanism and is integrated into the optimized ShuffleNetV2 model through shortcut connections, strengthening the model's recognition of fine-grained features and its classification performance. Experiments on two widely used facial expression recognition datasets, FER2013 and CK+, show that the proposed method reaches recognition accuracies of 69.12% and 94.77%, respectively, while keeping the parameter count as low as 1.25 M. These results demonstrate that recognition performance can be improved while keeping the model lightweight, and the experiments confirm the efficiency and practicality of the proposed method.
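To make the architectural description above concrete, the following is a minimal PyTorch sketch of the two ingredients the abstract names: swapping ReLU for PReLU in ShuffleNetV2, and a lightweight dual attention module (a channel branch plus a spatial branch) merged with the input through a shortcut connection. The paper's exact DCAM/LDAM designs are not reproduced here, so the channel branch below is a generic squeeze-and-excitation-style stand-in, and all class names and hyperparameters are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch of the ideas described in the abstract (not the paper's code).
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """CBAM-style spatial attention: pool over channels, then conv + sigmoid."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):
            avg_pool = x.mean(dim=1, keepdim=True)    # (N, 1, H, W)
            max_pool, _ = x.max(dim=1, keepdim=True)  # (N, 1, H, W)
            attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
            return x * attn

    class ChannelAttention(nn.Module):
        """Stand-in for the paper's DCAM branch; its exact design is not given here."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.PReLU(),  # PReLU in place of ReLU, as the abstract describes
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.fc(x)

    class LDAM(nn.Module):
        """Lightweight dual attention: channel + spatial branches plus a shortcut."""
        def __init__(self, channels: int):
            super().__init__()
            self.channel = ChannelAttention(channels)
            self.spatial = SpatialAttention()

        def forward(self, x):
            return x + self.spatial(self.channel(x))  # shortcut (residual) connection

    def relu_to_prelu(module: nn.Module):
        """Recursively replace every nn.ReLU in a model with nn.PReLU."""
        for name, child in module.named_children():
            if isinstance(child, nn.ReLU):
                setattr(module, name, nn.PReLU())
            else:
                relu_to_prelu(child)

On a reference ShuffleNetV2 (for example, torchvision's), relu_to_prelu(model) would swap the activations in place, and an LDAM block could then be attached after selected stages; where the module is inserted is a design choice specified in the paper itself.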
Authors
Lin Enhui; Wang Fan; Tan Xiaoling (College of Electronics and Information Engineering, Chongqing Three Gorges University, Chongqing 404100, China)
Source
Electronic Measurement Technology (《电子测量技术》), a PKU Core (北大核心) journal, 2024, No. 10, pp. 168-174 (7 pages)
Funding
Supported by the Chongqing Key Laboratory Open Fund (ZD2020A0302).
Keywords
improvement of facial expression recognition method
activation function
spatial attention mechanism
lightweight model
ultra-lightweight dual attention module