
A Survey on Interpretability of Facial Expression Recognition
Abstract: In recent years, Facial Expression Recognition (FER) has been widely used in medicine, social robotics, communication, security, and many other fields, and a growing number of researchers have proposed useful algorithms in this area. At the same time, the study of FER interpretability has attracted increasing attention, as it can deepen researchers' understanding of the models and help ensure their fairness, privacy preservation, and robustness. In this paper, we summarize interpretability work in FER under a three-way classification: result interpretability, mechanism interpretability, and model interpretability. Result interpretability indicates the extent to which people with relevant experience can consistently understand the outputs of a model. Result-interpretable FER mainly includes methods based on text description and on the basic structure of the face; the latter comprises approaches based on facial Action Units (AUs), topological modeling, caricature images, and interference analysis. Mechanism interpretability focuses on explaining the internal workings of models, including the attention mechanism in FER as well as interpretability methods based on feature decoupling and concept learning. For model interpretability, researchers seek the decision principles or rules of the models; this paper covers interpretable classification methods in FER, including approaches based on Multi-Kernel Support Vector Machines (MKSVM) and on decision trees and deep forests. We further compare and analyze existing FER interpretability work and identify current problems in this area, including the lack of evaluation metrics for FER interpretability analysis, the challenge of balancing the accuracy and interpretability of FER models, and the limited interpretability data available for expression recognition. Finally, we discuss and look ahead to four future directions. The first is the interpretability of complex expression recognition, mainly compound expressions and finer-grained expressions. The second is the interpretability of multi-modal emotion recognition: multimodal models achieve better performance by combining the complementary information of each modality, and their interpretability analysis is an important direction worth exploring. The third is the interpretability of expression and emotion recognition with large models, including Large Vision Models, Vision-Language Models, and Multi-modal Large Models, where interpretability study can help improve the safety and reliability of large models. The fourth is enhancing generalization ability based on interpretability: when models learn "relevance" rather than "causality", they are prone to wrong judgments when encountering new data or other confounding factors, i.e., they generalize poorly; interpretability analysis deepens our understanding of the nature of the models, helps explain the causal relationship between input and output, and can therefore improve generalization performance. This paper aims to provide interested researchers with a comprehensive review and analysis of the current state of research on the interpretability of facial expression recognition, thereby promoting further development of this field.
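To make the "model interpretability" category above concrete, the following is an illustrative sketch (not taken from the paper) in the spirit of the decision-tree approaches the survey covers: a classifier over hypothetical facial Action Unit (AU) intensity features whose learned rules can be printed and read directly. The AU choices and data are invented for illustration.

```python
# Sketch of a model-interpretable FER classifier: a shallow decision
# tree over hypothetical AU intensities. The tree's rules are directly
# inspectable, which is the core appeal of this family of methods.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical AU intensity vectors in [0, 1]:
# [AU6 (cheek raiser), AU12 (lip corner puller), AU4 (brow lowerer)]
X = [
    [0.9, 0.8, 0.1],  # happiness: AU6 + AU12 active
    [0.8, 0.9, 0.0],
    [0.1, 0.1, 0.9],  # anger: AU4 active
    [0.0, 0.2, 0.8],
    [0.1, 0.0, 0.1],  # neutral: no AUs active
    [0.2, 0.1, 0.0],
]
y = ["happy", "happy", "angry", "angry", "neutral", "neutral"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable decision rules, e.g. "AU4 <= ... -> happy/neutral".
rules = export_text(tree, feature_names=["AU6", "AU12", "AU4"])
print(rules)
print(tree.predict([[0.85, 0.90, 0.05]])[0])
```

A deep FER network would normally supply the AU intensities; the interpretable tree then maps them to expression labels, trading some accuracy for transparent decision rules, which is exactly the accuracy-interpretability trade-off the survey identifies as an open problem.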
Authors: ZHANG Miao-Xuan; ZHANG Hong-Gang (School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876)
Source: Chinese Journal of Computers (《计算机学报》), 2024, Issue 12, pp. 2819-2851 (33 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journals list.
Funding: Supported by the National Natural Science Foundation of China (General Program, No. 62076034).
Keywords: facial expression recognition; interpretability; computer vision; affective computing; machine learning