Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region-based Convolutional Neural Network (FRCNN), designing a specialized FRCNN architecture tailored for facial emotion recognition that leverages its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications, and the results present a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
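The abstract describes a two-stage pipeline (Inception V3 feature extraction feeding an emotion classifier) without giving the architecture details. The sketch below illustrates only the first stage: a pretrained Inception V3 backbone re-headed for a seven-class emotion set. The class count, layer sizes, and use of torchvision are assumptions for illustration, not the paper's specification.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained Inception V3 backbone; the 1000-class ImageNet head is replaced
# by a 7-class emotion head (assuming the common
# anger/disgust/fear/happy/sad/surprise/neutral label set).
backbone = models.inception_v3(weights="DEFAULT")
backbone.fc = nn.Linear(backbone.fc.in_features, 7)
backbone.eval()  # in eval mode, inception_v3 returns only the main logits

with torch.no_grad():
    dummy = torch.randn(1, 3, 299, 299)  # Inception V3 expects 299x299 input
    logits = backbone(dummy)             # shape (1, 7)
```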
Abstract: Recently, people have been paying increasing attention to mental health, including depression, autism, and other common mental disorders. To support the diagnosis of such disorders, intelligent methods have been actively studied. However, existing models suffer accuracy degradation caused by poor clarity and occlusion of human faces in practical applications. This paper therefore proposes a multi-scale feature fusion network that obtains feature information at three scales by locating the sentiment region in the image, and integrates global and local feature information. In addition, a focal cross-entropy loss function is designed to improve the network's focus on difficult samples during training, enhance the training effect, and increase recognition accuracy. Experimental results on the challenging RAF-DB dataset show that the proposed model achieves better facial expression recognition accuracy than existing techniques.
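The paper's exact loss is not spelled out beyond its name; a common reading of a "focal cross-entropy loss" is the focal loss of Lin et al. (2017), which down-weights well-classified samples so gradients concentrate on hard ones. A minimal sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def focal_cross_entropy(logits, targets, gamma=2.0):
    """Focal loss: (1 - p_t)^gamma * cross-entropy, so easy samples
    (high p_t) contribute little and hard samples dominate training."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log(p_t) per sample
    pt = torch.exp(-ce)                                      # recover p_t
    return ((1.0 - pt) ** gamma * ce).mean()

# Example: a batch of 4 samples over 7 expression classes.
loss = focal_cross_entropy(torch.randn(4, 7), torch.tensor([0, 2, 5, 6]))
```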
Funding: supported by the Healthcare AI Convergence R&D Program through the National IT Industry Promotion Agency of Korea (NIPA) funded by the Ministry of Science and ICT (No. S0102-23-1007), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A1A03015496).
Abstract: Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods extract features from the entire facial image and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature-vector extraction method using the Euclidean distances between landmarks whose positions change with facial expression, especially around the eyes, eyebrows, nose, and mouth. We then apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was compared with conventional algorithms on public databases. The results indicate that the proposed method achieves higher accuracy than traditional facial-expression-based methods. In particular, our experiments on the FER2013 database show that the proposed method is robust to lighting conditions and backgrounds, with an average of 25% higher performance than previous studies. Consequently, the proposed method is expected to recognize facial expressions, especially fear and anger, helping to prevent severe accidents by detecting security-related or dangerous actions in advance.
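The core feature, pairwise Euclidean distances between expression-sensitive landmarks, is straightforward to sketch. The index pairs below use dlib's common 68-point convention (e.g., 48/54 are the mouth corners) purely as an illustration; the paper's actual landmark set and pairs are not specified here.

```python
import numpy as np

def landmark_distance_features(landmarks, pairs):
    """landmarks: (N, 2) array of (x, y) positions from a face detector;
    pairs: iterable of index pairs. Returns one Euclidean distance per pair."""
    lm = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(lm[i] - lm[j]) for i, j in pairs])

# Illustrative pairs in the 68-point convention (assumed, not the paper's):
# mouth corners, outer eye corners, eyebrow-to-eyelid gaps.
pairs = [(48, 54), (36, 45), (19, 37), (24, 44)]
features = landmark_distance_features(np.random.rand(68, 2) * 100, pairs)
```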
Abstract: Background: Alzheimer's sufferers (AS) are unable to visually recognize facial emotions (VRFE). However, we do not know which emotions are involved, the timeline for the onset of this loss during the natural course of the disease, or whether it correlates with other comorbid cognitive disorders. The authors therefore aimed to determine whether a deficit in facial emotion recognition is present at the onset of Alzheimer's disease, distinctly and concurrently with the onset of cognitive impairment, or whether it is a prodromal syndrome of Alzheimer's disease preceding cognitive decline, and which emotions are involved. A secondary aim was to investigate relationships between facial emotion recognition and cognitive performance on various parameters. Method: Single-blind case-control study set in a memory clinic. Participants: 12 patients (AS) and 12 control subjects (CS) were enrolled. Measurements: Quantitative information about the ability for facial emotion recognition was obtained from the Method of Analysis and Research on the Integration of Emotions (MARIE). The Mini-Mental State Examination (MMSE), Picture Naming, the Mattis Dementia Rating Scale (DRS), and the Grober & Buschke Free and Cued Selective Reminding Test (FCSRT) were used to measure cognitive impairment. Results: The AS showed impaired visual recognition of facial emotions, with a higher recognition threshold; they were less sensitive to visual recognition cues and unable to distinguish anger from fear. This may explain some acts of aggressiveness seen in clinical and home settings in "AS with behavioral disturbance". The anger-fear series was the first affected in the course of Alzheimer's. When VRFE is plotted with the percentage of correct recognition on the y-axis and the density of emotion in the stimulus images on the x-axis, the curve is sigmoid for the control group but linear, reflecting cognitive distortion, for the Alzheimer's patients. In both groups, correct recognition is intuitively and theoretically expected to be directly proportional to the density of the represented emotion in the stimulus image; this holds for CS but not for AS. MARIE processing of emotions appears to be strengthened by optimal cognitive function, so the expectation applies to CS but not uniformly to AS; in AS, the decline of cognitive functions contributes to the "linearization" of the graph. There is a direct positive correlation between MARIE results and performance on cognitive tests. Conclusion: Administering a combination of the DRS, FCSRT, and MARIE to patients screened for possibly emerging Alzheimer's could provide a more detailed and specific approach to a definitive early diagnosis. The Alzheimer's patients found it difficult to distinguish between anger and fear.
Funding: supported by the Natural Science Foundation of China (Nos. 30971042 and 91132715), the Innovative Research Team for Translational Neuropsychiatric Medicine, Zhejiang Province (2011R50049), and the Program for Changjiang Scholars and Innovative Research Team in University, Chinese Ministry of Education (No. IRT1038).
Abstract: Objective: To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients. Methods: Recognition of facial expressions of emotion was assessed with the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) set, the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales in 88 paranoid schizophrenia patients and 75 healthy volunteers. Results: Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, displayed lower JACFEE recognition accuracies, and performed more poorly on the WCST. In patients, JACFEE recognition accuracy for contempt and disgust correlated negatively with the negative symptom scale score, recognition accuracy for fear correlated positively with the positive symptom scale score, and recognition accuracy for surprise correlated negatively with the general psychopathology score. Moreover, WCST performance predicted JACFEE recognition accuracy for contempt, disgust, and sadness in patients, and perseverative errors negatively predicted recognition accuracy for sadness in healthy volunteers. JACFEE recognition accuracy for sadness predicted WCST categories in paranoid schizophrenia patients. Conclusion: Recognition accuracy for social/moral emotions such as contempt, disgust, and sadness is related to executive function in paranoid schizophrenia patients, especially for sadness.
Funding: supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (62121001).
Abstract: Facial emotions have great significance in human-computer interaction, virtual reality, and interpersonal communication. Existing methods for facial emotion privacy mainly concentrate on perturbing facial emotion images. However, cryptography-based perturbation algorithms are computationally expensive, and transformation-based perturbation algorithms only target specific recognition models. In this paper, we propose a universal feature-vector-based privacy-preserving perturbation algorithm for facial emotion. Our method protects facial emotion images in the feature space by computing tiny perturbations and adding them to the original images. In addition, the proposed algorithm can cause expression images to be recognized as specific labels. Experiments show that the protection success rate of our method is above 95% and the image-quality score degrades by no more than 0.003. The quantitative and qualitative results show that the proposed method balances privacy and usability.
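The paper's own algorithm is not reproduced here, but the general idea (a tiny additive perturbation that steers a recognizer toward a chosen label) resembles a standard targeted gradient attack. A generic PGD-style sketch under that assumption, not the authors' method:

```python
import torch
import torch.nn.functional as F

def targeted_perturbation(model, image, target_label, step=1 / 255, iters=10):
    """Iteratively add a tiny perturbation that pushes the recognizer's output
    toward `target_label` while keeping the image visually near-identical.
    `model` maps a (1, C, H, W) image in [0, 1] to class logits."""
    x = image.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x), target_label)
        loss.backward()
        with torch.no_grad():
            x -= step * x.grad.sign()  # descend the loss toward the target label
            x.clamp_(0.0, 1.0)         # remain a valid image
        x.grad = None
    return x.detach()
```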
Funding: supported in part by the Technological Breakthrough Project of the Science, Technology and Innovation Commission of Shenzhen Municipality (No. JSGG20201102162000001), the InnoHK Initiative of the Hong Kong SAR Government, and the Laboratory for AI-Powered Financial Technologies Ltd.
Abstract: Facial emotion recognition achieves great success with the help of large neural models but often cannot be deployed in practical situations because of their size. To bridge this gap, this paper combines two mainstream model compression methods, pruning and quantization, into a pruning-then-quantization framework for compressing neural models for facial emotion recognition tasks. Experiments on three datasets show that our model achieves a high compression ratio while maintaining high performance. In addition, we analyze the layer-wise compression performance of the proposed framework to explore its effect and adaptability in fine-grained modules.
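Both compression stages exist as standard PyTorch utilities, so the framework's outline can be sketched with them; the toy model, the 50% pruning amount, and the layer choices below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a FER classifier (architecture assumed for illustration).
model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 7))

# Stage 1 -- pruning: zero out the 50% smallest-magnitude weights per layer,
# then bake the pruning mask permanently into the weights.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")

# Stage 2 -- quantization: store the surviving weights as 8-bit integers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```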
Funding: supported by the National Natural Science Foundation of China (61403422, 61273102), the Hubei Provincial Natural Science Foundation of China (2015CFA010), the 111 Project (B17040), and the Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan).
Abstract: A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions adapted to them. A facial emotion recognition method based on 2D Gabor filters, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition for robots. Robot facial expressions are represented by simple cartoon symbols displayed on an LED screen mounted on the robot, which humans can easily understand. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is achieved through facial expression recognition of humans and facial expression generation by robots within 2 seconds. As prospective applications, the FEER-HRI system can be used in home service, smart homes, safe driving, and so on.
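The 2D-Gabor plus uniform-LBP descriptor stage can be sketched with scikit-image. The filter frequency, orientation, and LBP radius below are illustrative assumptions (a real system would use a Gabor filter bank), and the paper's ELM classifier stage is omitted.

```python
import numpy as np
from skimage import data, filters
from skimage.feature import local_binary_pattern

face = data.camera().astype(float)  # stand-in grayscale image

# One 2D-Gabor response at an assumed frequency and orientation.
gabor_real, _ = filters.gabor(face, frequency=0.3, theta=np.pi / 4)

# Uniform LBP over the Gabor response; its histogram is the texture
# descriptor that a classifier such as an ELM would consume.
P, R = 8, 1
lbp = local_binary_pattern(gabor_real, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
```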
Abstract: Changes in social and emotional behaviour have been consistently observed in patients with traumatic brain injury. These changes are associated with emotion recognition deficits, which represent one of the major barriers to successful familial and social reintegration. In the present study, 32 patients with traumatic brain injury involving the frontal lobe and 41 age- and education-matched healthy controls were analyzed. A Go/No-Go task was designed in which each participant had to recognize faces representing three social emotions (arrogance, guilt, and jealousy). The results suggested that the ability to recognize two of these emotions (arrogance and jealousy) was significantly reduced in patients with traumatic brain injury, indicating that frontal lesions can reduce emotion recognition ability. In addition, analysis by hemispheric lesion location (right, left, or bilateral) showed that the bilateral-lesion sub-group had lower accuracy on all social emotions.
Funding: financial support was provided through the regional agreement on medical training and clinical research between Stockholm County Council and the Karolinska Institutet (ALF), by grants from the National Board of Forensic Medicine in Sweden, and by grants from the Swedish Research Council.
Abstract: Emotional facial expressions are important cues for interaction between people. The aim of the present study was to investigate brain function during the processing of fearful facial expressions in offenders with two psychiatric disorders that involve impaired emotional facial perception: autism spectrum disorder (ASD) and psychopathy (PSY). Fourteen offenders undergoing forensic psychiatric assessment (7 with ASD and 7 psychopathic offenders) and 12 healthy controls (HC) viewed fearful and neutral faces while undergoing functional magnetic resonance imaging (fMRI). Brain activity (fearful versus neutral faces) was compared both between HC and offenders and between the two offender groups (PSY and ASD). Functional co-activation was also investigated. Compared to HC, the offenders showed increased activity bilaterally in the amygdala and medial cingulate cortex, as well as in the left hippocampus, while processing fearful facial expressions. The two offender subgroups differed from each other in five regions. Functional co-activation analysis suggested a strong correlation between the amygdala and the anterior cingulate cortex (ACC) in the left hemisphere only in the PSY group. These findings suggest enhanced neural processing of fearful faces in the amygdala and other face-processing brain areas in offenders compared to HC. Moreover, the co-activation between the amygdala and ACC in the PSY but not the ASD group suggests qualitative differences in amygdala activity between the two groups. Given the small sample size, this should be regarded as a pilot study.
Abstract: Artificial entities such as virtual agents have become more pervasive. Their long-term presence among humans requires the ability to express appropriate emotions to elicit empathy from users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents in empathic interactions remain unclear. Our study evaluates participants' behavioral synchronization when a virtual agent exhibits emotional expressions congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent and then conversed with the agent about the video, for example about how they felt about its content. During the dialog, the virtual agent expressed either emotions congruent with the video or a neutral emotion. Participants' facial expressions, including expressive intensity and facial muscle movement, were measured with a camera during the dialog. The results showed significant behavioral synchronization (i.e., cosine similarity ≥ .05) in both the negative and positive emotion conditions, evident in the participants' facial mimicry of the virtual agent. Additionally, participants' facial expressions, in both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral one. In particular, we found that the facial muscle intensity of AU45 (blink) is an effective index for assessing synchronization, one that differs by the individual's empathic capability (low, mid, high). Based on these results, we suggest an appraisal criterion providing empirical conditions to validate empathic interaction based on facial expression measures.
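The synchrony index is a plain cosine similarity compared against the study's .05 threshold. A minimal sketch; the per-AU intensity vectors are hypothetical stand-ins for the camera-derived features, whose exact form the abstract does not specify:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, e.g., per-frame
    action-unit intensity vectors of participant and virtual agent."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical AU-intensity vectors for participant and agent.
participant = [0.20, 0.80, 0.10, 0.55]
agent = [0.25, 0.70, 0.15, 0.40]
synchronized = cosine_similarity(participant, agent) >= 0.05  # study's criterion
```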
Funding: supported by the National Key Research and Development Program of China (2016YFC1306200), the National Natural Science Foundation of China (91132750), Major Projects of the National Social Science Foundation of China (14ZDB161), and the Key Research and Development Program of Jiangsu Province, China (BE2016616).
Abstract: The symptoms of autism spectrum disorder (ASD) have been hypothesized to be caused by changes in brain connectivity. From the clinical perspective, this "disconnectivity" hypothesis has been used to explain characteristic impairments in socio-emotional function. In this study, we therefore compared facial emotion recognition (FER) and the integrity of social-emotion-related white-matter tracts between children and adolescents with high-functioning ASD (HFA) and their typically developing (TD) counterparts. The correlation between the two factors was explored to determine whether impairment of the white-matter tracts is the neural basis of social-emotional disorders. Compared with the TD group, FER was significantly impaired and the fractional anisotropy of the right cingulate fasciculus was increased in the HFA group (P < 0.01). In conclusion, FER is impaired in children and adolescents with HFA, and the microstructure of the cingulate fasciculus shows abnormalities.
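Fractional anisotropy, the white-matter integrity measure compared here, is a standard scalar computed from the three eigenvalues of the diffusion tensor; a minimal implementation of the textbook formula (the eigenvalues below are made up for illustration):

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Standard DTI fractional anisotropy from the three diffusion-tensor
    eigenvalues: 0 for isotropic diffusion, approaching 1 when diffusion is
    confined to one axis, as along coherent white-matter tracts."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()  # mean diffusivity
    return float(np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2)))

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # elongated tensor -> high FA
```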
Funding: supported by the National Key Research & Development Plan of China (No. 2017YFB1002804), the National Natural Science Foundation of China (Nos. 61425017, 61773379, 61332017, 61603390, and 61771472), and the Major Program for the National Social Science Fund of China (No. 13&ZD189).
Abstract: Facial emotion recognition is an essential aspect of human-machine interaction. Past research on facial emotion recognition has focused on laboratory environments, but real-world conditions pose many challenges, e.g., illumination changes, large pose variations, and partial or full occlusions. These challenges leave different face areas with different degrees of sharpness and completeness. Inspired by this fact, we focus on the authenticity of predictions generated by different <emotion, region> pairs. For example, if only the mouth area is available and the emotion classifier predicts happiness, the question is how to judge the authenticity of that prediction. This problem can be converted into the contribution of different face areas to different emotions. In this paper, we divide the whole face into six areas: nose, mouth, eyes, nose-to-mouth, nose-to-eyes, and mouth-to-eyes. To obtain more convincing results, our experiments are conducted on three databases: Facial Expression Recognition+ (FER+), the Real-world Affective Faces Database (RAF-DB), and the Expression in-the-Wild (ExpW) dataset. Through analysis of the classification accuracy, the confusion matrix, and the class activation map (CAM), we establish convincing results. In summary, the contributions of this paper are twofold: 1) we visualize the face areas attended to during emotion recognition; and 2) we experimentally analyze the contribution of different face areas to different emotions in real-world conditions. Our findings can be combined with findings in psychology to promote the understanding of emotional expressions.
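The class activation map used in the analysis weights the final convolutional feature maps by a class's classifier weights to show which face regions drove that prediction (Zhou et al., 2016). A minimal sketch; the tensor shapes and random inputs are illustrative assumptions:

```python
import torch

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) output of the last conv layer;
    fc_weights: (num_classes, C) weights of the final linear layer.
    Returns an (H, W) map, normalized to [0, 1], highlighting the
    spatial regions that contributed to `class_idx`."""
    cam = torch.einsum("c,chw->hw", fc_weights[class_idx], feature_maps)
    cam = torch.relu(cam)            # keep only positive evidence
    return cam / (cam.max() + 1e-8)  # normalize for visualization

cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(7, 512), class_idx=3)
```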