Journal Articles
13 articles found
1. Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition
Authors: J. Sheril Angel, A. Diana Andrushia, T. Mary Neebha, Oussama Accouche, Louai Saker, N. Anand. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2427-2448 (22 pages)
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications and present a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
Keywords: facial emotions; FRCNN; deep learning; emotion recognition; face; CNN
2. A Multi-scale Attention-based Facial Emotion Recognition Method Based on Deep Learning (cited: 1)
Authors: ZHANG Ning, ZHANG Xiufeng, FU Xingkui, QI Guobin. Instrumentation, 2022, No. 3, pp. 51-58 (8 pages)
Recently, people have been paying increasing attention to mental health, including depression, autism, and other common mental diseases. Intelligent methods for diagnosing such conditions have therefore been actively studied. However, existing models suffer accuracy degradation caused by the varying clarity and occlusion of human faces in practical applications. This paper thus proposes a multi-scale feature fusion network that obtains feature information at three scales by locating the sentiment region in the image, and integrates global and local feature information. In addition, a focal cross-entropy loss function is designed to improve the network's focus on difficult samples during training, enhance the training effect, and increase recognition accuracy. Experimental results on the challenging RAF-DB dataset show that the proposed model exhibits better facial expression recognition accuracy than existing techniques.
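The abstract names a focal cross-entropy loss but gives no formula; below is a minimal sketch of the standard focal-loss formulation it presumably builds on (the gamma exponent and its default of 2 are assumptions, not taken from the paper):

```python
import numpy as np

def focal_cross_entropy(probs, labels, gamma=2.0):
    """Cross-entropy scaled by (1 - p_t)^gamma so hard samples dominate training."""
    p_t = probs[np.arange(len(labels)), labels]  # predicted prob of the true class
    return float(np.mean(-(1.0 - p_t) ** gamma * np.log(p_t)))
```

With gamma = 0 this reduces to ordinary cross-entropy; larger gamma down-weights well-classified (easy) samples, matching the stated goal of focusing on difficult ones.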
Keywords: mental health; facial emotion recognition; deep learning; multiscale; loss function
3. Deep Facial Emotion Recognition Using Local Features Based on Facial Landmarks for Security System
Authors: Youngeun An, Jimin Lee, Eunsang Bak, Sungbum Pan. Computers, Materials & Continua (SCIE, EI), 2023, No. 8, pp. 1817-1832 (16 pages)
Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods extract features from the entire facial image and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distance between landmarks that change position according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. A new ensemble-network classifier is then applied to increase emotion recognition accuracy. The emotion recognition performance was compared with conventional algorithms on public databases. The results indicate that the proposed method achieves higher accuracy than traditional facial-expression-based approaches. In particular, experiments on the FER2013 database show that the proposed method is robust to lighting conditions and backgrounds, with performance on average 25% higher than previous studies. Consequently, the proposed method is expected to recognize facial expressions, especially fear and anger, and thereby help prevent severe accidents by detecting security-related or dangerous actions in advance.
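The paper's exact landmark pairs are not listed in the abstract; here is a sketch of the distance-based feature vector idea, with purely illustrative (hypothetical) pairs:

```python
import numpy as np

def landmark_distance_features(landmarks, pairs):
    """Feature vector of Euclidean distances between selected landmark pairs."""
    lm = np.asarray(landmarks, dtype=float)  # shape: (n_points, 2)
    return np.array([np.linalg.norm(lm[i] - lm[j]) for i, j in pairs])

# Hypothetical pairs, e.g. (eye corner, brow), (mouth corners), (nose tip, lip):
PAIRS = [(0, 1), (2, 3), (4, 5)]
```

As expressions change, the landmark coordinates move and the distance vector changes with them, which is what the classifier consumes.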
Keywords: facial emotion recognition; landmark-based feature extraction; ensemble network; robustness to changes in illumination and background; dangerous situation detection; accident prevention
4. The Ability to Visually Recognize Facial Emotions during the Initial Stages of Alzheimer's
Authors: Philippe Granato, Shreekumar Vinekar, Olivier Godefroy, Jean-Pierre Van Gansberghe, Raymond Bruyer. Open Journal of Psychiatry, 2020, No. 4, pp. 187-204 (18 pages)
Background: Alzheimer's sufferers (AS) are unable to visually recognize facial emotions (VRFE). However, we do not know which emotions are involved, the timeline for the onset of this loss during the natural course of the disease, or whether it correlates with other comorbid cognitive disorders. The authors therefore aimed to determine whether a deficit in facial emotion recognition is present at the onset of Alzheimer's disease, concurrently with the onset of cognitive impairment, or whether it is a prodromal syndrome preceding cognitive decline, and which emotions are involved. A secondary aim was to investigate relationships between facial emotion recognition and cognitive performance on various parameters. Method: Single-blind case-control study, set in a memory clinic. Participants: 12 patients (AS) and 12 control subjects (CS) were enrolled. Measurements: Quantitative information about the ability for facial emotion recognition was obtained from the Method of Analysis and Research on the Integration of Emotions (MARIE). The Mini Mental Status Examination (MMSE), Picture Naming, the Mattis Dementia Rating Scale (DRS), and the Grober & Buschke Free and Cued Selective Reminding Test (FCSRT) were used to measure cognitive impairment. Results: AS have a problem with the visual recognition of facial emotions, with a higher recognition threshold: they are less sensitive to the visual recognition cues of facial emotions and unable to distinguish anger from fear. This may explain some acts of aggressiveness seen in clinical and home settings in AS with behavioral disturbance. The anger-fear series was the first affected in the course of Alzheimer's. When the percentage of correct recognition is plotted on the "y" axis against the density of emotion in the stimulus images on the "x" axis, the curve is sigmoid for the control group but linear for the Alzheimer's patients. In both groups it is intuitively and theoretically expected that correct recognition will be directly proportional to the density of represented emotion in the stimulus image; this holds for CS but not for AS. MARIE processing of emotions seems to be strengthened by optimal cognitive function, and the decline of cognitive functions in AS contributes to the above-mentioned "linearization" of the graph. There is a direct positive correlation between MARIE results and performance on cognitive tests. Conclusion: Administering a combination of DRS, FCSRT, and MARIE to patients screened for possibly emerging Alzheimer's could provide a more detailed and specific approach to a definitive early diagnosis. The Alzheimer's patients found it difficult to distinguish between anger and fear.
Keywords: recognizing facial emotions; Alzheimer's disease; early diagnosis
5. Processing Environmental Stimuli in Paranoid Schizophrenia: Recognizing Facial Emotions and Performing Executive Functions (cited: 4)
Authors: YU Shao Hua, ZHU Jun Peng, XU You, ZHENG Lei Lei, CHAI Hao, HE Wei, LIU Wei Bo, LI Hui Chun, WANG Wei. Biomedical and Environmental Sciences (SCIE, CAS, CSCD), 2012, No. 6, pp. 697-705 (9 pages)
Objective: To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients. Methods: Abnormal recognition of facial expressions of emotion was assayed using the Japanese and Caucasian Facial Expressions of Emotion (JACFEE), the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales in 88 paranoid schizophrenia patients and 75 healthy volunteers. Results: Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, and displayed lower JACFEE recognition accuracy and poorer WCST performance. In patients, JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, recognition accuracy for fear was positively correlated with the positive symptom scale score, and recognition accuracy for surprise was negatively correlated with the general psychopathology score. Moreover, WCST performance predicted JACFEE recognition accuracy for contempt, disgust, and sadness in patients, and perseverative errors negatively predicted recognition accuracy for sadness in healthy volunteers. JACFEE recognition accuracy for sadness predicted WCST categories in paranoid schizophrenia patients. Conclusion: Recognition accuracy for social/moral emotions such as contempt, disgust, and sadness is related to executive function in paranoid schizophrenia patients, especially for sadness.
Keywords: executive function; Japanese and Caucasian facial expressions of emotion; paranoid schizophrenia; Wisconsin card sorting test
6. Pri-EMO: A universal perturbation method for privacy preserving facial emotion recognition (cited: 1)
Authors: Yong Zeng, Zhenyu Zhang, Jiale Liu, Jianfeng Ma, Zhihong Liu. Journal of Information and Intelligence, 2023, No. 4, pp. 330-340 (11 pages)
Facial emotions have great significance in human-computer interaction, virtual reality, and interpersonal communication. Existing methods for facial emotion privacy mainly concentrate on perturbing facial emotion images. However, cryptography-based perturbation algorithms are highly computationally expensive, and transformation-based perturbation algorithms only target specific recognition models. In this paper, we propose a universal feature-vector-based privacy-preserving perturbation algorithm for facial emotion. Our method protects facial emotion images in feature space by computing tiny perturbations and adding them to the original images. In addition, the proposed algorithm can make expression images be recognized as specific labels. Experiments show that the protection success rate of our method is above 95% while image quality degrades by no more than 0.003. The quantitative and qualitative results show that the proposed method strikes a balance between privacy and usability.
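The abstract describes adding tiny, universally shared perturbations to images; how the perturbation is computed in feature space is not given, but the application step might look like the following sketch (the eps bound is an assumption, loosely echoing the reported ≤ 0.003 quality degradation):

```python
import numpy as np

def apply_universal_perturbation(images, delta, eps=0.003):
    """Add one shared perturbation to every image, bounded to preserve quality."""
    delta = np.clip(delta, -eps, eps)          # keep the perturbation tiny
    return np.clip(images + delta, 0.0, 1.0)   # stay in the valid pixel range
```

Because delta is shared across all images, it only has to be computed once, which is what makes the method "universal" rather than per-image.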
Keywords: facial emotion recognition; privacy preserving; perturbation; universal algorithm; feature space
7. A pruning-then-quantization model compression framework for facial emotion recognition
Authors: Han Sun, Wei Shao, Tao Li, Jiayu Zhao, Weitao Xu, Linqi Song. Intelligent and Converged Networks (EI), 2023, No. 3, pp. 225-236 (12 pages)
Facial emotion recognition achieves great success with the help of large neural models but fails to be applied in practical situations due to their large model size. To bridge this gap, this paper combines two mainstream model compression methods, pruning and quantization, and proposes a pruning-then-quantization framework to compress neural models for facial emotion recognition tasks. Experiments on three datasets show that the model achieves a high compression ratio while maintaining high performance. In addition, we analyze the layer-wise compression performance of the proposed framework to explore its effect and adaptability in fine-grained modules.
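The framework's details are not in the abstract; the following is a toy sketch of the two stages on a single weight tensor, assuming magnitude pruning followed by symmetric uniform quantization (the 50% sparsity and 8-bit width are illustrative, not the paper's settings):

```python
import numpy as np

def prune_then_quantize(weights, sparsity=0.5, bits=8):
    """Zero the smallest-magnitude weights, then uniformly quantize the survivors."""
    w = np.asarray(weights, dtype=float).copy()
    thresh = np.quantile(np.abs(w), sparsity)        # magnitude cut-off (pruning)
    w[np.abs(w) < thresh] = 0.0
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)  # symmetric int range
    return np.round(w / scale) * scale, scale        # dequantized weights
```

Pruning first shrinks the set of values quantization has to represent, which is the intuition behind ordering the two stages this way.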
Keywords: model compression; facial emotion recognition; ResNet
8. A Facial Expression Emotion Recognition Based Human-robot Interaction System (cited: 5)
Authors: Zhentao Liu, Min Wu, Weihua Cao, Luefeng Chen, Jianping Xu, Ri Zhang, Mengtian Zhou, Junwei Mao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 4, pp. 668-676 (9 pages)
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions for adapting to them. A facial emotion recognition method based on 2D-Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition for robots. Robot facial expressions are represented by simple cartoon symbols displayed on an LED screen equipped on the robots, which can be easily understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized by facial expression recognition of humans and facial expression generation by robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied in home service, smart home, safe driving, and so on.
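The recognition pipeline combines 2D-Gabor filtering, uniform LBP, and an ELM classifier, none of which is detailed in the abstract. As a sketch of just the LBP stage, here is the basic 8-neighbour coding (omitting the Gabor pre-filtering and the uniform-pattern histogram mapping the paper uses):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern code for each interior pixel."""
    center = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        code |= (nb >= center).astype(np.uint8) << bit   # set bit if nb >= centre
    return code
```

Each pixel becomes an 8-bit texture code; histograms of these codes over face patches would then feed the classifier.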
Keywords: emotion generation; facial expression emotion recognition (FEER); human-robot interaction (HRI); system design
9. Changes in social emotion recognition following traumatic frontal lobe injury
Authors: Ana Teresa Martins, Luis Faísca, Francisco Esteves, Cláudia Simo, Mariline Gomes Justo, Angélica Muresan, Alexandra Reis. Neural Regeneration Research (SCIE, CAS, CSCD), 2012, No. 2, pp. 101-108 (8 pages)
Changes in social and emotional behaviour have been consistently observed in patients with traumatic brain injury. These changes are associated with emotion recognition deficits, which represent one of the major barriers to successful familial and social reintegration. In the present study, 32 patients with traumatic brain injury involving the frontal lobe and 41 age- and education-matched healthy controls were analyzed. A Go/No-Go task was designed in which each participant had to recognize faces representing three social emotions (arrogance, guilt, and jealousy). Results suggested that the ability to recognize two of these emotions (arrogance and jealousy) was significantly reduced in patients with traumatic brain injury, indicating that frontal lesions can reduce emotion recognition ability. In addition, analysis by hemispheric lesion location (right, left, or bilateral) suggested that the bilateral-lesion subgroup showed lower accuracy on all social emotions.
Keywords: traumatic brain injury; facial emotion recognition; social emotions
10. Brain Processing of Fearful Facial Expression in Mentally Disordered Offenders
Authors: Katarina Howner, Hakan Fischer, Thomas Dierks, Andrea Federspiel, Lars-Olof Wahlund, Tomas Jonsson, Maria Kristoffersen Wiberg, Marianne Kristiansson. Journal of Behavioral and Brain Science, 2011, No. 3, pp. 115-123 (9 pages)
Emotional facial expressions are important cues for interaction between people. The aim of the present study was to investigate brain function during the processing of fearful facial expressions in offenders with two psychiatric disorders that involve impaired emotional facial perception: autism spectrum disorder (ASD) and psychopathy (PSY). Fourteen offenders undergoing forensic psychiatric assessment (7 with ASD and 7 psychopathic offenders) and 12 healthy controls (HC) viewed fearful and neutral faces while undergoing functional magnetic resonance imaging (fMRI). Brain activity (fearful versus neutral faces) was compared both between HC and offenders and between the two offender groups (PSY and ASD). Functional co-activation was also investigated. Offenders showed increased activity bilaterally in the amygdala and medial cingulate cortex, as well as in the left hippocampus, during processing of fearful facial expressions compared to HC. The two offender subgroups differed from each other in five regions. Functional co-activation analysis suggested a strong correlation between the amygdala and anterior cingulate cortex (ACC) in the left hemisphere only in the PSY group. These findings suggest enhanced neural processing of fearful faces in the amygdala and other face-processing brain areas in offenders compared to HC. Moreover, co-activation between the amygdala and ACC in the PSY but not the ASD group suggests qualitative differences in amygdala activity between the two groups. Since the sample size is small, the study should be regarded as a pilot study.
Keywords: psychopathy; autism spectrum disorder; offenders; fMRI; emotional facial processing
11. Empathic Responses of Behavioral-Synchronization in Human-Agent Interaction
Authors: Sung Park, Seongeon Park, Mincheol Whang. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3761-3784 (24 pages)
Artificial entities such as virtual agents have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates the participant's behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent and then conversed with the agent about the video, for example how they felt about its content. During the dialog the virtual agent expressed emotions congruent with the video, or a neutral emotion. The participants' facial expressions, such as facial expressive intensity and facial muscle movement, were measured with a camera during the dialog. The results showed significant behavioral synchronization (i.e., cosine similarity ≥ .05) in both the negative and positive emotion conditions, evident in the participants' facial mimicry with the virtual agent. Additionally, the participants' facial expressions, in both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral one. In particular, we found that the facial muscle intensity of AU45 (blink) is an effective index for assessing synchronization, differing by the individual's empathic capability (low, mid, high). Based on the results, we suggest an appraisal criterion that provides empirical conditions to validate empathic interaction based on facial expression measures.
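The synchronization index in the abstract is a cosine similarity over facial-expression measurements; the exact feature pipeline is not given, but the index itself can be sketched as follows (the action-unit-intensity time-series input is an assumption):

```python
import numpy as np

def behavioral_synchrony(user_series, agent_series):
    """Cosine similarity between two facial feature time series (e.g., AU intensities)."""
    u = np.ravel(np.asarray(user_series, dtype=float))
    a = np.ravel(np.asarray(agent_series, dtype=float))
    return float(u @ a / (np.linalg.norm(u) * np.linalg.norm(a)))
```

Under the abstract's criterion, a value of at least .05 would count as significant synchronization between participant and agent.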
Keywords: facial emotion recognition; facial expression; virtual agent; virtual human; embodied conversational agent; empathy; human-computer interaction
12. A Fiber Tractography Study of Social-Emotional Related Fiber Tracts in Children and Adolescents with Autism Spectrum Disorder (cited: 5)
Authors: Yun Li, Hui Fang, Wenming Zheng, Lu Qian, Yunhua Xiao, Qiaorong Wu, Chen Chang, Chaoyong Xiao, Kangkang Chu, Xiaoyan Ke. Neuroscience Bulletin (SCIE, CAS, CSCD), 2017, No. 6, pp. 722-730 (9 pages)
The symptoms of autism spectrum disorder (ASD) have been hypothesized to be caused by changes in brain connectivity. From the clinical perspective, the "disconnectivity" hypothesis has been used to explain characteristic impairments in "socio-emotional" function. Therefore, in this study we compared facial emotional recognition (FER) and the integrity of social-emotional-related white-matter tracts between children and adolescents with high-functioning ASD (HFA) and their typically developing (TD) counterparts. The correlation between the two factors was explored to determine whether impairment of the white-matter tracts is the neural basis of social-emotional disorders. Compared with the TD group, FER was significantly impaired and the fractional anisotropy value of the right cingulate fasciculus was increased in the HFA group (P < 0.01). In conclusion, the FER function of children and adolescents with HFA was impaired and the microstructure of the cingulate fasciculus showed abnormalities.
Keywords: autism spectrum disorder; facial emotional recognition; social-emotional related white-matter fiber tracts; diffusion tensor imaging; tractography
13. Expression Analysis Based on Face Regions in Real-world Conditions (cited: 3)
Authors: Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang, Ming-Yue Niu. International Journal of Automation and Computing (EI, CSCD), 2020, No. 1, pp. 96-107 (12 pages)
Facial emotion recognition is an essential and important aspect of human-machine interaction. Past research on facial emotion recognition has focused on the laboratory environment. However, it faces many challenges in real-world conditions, i.e., illumination changes, large pose variations, and partial or full occlusions, which leave different face areas with different degrees of sharpness and completeness. Inspired by this fact, we focus on the authenticity of predictions generated by different <emotion, region> pairs. For example, if only the mouth area is available and the emotion classifier predicts happiness, there is a question of how to judge the authenticity of the prediction. This problem can be converted into the contribution of different face areas to different emotions. In this paper, we divide the whole face into six areas: nose, mouth, eyes, nose-to-mouth, nose-to-eyes, and mouth-to-eyes. To obtain more convincing results, our experiments are conducted on three databases: Facial Expression Recognition+ (FER+), the Real-world Affective Faces Database (RAF-DB), and the Expression in-the-Wild (ExpW) dataset. Through analysis of classification accuracy, the confusion matrix, and the class activation map (CAM), we establish convincing results. The contributions of this paper lie in two areas: 1) we visualize the face areas attended to in emotion recognition; 2) we analyze the contribution of different face areas to different emotions in real-world conditions through experimental analysis. Our findings can be combined with findings in psychology to promote the understanding of emotional expressions.
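The six face areas are named in the abstract but their pixel extents are not; the following sketch shows region cropping on an aligned face, with boxes that are purely illustrative assumptions rather than the paper's boundaries:

```python
import numpy as np

# Illustrative (top, bottom, left, right) boxes on an aligned 96x96 face crop;
# the paper's actual region boundaries are not given in the abstract.
REGIONS = {
    "eyes":          (20, 45, 10, 86),
    "nose":          (35, 65, 30, 66),
    "mouth":         (60, 90, 22, 74),
    "nose_to_mouth": (35, 90, 22, 74),
    "nose_to_eyes":  (20, 65, 10, 86),
    "mouth_to_eyes": (20, 90, 10, 86),
}

def crop_region(face, name):
    """Return one of the six face areas as a sub-image."""
    t, b, l, r = REGIONS[name]
    return face[t:b, l:r]
```

Running the same classifier on each crop yields the per-<emotion, region> predictions whose contributions the paper analyzes.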
Keywords: facial emotion analysis; face areas; class activation map; confusion matrix; concerned area