Journal Articles
68 articles found
Probing the processing of facial expressions in monkeys via time perception and eye tracking (Cited by: 1)
1
Authors: Xin-He Liu, Lu Gan (+2 more authors), Zhi-Ting Zhang, Pan-Ke Yu, Ji Dai. Zoological Research (SCIE, CSCD), 2023, No. 5, pp. 882-893 (12 pages)
Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and sophisticated behavioral tasks, namely the temporal discrimination task (TDT) and face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions in monkeys, while showing longer reaction times to negative facial expressions in humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, while human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys only showed bias toward emotional expressions upon observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, thus shedding new light on inter-species communication through facial expressions between NHPs and humans.
Keywords: monkey; facial expression; time perception; eye-tracking; pupil size
A Modified CNN Network for Automatic Pain Identification Using Facial Expressions (Cited by: 1)
2
Authors: Ioannis Karamitsos, Ilham Seladji, Sanjay Modak. Journal of Software Engineering and Applications, 2021, No. 8, pp. 400-417 (18 pages)
Pain is a strong symptom of diseases. Being an involuntary unpleasant feeling, it can be considered a reliable indicator of health issues. Pain has always been expressed verbally, but in some cases traditional patient self-reporting is not efficient. On one side, there are patients who have neurological disorders and cannot express themselves accurately, as well as patients who suddenly lose consciousness due to an abrupt faintness. On another side, medical staff working in crowded hospitals need to focus on emergencies and would opt for automating the task of looking after hospitalized patients during their entire stay, in order to notice any pain-related emergency. These issues can be tackled with deep learning. Knowing that pain is generally followed by spontaneous facial behaviors, facial expressions can be used as a substitute for verbal reporting to express pain. In this paper, a convolutional neural network (CNN) model was built and trained to detect pain through patients' facial expressions, using the UNBC-McMaster Shoulder Pain dataset. First, faces were detected from images using the Haar cascade frontal face detector provided by OpenCV, and preprocessed through grayscaling, histogram equalization, face detection, image cropping, mean filtering, and normalization. Next, the preprocessed images were fed into a CNN model built on a modified version of the VGG16 architecture. The model was finally evaluated and fine-tuned in a continuous way based on its accuracy, which reached 92.5%.
Keywords: CNN; computer vision; facial expressions; image processing; pain assessment
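A minimal sketch of the face-detection and preprocessing pipeline this abstract describes, using OpenCV's bundled Haar cascade; the kernel size, 224x224 target resolution, and file handling are illustrative assumptions rather than values from the paper:

```python
import cv2
import numpy as np

def preprocess_face(image_path, target_size=(224, 224)):
    """Detect a face and apply the steps listed in the abstract:
    grayscaling, histogram equalization, cropping, mean filtering, normalization."""
    # Haar cascade frontal face detector shipped with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # grayscaling
    gray = cv2.equalizeHist(gray)                     # histogram equalization

    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]                     # image cropping
    face = cv2.blur(face, (3, 3))                     # mean (box) filtering
    face = cv2.resize(face, target_size)
    return face.astype(np.float32) / 255.0            # normalization to [0, 1]
```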
On the Importance of Bodily Gestures, Facial Expressions, and Intonations to Thinking Expression and Interpretation
3
Authors: Shi Tong, Jiang Hongxia. Overseas English, 2021, No. 17, pp. 288-289 (2 pages)
Bodily gestures, facial expressions, and intonations are argued to be notably important features of spoken language as opposed to written language. Bodily gestures, with or without spoken words, can influence the clarity and density of expression and the involvement of listeners. Facial expressions, whether or not they correspond with the exact thought, can be "decoded" and thus influence the intelligibility of expression. Intonation can always reflect the mutual beliefs concerning the propositional content and the states of consciousness relating to expression and interpretation. Therefore, these features can considerably improve or diminish the accuracy with which thought is expressed and interpreted.
Keywords: bodily gestures; facial expressions; intonations; thought
EEG Mapping of Cortical Activation Related to Emotional Stroop with Facial Expressions: A TREFACE Study
4
Authors: Edward Prada, Maria C. H. Tavares (+4 more authors), Ana Garcia, Corina Satler, Lia Martinez, Cândida H. L. Alves, Carlos Tomaz. Journal of Behavioral and Brain Science (CAS), 2022, No. 10, pp. 514-532 (19 pages)
TREFACE (Test for Recognition of Facial Expressions with Emotional Conflict) is a computerized model for investigating the emotional factor in executive functions, based on the Stroop paradigm, for the recognition of emotional expressions in human faces. To investigate the influence of the emotional component at the cortical level, electroencephalographic (EEG) recording was used to measure the involvement of cortical areas during the execution of the tasks. Thirty Brazilian native Portuguese-speaking graduate students were evaluated on their anxiety and depression levels and on their well-being at the time of the session. The EEG was recorded from 19 channels during execution of the TREFACE test in the three stages established by the model (guided training, reading, and recognition), both under the congruent condition, when the image corresponds to the word shown, and the incongruent condition, when there is no correspondence. The results showed better performance in the reading stage and under congruent conditions, and greater intensity of cortical activation in the recognition stage and under incongruent conditions. In a complementary way, specific frontal activations were observed: intense theta-frequency activation in the left extension, representing frontal recruitment of posterior regions in information processing, and alpha-frequency activation along the right frontotemporal line, illustrating executive processing in the control of attention, in addition to the dorsal manifestation of the prefrontal side for emotional performance. Activations in beta and gamma frequencies were distributed more intensely in the recognition stage. The results of this mapping of cortical activity can help to understand how words and images of faces can be regulated in everyday life and in clinical contexts, suggesting an integrated model that includes the neural bases of the regulation strategy.
Keywords: EEG; emotion; facial expressions; executive functions; Stroop; TREFACE
Generation of Performance-Driven Facial Expressions
5
Authors: GOU Ye, WANG Xiao-kan. Computer Aided Drafting, Design and Manufacturing, 2009, No. 2, pp. 56-62 (7 pages)
Coordinates of the key facial feature points can be captured by the motion capture system OPTOTRAK in real time and with high accuracy. The facial model is considered as an undirected weighted graph. By iteratively subdividing the related triangle edges, the geodesic distance between points on the model surface is obtained. An RBF (radial basis function) interpolation technique based on geodesic distance is applied to generate deformation of the facial mesh model. Experimental results demonstrate that the geodesic distance can handle the complex topology of human face models well and that the method can generate realistic facial expressions.
Keywords: facial expression; performance-driven; RBF; geodesic distance
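The geodesic-distance RBF deformation step can be sketched as follows; this assumes the geodesic distance matrices have already been computed on the mesh graph and uses a Gaussian kernel, since the abstract does not name the basis function:

```python
import numpy as np

def rbf_deform(d_cc, d_vc, control_displacements, sigma=10.0):
    """Geodesic-distance RBF interpolation of facial mesh deformation.

    d_cc : (m, m) geodesic distances between the m control (feature) points
    d_vc : (n, m) geodesic distances from the n mesh vertices to the controls
    control_displacements : (m, 3) captured displacements of the feature points
    """
    phi = lambda d: np.exp(-(d / sigma) ** 2)   # Gaussian RBF kernel (assumed)
    weights = np.linalg.solve(phi(d_cc), control_displacements)   # (m, 3)
    return phi(d_vc) @ weights                  # (n, 3) per-vertex displacements

# Usage: deformed_vertices = vertices + rbf_deform(d_cc, d_vc, displacements)
```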
The use of facial expressions in measuring students' interaction with distance learning environments during the COVID-19 crisis
6
Authors: Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou. Visual Informatics (EI), 2023, No. 1, pp. 1-17 (17 pages)
Digital learning is becoming increasingly important during the COVID-19 crisis and is widespread in most countries. The proliferation of smart devices and 5G telecommunication systems is contributing to the development of digital learning systems as an alternative to traditional learning systems. Digital learning includes blended learning, online learning, and personalized learning, which mainly depend on the use of new technologies and strategies, so digital learning is widely developed to improve education and combat emerging disasters such as the COVID-19 disease. Despite the tremendous benefits of digital learning, there are many obstacles related to the lack of digitized curricula and of collaboration between teachers and students. Therefore, many attempts have been made to improve learning outcomes through the following strategies: collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rates are used to measure the effectiveness of digital learning systems and the level of learners' engagement in learning environments. The results show that the proposed approach outperforms related work in terms of learning effectiveness. The results of this research can be used to develop digital learning environments.
Keywords: e-learning; COVID-19; face-to-face learning; facial expressions; heart pulse
How Facial Expressions of Recipients Influence Online Prosocial Behaviors? Evidence from Big Data Analysis on Tencent Gongyi Platform
7
Authors: Lihan He, Tianguang Meng. Journal of Social Computing (EI), 2023, No. 4, pp. 337-356 (20 pages)
Cyberspace has significantly influenced people's perceptions of social interaction and communication. As a result, the conventional theories of kin selection and reciprocal altruism fall short of completely elucidating online prosocial behavior. Based on the social information processing model, we propose an analytical framework to explain donation behaviors on online platforms. By collecting textual and visual data on disease-relief projects from the Tencent Gongyi platform, and employing techniques encompassing text analysis, image analysis, and propensity score matching, we investigate the impact of both internal emotional cues and external contextual cues on donation behaviors. We find that positive emotions tend to attract a larger number of donations, while negative emotions tend to result in higher per capita donation amounts. Furthermore, these effects manifest differently under distinct external contextual conditions.
Keywords: online prosocial behavior; donation behavior; facial expression; big data; image analysis
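Of the techniques listed, propensity score matching lends itself to a compact sketch; the covariates and the treatment indicator below (e.g., whether a project's cover photo shows a positive expression) are placeholders rather than the study's actual variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_score_match(X, treated):
    """1:1 nearest-neighbour matching on the estimated propensity score.

    X       : (n, k) covariate matrix (placeholder project-level features)
    treated : (n,) boolean array, e.g. True if the cover photo shows a smile
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    treated_idx = np.where(treated)[0]
    control_idx = np.where(~treated)[0]

    nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
    _, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
    matched_controls = control_idx[matches.ravel()]
    return treated_idx, matched_controls   # compare donation outcomes across pairs
```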
Facial Expression Recognition Based on Multi-Channel Attention Residual Network (Cited by: 1)
8
Authors: Tongping Shen, Huanqing Xu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 4, pp. 539-560 (22 pages)
To address the problems of complex model structure and too many training parameters in facial expression recognition algorithms, we propose a residual network structure with a multi-headed channel attention (MCA) module. A transfer learning algorithm is used to pre-train the convolutional layer parameters and mitigate the overfitting caused by an insufficient number of training samples. The designed MCA module is integrated into the ResNet18 backbone network. The attention mechanism highlights important information and suppresses irrelevant information by assigning different coefficients or weights, and the multi-head structure focuses more on the local features of the pictures, which improves the efficiency of facial expression recognition. Experimental results demonstrate that the proposed model achieves excellent recognition results on the Fer2013, CK+, and JAFFE datasets, with accuracy rates of 72.7%, 98.8%, and 93.33%, respectively.
Keywords: facial expression recognition; channel attention; ResNet18; dataset
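The abstract does not spell out the MCA design, so the sketch below shows one plausible multi-headed, squeeze-and-excitation-style channel attention block attached to a torchvision ResNet18; the head count, reduction ratio, and pretrained-weights flag are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiHeadChannelAttention(nn.Module):
    """Channel attention split into several heads; each head reweights its
    own slice of channels (an illustrative squeeze-and-excitation variant)."""
    def __init__(self, channels, heads=4, reduction=8):
        super().__init__()
        assert channels % heads == 0
        self.heads, self.ch = heads, channels // heads
        self.fc = nn.ModuleList([
            nn.Sequential(nn.Linear(self.ch, self.ch // reduction),
                          nn.ReLU(inplace=True),
                          nn.Linear(self.ch // reduction, self.ch),
                          nn.Sigmoid())
            for _ in range(heads)])

    def forward(self, x):
        b, c, _, _ = x.shape
        squeezed = x.mean(dim=(2, 3)).view(b, self.heads, self.ch)   # global average pool
        weights = torch.stack(
            [head(squeezed[:, i]) for i, head in enumerate(self.fc)], dim=1)
        return x * weights.view(b, c, 1, 1)

# Attach the module after the last residual stage of a pretrained ResNet18
backbone = resnet18(weights="IMAGENET1K_V1")   # assumes torchvision >= 0.13
backbone.layer4.add_module("mca", MultiHeadChannelAttention(512))
backbone.fc = nn.Linear(512, 7)                # seven basic expressions
```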
Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System
9
Authors: Saiyed Umer, Ranjeet Kumar Rout (+3 more authors), Shailendra Tiwari, Ahmad Ali AlZubi, Jazem Mutared Alanazi, Kulakov Yurii. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 5, pp. 1165-1185 (21 pages)
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision for the seven basic classes of facial expressions, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is used to control a music player using the recognized emotions of the users.
Keywords: deep learning; facial expression; emotions; recognition; CNN
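A common way to realize the decision-level fusion described here is to average the softmax outputs of the individually trained CNNs; the sketch below assumes that rule, which the abstract itself does not specify:

```python
import torch

EXPRESSIONS = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

@torch.no_grad()
def fused_prediction(models, face_batch):
    """Decision-level fusion: average the per-model softmax probabilities
    and take the arg-max class (one common fusion rule, assumed here)."""
    probs = [torch.softmax(m(face_batch), dim=1) for m in models]
    avg = torch.stack(probs).mean(dim=0)          # (batch, 7)
    return [EXPRESSIONS[i] for i in avg.argmax(dim=1).tolist()]
```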
The deep spatiotemporal network with dual-flow fusion for video-oriented facial expression recognition
10
Authors: Chenquan Gan, Jinhui Yao (+2 more authors), Shuaiying Ma, Zufan Zhang, Lianxiang Zhu. Digital Communications and Networks (SCIE, CSCD), 2023, No. 6, pp. 1441-1447 (7 pages)
Video-oriented facial expression recognition has always been an important issue in emotion perception. At present, the key challenge in most existing methods is how to effectively extract robust features to characterize the facial appearance and geometry changes caused by facial motions. On this basis, the video in this paper is divided into multiple segments, each of which is simultaneously described by optical flow and a facial landmark trajectory. To deeply mine the emotional information in these two representations, we propose a Deep Spatiotemporal Network with Dual-flow Fusion (DSN-DF), which highlights the region and strength of expressions through spatiotemporal appearance features and the speed of change through spatiotemporal geometry features. Finally, experiments are conducted on the CK+ and MMI datasets to demonstrate the superiority of the proposed method.
Keywords: facial expression recognition; deep spatiotemporal network; optical flow; facial landmark trajectory; dual-flow fusion
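As an illustration of the appearance stream's input, dense optical flow between consecutive frames can be computed with OpenCV's Farneback method; the abstract does not state which optical-flow algorithm the authors used, so this is an assumed stand-in:

```python
import cv2
import numpy as np

def segment_optical_flow(frames):
    """Dense optical flow for one video segment: returns a list of (H, W, 2)
    flow fields between consecutive grayscale frames."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    flows = []
    for prev, nxt in zip(gray[:-1], gray[1:]):
        # args: prev, next, flow, pyr_scale, levels, winsize, iterations,
        #       poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.astype(np.float32))
    return flows
```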
MDNN: Predicting Student Engagement via Gaze Direction and Facial Expression in Collaborative Learning
11
Authors: Yi Chen, Jin Zhou (+2 more authors), Qianting Gao, Jing Gao, Wei Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 7, pp. 381-401 (21 pages)
Prediction of students' engagement in a collaborative learning setting is essential to improve the quality of learning. Collaborative learning is a strategy of learning through groups or teams. When cooperative learning behavior occurs, each student in the group should participate in the teaching activities. Researchers have shown that students who are actively involved in a class gain more. Gaze behavior and facial expression are important nonverbal indicators of engagement in collaborative learning environments. Previous studies require the wearing of sensor devices or eye-tracker devices, which impose cost barriers and technical interference on daily teaching practice. In this paper, student engagement is automatically analyzed based on computer vision. We tackle the problem of engagement in collaborative learning using a multi-modal deep neural network (MDNN). We combine facial expression and gaze direction as the two individual components of the MDNN to predict engagement levels in collaborative learning environments. Our multi-modal solution was evaluated in a real collaborative environment. The results show that the model can accurately predict students' performance in the collaborative learning environment.
Keywords: engagement; facial expression; deep network; gaze
Facial Expression Recognition Model Depending on Optimized Support Vector Machine
12
Authors: Amel Ali Alhussan, Fatma M. Talaat (+4 more authors), El-Sayed M. El-kenawy, Abdelaziz A. Abdelhamid, Abdelhameed Ibrahim, Doaa Sami Khafaga, Mona Alnaggar. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 499-515 (17 pages)
In computer vision, emotion recognition using facial expression images is considered an important research issue. Deep learning advances in recent years have aided in attaining improved results in this area. According to recent studies, multiple facial expressions may be included in facial photographs representing a particular type of emotion. It is feasible and useful to convert face photos into collections of visual words and carry out global expression recognition. The main contribution of this paper is to propose a facial expression recognition model (FERM) based on an optimized support vector machine (SVM). To test the performance of the proposed model, AffectNet is used. AffectNet used 1250 emotion-related keywords in six different languages to query three major search engines and collect over 1,000,000 facial photos online. The FERM is composed of three main phases: (i) the data preparation phase, (ii) applying grid search for optimization, and (iii) the categorization phase. Linear discriminant analysis (LDA) is used to categorize the data into eight labels (neutral, happy, sad, surprised, fear, disgust, angry, and contempt). Owing to the use of LDA, the performance of categorization via SVM is clearly enhanced. Grid search is used to find the optimal values for the SVM hyperparameters (C and gamma). The proposed optimized SVM algorithm achieved an accuracy of 99% and a 98% F1 score.
Keywords: facial expression recognition; machine learning; linear discriminant analysis (LDA); support vector machine (SVM); grid search
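The LDA-plus-grid-searched-SVM pipeline maps naturally onto scikit-learn; the grid values below are illustrative, not the ranges reported in the paper:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def build_ferm():
    """LDA projection followed by an RBF SVM whose C and gamma are grid-searched."""
    pipe = Pipeline([
        ("lda", LinearDiscriminantAnalysis(n_components=7)),  # 8 classes -> at most 7 components
        ("svm", SVC(kernel="rbf")),
    ])
    grid = {"svm__C": [0.1, 1, 10, 100],           # example grid, not the paper's values
            "svm__gamma": [1e-3, 1e-2, 1e-1, 1]}
    return GridSearchCV(pipe, grid, cv=5, scoring="f1_macro")

# Usage: model = build_ferm().fit(X_train, y_train); print(model.best_params_)
```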
Landmarks-Driven Triplet Representation for Facial Expression Similarity
13
Authors: Zhou Yirun, Feng Xiangyang, Zhu Ming. Journal of Donghua University (English Edition) (CAS), 2023, No. 1, pp. 34-44 (11 pages)
Facial landmarks can provide valuable information for expression-related tasks. However, most approaches only use landmarks for segmentation preprocessing or directly input them into the neural network through full connection. Such a simple combination not only fails to pass spatial information to the network, but also increases the amount of computation. The method proposed in this paper integrates a facial landmarks-driven representation into a triplet network. The spatial information provided by the landmarks is introduced into the feature extraction process so that the model can better capture location relationships. In addition, coordinate information is also integrated into the triplet loss calculation to further enhance similarity prediction. Specifically, for each image, the coordinates of 68 landmarks are detected, and a region attention map based on these landmarks is generated. The feature map output by a shallow convolutional layer is multiplied by the attention map to correct the feature activation, strengthening key regions and weakening unimportant regions. Finally, the optimized embedding output can be used for downstream tasks. The three embeddings output by the network for three images can be regarded as a triplet representation for similarity computation. The effectiveness of the optimized feature extraction is verified on the CK+ dataset and then applied to facial expression similarity tasks. The results on the facial expression comparison (FEC) dataset show that the accuracy is significantly improved after the landmark information is introduced.
Keywords: facial expression similarity; facial landmark; triplet network; attention mechanism; feature optimization
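One way to turn 68 detected landmarks into the region attention map described above is to place a Gaussian bump at each landmark and reweight the shallow feature map with the result; the map size, image size, and bandwidth below are assumptions, not the paper's settings:

```python
import numpy as np

def landmark_attention_map(landmarks, map_size=(56, 56), img_size=(224, 224), sigma=3.0):
    """Soft region-attention map built from 68 facial landmarks: a Gaussian bump
    is placed at each landmark position, rescaled to the feature-map grid."""
    h, w = map_size
    ys, xs = np.mgrid[0:h, 0:w]
    attn = np.zeros(map_size, dtype=np.float32)
    for lx, ly in landmarks:                                    # landmark coords in image space
        cx, cy = lx * w / img_size[1], ly * h / img_size[0]     # rescale to feature-map grid
        bump = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        attn = np.maximum(attn, bump)
    return attn

# feature_map: (C, 56, 56) output of a shallow conv layer
# corrected = feature_map * landmark_attention_map(landmarks)[None, :, :]
```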
Earthworm Optimization with Improved SqueezeNet Enabled Facial Expression Recognition Model
14
Authors: N. Sharmili, Saud Yonbawi (+5 more authors), Sultan Alahmari, E. Laxmi Lydia, Mohamad Khairi Ishak, Hend Khalid Alkahtani, Ayman Aljarbouh, Samih M. Mostafa. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2247-2262 (16 pages)
Facial expression recognition (FER) remains a hot research area among computer vision researchers and is still challenging because of high intra-class variation. Conventional techniques for this problem depend on hand-crafted features, namely LBP, SIFT, and HOG, along with a classifier trained on a database of videos or images. Many perform well on image datasets captured under controlled conditions; however, they do not perform as well on more challenging datasets with partial faces and image variation. Recently, many studies have presented end-to-end structures for facial expression recognition using DL methods. Therefore, this study develops an earthworm optimization with improved SqueezeNet-based FER (EWOISN-FER) model. The presented EWOISN-FER model primarily applies the contrast-limited adaptive histogram equalization (CLAHE) technique as a pre-processing step. In addition, the improved SqueezeNet model is exploited to derive an optimal set of feature vectors, and the hyperparameter tuning process is performed by a stochastic gradient boosting (SGB) model. Finally, EWO with a sparse autoencoder (SAE) is employed for the FER process, with the EWO algorithm appropriately choosing the SAE parameters. A wide-ranging experimental analysis is carried out to examine the performance of the proposed model. The experimental outcomes indicate the supremacy of the presented EWOISN-FER technique.
Keywords: facial expression recognition; deep learning; computer vision; earthworm optimization; hyperparameter optimization
Hybrid Convolutional Neural Network and Long Short-Term Memory Approach for Facial Expression Recognition
15
Authors: M. N. Kavitha, A. Rajiv Kannan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 1, pp. 689-704 (16 pages)
Facial expression recognition (FER) has been an important field of research for several decades. Extraction of emotional characteristics is crucial to FER, but is complex to process because of significant intra-class variance. Facial characteristics have not been completely explored in static pictures. Previous studies used convolutional neural networks (CNNs) based on transfer learning and hyperparameter optimization for static facial emotion recognition. Particle swarm optimization (PSO) has also been used for tuning hyperparameters. However, these methods achieve about 92 percent accuracy. The existing algorithms have issues with FER accuracy and precision, so overall FER performance is degraded significantly. To address this issue, this work proposes a combination of CNNs and long short-term memories (LSTMs) called the HCNN-LSTM (hybrid CNN and LSTM) approach for FER. The work is evaluated on the benchmark dataset Facial Expression Recog Image Ver (FERC). Viola-Jones (VJ) algorithms recognize faces from preprocessed images, followed by HCNN-LSTM feature extraction and FER classification. Further, the success rate of deep learning techniques (DLTs) has increased with hyperparameter tuning of epochs, batch sizes, initial learning rates, regularization parameters, shuffling types, and momentum. This work uses improved weight-based whale optimization algorithms (IWWOAs) to select near-optimal settings for these parameters using best fitness values. The experimental findings demonstrate that the proposed HCNN-LSTM system outperforms existing methods.
Keywords: facial expression recognition; Gaussian filter; hyperparameter optimization; improved weight-based whale optimization algorithm; deep learning (DL)
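The abstract does not describe the HCNN-LSTM layer configuration, so the sketch below shows one common coupling: convolutional features of a face image are read row by row as a sequence and summarized by an LSTM; all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Illustrative CNN+LSTM coupling for FER: rows of the CNN feature map
    form a sequence that an LSTM summarizes before classification."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.lstm = nn.LSTM(input_size=64 * 12, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):                # x: (batch, 1, 48, 48) grayscale faces
        feat = self.cnn(x)               # (batch, 64, 12, 12)
        b, c, h, w = feat.shape
        seq = feat.permute(0, 2, 1, 3).reshape(b, h, c * w)   # one step per row
        _, (hidden, _) = self.lstm(seq)
        return self.fc(hidden[-1])       # expression logits

# Usage: logits = HybridCNNLSTM()(torch.randn(8, 1, 48, 48))
```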
Facial Emotion Recognition Using Swarm Optimized Multi-Dimensional DeepNets with Losses Calculated by Cross Entropy Function
16
Authors: A. N. Arun, P. Maheswaravenkatesh, T. Jayasankar. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 3285-3301 (17 pages)
The human face forms a canvas on which various non-verbal expressions are communicated. These expressional cues, together with verbal communication, represent the accurate perception of actual intent. In many cases, a person may present an outward expression that differs from the genuine emotion or feeling that the person experiences. Even when people try to hide these emotions, the real emotions that are internally felt may be reflected as facial expressions in the form of micro expressions. These micro expressions cannot be masked and reflect the actual emotional state of a person under study. Such micro expressions are on display for a tiny time frame, making it difficult for a typical person to spot and recognize them. This necessitates machine learning, where machines can be trained to look for these micro expressions and categorize them once they are on display. The study's primary purpose is to spot and correctly classify these micro expressions, which are very difficult for a casual observer to identify. This research improves recognition accuracy by using a novel learning technique that not only captures and recognizes multimodal facial micro expressions but also aligns, crops, and superimposes the feature frames to produce highly accurate and consistent results. A modified variant of the convolutional neural network deep learning architecture, combined with the swarm-based artificial bee colony optimization algorithm, is proposed and achieves an accuracy of more than 85% in identifying and classifying these micro expressions, in contrast to other algorithms with relatively lower accuracy. One of the main aspects of processing these expressions from video or live feeds is aligning the frames homographically and identifying these concise bursts of micro expressions, which significantly increases the accuracy of the outcomes. The proposed swarm-based technique precisely aligns and crops the subsequent frames, resulting in much superior detection rates for micro expressions when they are on display.
Keywords: facial micro expression recognition; deep learning; CNN; artificial bee colony
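The homographic frame alignment mentioned in the abstract can be sketched with ORB feature matching and RANSAC in OpenCV; the paper's actual alignment procedure may differ:

```python
import cv2
import numpy as np

def align_to_reference(frame, reference):
    """Warp `frame` onto `reference` with a RANSAC-estimated homography."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(frame, None)
    kp2, des2 = orb.detectAndCompute(reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```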
Optimizing Facial Expression Recognition through Effective Preprocessing Techniques
17
Authors: Lakshminarayanan Meena, Thambusamy Velmurugan. Journal of Computer and Communications, 2023, No. 12, pp. 86-101 (16 pages)
Analyzing human facial expressions using machine vision systems is a challenging yet fascinating problem in the fields of computer vision and artificial intelligence. Facial expressions are a primary means through which humans convey emotions, making their automated recognition valuable for applications including human-computer interaction, affective computing, and psychological research. Pre-processing techniques are applied to every image with the aim of standardizing the images; frequently used techniques include scaling, blurring, rotating, altering the contour of the image, conversion to grayscale, and normalization. This is followed by feature extraction, and traditional classifiers are then applied to infer facial expressions. Improving the performance of such a system is difficult in the typical machine learning approach because the feature extraction and classification phases are separate, whereas in deep neural networks (DNNs) the two phases are combined into one. Therefore, convolutional neural network (CNN) models give better accuracy in facial expression recognition than traditional classifiers, although CNN performance is still hampered by noisy and deviating images in the dataset. Motivated by these drawbacks, this work studies the use of image pre-processing techniques, such as resizing, grayscale conversion, and normalization, to enhance the performance of deep learning methods for facial expression recognition. The research aims to recognize emotions using deep learning and to show the influence of data pre-processing on the further processing of images. The accuracy obtained with each pre-processing method is compared, combinations of methods are analyzed, and appropriate pre-processing techniques are identified and implemented to observe the variability of accuracy in predicting facial expressions.
Keywords: facial expression recognition; preprocessing techniques; normalization; convolutional neural network (CNN); deep neural networks (DNN)
Processing Environmental Stimuli in Paranoid Schizophrenia: Recognizing Facial Emotions and Performing Executive Functions (Cited by: 3)
18
Authors: YU Shao Hua, ZHU Jun Peng (+6 more authors), XU You, ZHENG Lei Lei, CHAI Hao, HE Wei, LIU Wei Bo, LI Hui Chun, WANG Wei. Biomedical and Environmental Sciences (SCIE, CAS, CSCD), 2012, No. 6, pp. 697-705 (9 pages)
Objective: To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients. Methods: Abnormal recognition of facial expressions of emotion was assayed according to the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) set, the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales, respectively, in 88 paranoid schizophrenia patients and 75 healthy volunteers. Results: Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, displayed lower JACFEE recognition accuracies, and performed more poorly on the WCST. In patients, the JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, while the recognition accuracy for fear was positively correlated with the positive symptom scale score and the recognition accuracy for surprise was negatively correlated with the general psychopathology score. Moreover, WCST performance could predict the JACFEE recognition accuracy for contempt, disgust, and sadness in patients, and perseverative errors negatively predicted the recognition accuracy for sadness in healthy volunteers. The JACFEE recognition accuracy for sadness could predict the WCST categories in paranoid schizophrenia patients. Conclusion: Recognition accuracy of social/moral emotions, such as contempt, disgust, and sadness, is related to executive function in paranoid schizophrenia patients, especially regarding sadness.
Keywords: executive function; Japanese and Caucasian facial expressions of emotion; paranoid schizophrenia; Wisconsin card sorting test
Postoperative accurate pain assessment of children and artificial intelligence: A medical hypothesis and planned study
19
Authors: Jian-Ming Yue, Qi Wang (+1 more author), Bin Liu, Leng Zhou. World Journal of Clinical Cases (SCIE), 2024, No. 4, pp. 681-687 (7 pages)
Although pediatric perioperative pain management has improved in recent years, valid and reliable pain assessment in the perioperative period of children remains a challenging task. Pediatric perioperative pain management is intractable not only because children cannot express their emotions accurately and objectively, owing to their inability to describe physiological characteristics of feelings that differ from those of adults, but also because there is a lack of effective and specific assessment tools for children. In addition, exposure to repeated painful stimuli early in life is known to have short- and long-term adverse sequelae. The short-term sequelae can induce a series of neurological, endocrine, and cardiovascular stress responses related to psychological trauma, while the long-term sequelae may alter the brain maturation process, which can impair neurodevelopmental, behavioral, and cognitive function. Children's facial expressions largely reflect the degree of pain, which has led to the development of a number of pain scoring tools that, if studied continually and in depth, will help improve the quality of pain management in children. Artificial intelligence (AI) technology, represented by machine learning, has reached an unprecedented level in the image processing of deep facial models through deep convolutional neural networks, which can effectively identify and systematically analyze various subtle features of children's facial expressions. Based on the construction of a large database of images of facial expressions in children with perioperative pain, this study proposes to develop and apply automatic facial pain expression recognition software using AI technology. The study aims to improve postoperative pain management for the pediatric population and the short-term and long-term quality of life of pediatric patients after operative events.
Keywords: pediatric; perioperative pain; assessment tool; facial expression; machine learning; artificial intelligence
Robust facial expression recognition system in higher poses
20
Authors: Ebenezer Owusu, Justice Kwame Appati, Percy Okae. Visual Computing for Industry, Biomedicine, and Art (EI), 2022, No. 1, pp. 159-173 (15 pages)
Facial expression recognition (FER) has numerous applications in computer security, neuroscience, psychology, and engineering. Owing to its non-intrusiveness, it is considered a useful technology for combating crime. However, FER is plagued by several challenges, the most serious of which is its poor prediction accuracy in severe head poses. The aim of this study, therefore, is to improve recognition accuracy in severe head poses by proposing a robust 3D head-tracking algorithm based on an ellipsoidal model, an advanced ensemble of AdaBoost, and a saturated vector machine (SVM). The FER features are tracked from one frame to the next using the ellipsoidal tracking model, and the visible expressive facial key points are extracted using Gabor filters. The ensemble algorithm (Ada-AdaSVM) is then used for feature selection and classification. The proposed technique is evaluated using the Bosphorus, BU-3DFE, MMI, CK+, and BP4D-Spontaneous facial expression databases. The overall performance is outstanding.
Keywords: facial expressions; three-dimensional head pose; ellipsoidal model; Gabor filters; Ada-AdaSVM
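The Gabor-filter feature extraction step can be sketched with OpenCV's getGaborKernel; the orientations, wavelengths, and kernel size below are common defaults rather than the paper's settings:

```python
import cv2
import numpy as np

def gabor_features(gray_face, ksize=31):
    """Filter a grayscale face patch with a small Gabor bank (4 orientations x 2
    wavelengths) and return the mean and standard deviation of each response."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):     # orientations
        for lambd in (8.0, 16.0):                    # wavelengths in pixels
            # args: ksize, sigma, theta, lambd, gamma, psi
            kernel = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, 0)
            response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])
    return np.asarray(feats, dtype=np.float32)
```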