Facial expressions are the most direct channel for conveying human emotions. Psychologists have established the universality of six prototypic basic facial expressions of emotion, which they believe are consistent among cultures and races. However, some recent cross-cultural studies have questioned, and to some degree refuted, this cultural universality. Therefore, in order to contribute to the theory of cultural specificity of basic expressions, from a composite viewpoint of psychology and HCI (Human-Computer Interaction), this paper presents a methodical analysis of Western-Caucasian and East-Asian prototypic expressions focused on four facial regions: forehead, eyes-eyebrows, mouth and nose. Our analysis is based on facial expression recognition and visual analysis of facial expression images from two datasets composed of four standard databases: CK+, JAFFE, TFEID and JACFEE. A hybrid feature extraction method based on Fourier coefficients is proposed for the recognition analysis. In addition, we present a cross-cultural human study of 40 subjects as a baseline, as well as a comparison of facial expression recognition performance with previous cross-cultural tests from the literature. This work clarifies the prior considerations for working with multicultural facial expression recognition and helps identify the specific differences between Western-Caucasian and East-Asian basic expressions of emotion.
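As a hedged sketch of the kind of Fourier-coefficient feature extraction this abstract mentions (the region size, frequency window, and random data below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a cropped facial region (e.g. the mouth), 32 x 32 gray levels.
region = rng.random((32, 32))

# Low-frequency Fourier magnitudes as a compact texture descriptor.
spectrum = np.fft.fft2(region)
magnitudes = np.abs(np.fft.fftshift(spectrum))
center = 16
low_freq = magnitudes[center - 4 : center + 4, center - 4 : center + 4]
feature = low_freq.ravel()
print(feature.shape)  # (64,)
```

Such a descriptor could then be computed per facial region and concatenated before classification.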
A novel fuzzy linear discriminant analysis method based on canonical correlation analysis (fuzzy-LDA/CCA) is presented and applied to facial expression recognition. The fuzzy method is used to evaluate the degree of class membership to which each training sample belongs. CCA is then used to establish the relationship between each facial image and the corresponding class membership vector, and the class membership vector of a test image is estimated from this relationship. Moreover, the fuzzy-LDA/CCA method is generalized to nonlinear discriminant analysis problems via the kernel method. The performance of the proposed method is demonstrated on real data.
Facial Expression Recognition (FER) has been an interesting area of research wherever there is human-computer interaction. Human psychology, emotions and behaviors can be analyzed through FER. Classifiers used in FER perform well on unoccluded faces but have been found to be constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that mediate this communication between human and machine are termed Human-Machine Interaction (HMI) systems. FER can improve HMI systems, as human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolutional Neural Network with Attention mechanism) to recognize seven types of human emotions, with satisfactory experimental results. The proposed EECNN achieved 89.8% accuracy in classifying the images.
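The attention mechanism inside EECNN is not specified in this abstract; as one hedged NumPy illustration, a generic spatial soft-attention over a convolutional feature map might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_attention(feature_map):
    """Reweight a C x H x W feature map by a softmax over spatial positions."""
    C, H, W = feature_map.shape
    scores = feature_map.mean(axis=0).ravel()  # one score per spatial position
    e = np.exp(scores - scores.max())          # numerically stable softmax
    weights = (e / e.sum()).reshape(H, W)
    return feature_map * weights, weights

fmap = rng.normal(size=(8, 4, 4))
attended, weights = spatial_attention(fmap)
print(weights.sum())  # softmax weights sum to 1
```

In a trained network the scores would be produced by learned parameters rather than a channel mean.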
In computer vision, emotion recognition from facial expression images is considered an important research issue. Deep learning advances in recent years have helped attain improved results on this problem. According to recent studies, multiple facial expressions may be present in facial photographs representing a particular type of emotion, so it is feasible and useful to convert face photos into collections of visual words and carry out global expression recognition. The main contribution of this paper is a facial expression recognition model (FERM) based on an optimized Support Vector Machine (SVM). To test the performance of the proposed model, AffectNet is used. AffectNet used 1,250 emotion-related keywords in six different languages to query three major search engines and collect over 1,000,000 facial photos online. The FERM is composed of three main phases: (i) data preparation, (ii) grid-search optimization, and (iii) categorization. Linear discriminant analysis (LDA) is used to categorize the data into eight labels (neutral, happy, sad, surprised, fear, disgust, angry, and contempt). Using LDA markedly enhances the performance of categorization via SVM. Grid search is used to find the optimal values of the SVM hyperparameters (C and gamma). The proposed optimized SVM algorithm achieved 99% accuracy and a 98% F1 score.
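The grid search over C and gamma can be sketched with scikit-learn's GridSearchCV; the Iris dataset and grid values below are stand-in assumptions, not the paper's setup:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Iris stands in for the LDA-reduced expression features.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Grid search over the two RBF-SVM hyperparameters named in the abstract.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))
```

The same pattern scales to the eight-label expression data once the LDA step has produced the reduced features.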
The learning status of learners directly affects the quality of learning. Compared with offline teachers, it is difficult for online teachers to capture the learning status of a whole class of students, and it is even more difficult to keep paying attention to students while teaching. Therefore, this paper proposes an online learning state analysis model based on a convolutional neural network and multi-dimensional information fusion. Specifically, a facial expression recognition model and an eye state recognition model are constructed to detect students' emotions and fatigue, respectively. By integrating the detected data with homework test scores collected after online learning, an analysis model of students' online learning status is constructed. Following the PAD model, the learning state is expressed along three dimensions (students' understanding, engagement and interest) and then analyzed from multiple perspectives. Finally, the proposed model is applied to actual teaching: a procedural analysis of 5 different types of online classroom learners is carried out, and the validity of the model is verified by comparison with the results of a manual analysis.
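A toy sketch of the fusion step, with purely illustrative weights (the abstract does not give the paper's actual fusion rule):

```python
def learning_state(emotion_pos, fatigue, quiz_score):
    """Fuse three normalized signals (each in [0, 1]) into the three
    dimensions named in the abstract. The weights are illustrative
    assumptions, not values from the paper."""
    alertness = 1.0 - fatigue
    understanding = 0.7 * quiz_score + 0.3 * alertness
    engagement = 0.6 * alertness + 0.4 * emotion_pos
    interest = 0.6 * emotion_pos + 0.4 * alertness
    return {"understanding": understanding,
            "engagement": engagement,
            "interest": interest}

state = learning_state(emotion_pos=0.8, fatigue=0.2, quiz_score=0.9)
print(state)
```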
This study analyzes live facial videos to recognize nonverbal learning-related facial movements and head poses and thereby discover the learning status of students. First, color and depth facial videos captured by a Kinect are analyzed for face tracking using a three-dimensional (3D) active appearance model (AAM). Second, the facial feature vector sequences are used to train hidden Markov models (HMMs) to recognize seven learning-related facial movements (smile, blink, frown, shake, nod, yawn, and talk). The final stage analyzes the facial movement vector sequence to evaluate three status scores (understanding, interaction, and consciousness), each of which represents the learning status of a student and is helpful to both teachers and students for improving teaching and learning. Five teaching activities demonstrate that the proposed learning status analysis system promotes interpersonal communication between teachers and students.
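HMM decoding of a movement sequence rests on the Viterbi algorithm; here is a self-contained sketch on a toy two-state model (the states, observations, and probabilities are invented for illustration, not the paper's seven-movement models):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete HMM (Viterbi DP)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

states = ("attentive", "tired")
start_p = {"attentive": 0.6, "tired": 0.4}
trans_p = {"attentive": {"attentive": 0.7, "tired": 0.3},
           "tired": {"attentive": 0.4, "tired": 0.6}}
emit_p = {"attentive": {"nod": 0.5, "blink": 0.4, "yawn": 0.1},
          "tired": {"nod": 0.1, "blink": 0.3, "yawn": 0.6}}

path = viterbi(("nod", "blink", "yawn"), states, start_p, trans_p, emit_p)
print(path)  # ['attentive', 'attentive', 'tired']
```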
Various strong forces drive customers to rely on evaluated data when using social media platforms and microblogging sites. Today, customers throughout the world share their points of view on all kinds of topics through these sources. The massive volume of data created by these customers makes it impossible to analyze such data manually, so an efficient and intelligent method for evaluating social media data and their divergence needs to be developed. Various types of equipment and techniques are now available for automatically estimating sentiment classifications. Sentiment analysis involves determining people's emotions, here using facial expressions, and can be performed for any individual based on specific incidents. The present study describes the analysis of an image dataset using CNNs with PCA, intended to detect people's sentiments (specifically, whether a person is happy or sad). This process is optimized using a genetic algorithm to obtain better results. Further, a comparative analysis has been conducted between the different models generated by changing the mutation factor, performing batch normalization, and applying feature reduction using PCA. These steps are carried out across five experiments using the Kaggle dataset. The maximum accuracy obtained is 96.984%, for the Happy and Sad sentiments.
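A minimal genetic-algorithm sketch showing the role of the mutation factor, on a toy one-dimensional objective rather than the paper's CNN hyperparameters (all choices here are illustrative):

```python
import random

random.seed(42)

def fitness(x):
    return -(x - 3.0) ** 2  # toy objective with its peak at x = 3

def evolve(pop_size=30, generations=60, mutation_factor=0.3):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the fittest half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                       # midpoint crossover
            child += random.gauss(0, mutation_factor)  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Raising or lowering `mutation_factor` trades exploration against convergence speed, which is the comparison the abstract describes.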
Parotid secretory protein (PSP) is secreted abundantly in saliva, and its function is related to antibacterial activity. The PSP cDNA was isolated from pig parotid glands by 3′ and 5′ rapid amplification of cDNA ends (RACE), based on the signal peptide region conserved among the known mammalian PSPs. Homology comparison shows that pig PSP and human PSP share high identity at the level of primary, secondary and tertiary protein structure. A search for functionally significant protein motifs revealed a unique amino acid sequence pattern consisting of the residues Leu-X(6)-Leu-X(6)-Leu-X(7)-Leu-X(6)-Leu-X(6)-Leu near the amino-terminal portion of the protein, which is important to its function. RT-PCR, dot blot and Northern blot analyses demonstrated that PSP is strongly expressed in parotid glands, but not in other tissues.
This paper proposes a methodology for using multi-modal gameplay data to detect outlier behavior. The proposed methodology collects, synchronizes, and quantifies time-series data from webcams, mice, and keyboards. Facial expressions are placed on a one-dimensional pleasure axis, and changes in expression in the mouth and eye areas are detected separately. Furthermore, the keyboard and mouse input frequencies are tracked to determine the interaction intensity of users. Then, we apply a dynamic time warping (DTW) algorithm to detect outlier behavior. The detected outlier graph patterns were play patterns that the game designer did not intend, or play patterns that differed greatly from those of other users. These outlier patterns can provide game designers with feedback on users' actual play experiences. Our results can be applied in the game industry as game user-experience analysis, enabling a quantitative evaluation of how exciting a game is.
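The DTW step can be sketched with the standard dynamic-programming recurrence (a generic implementation, not the paper's code):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeat
print(dtw_distance([0, 0], [0, 1]))           # 1.0
```

A session whose DTW distance to the other users' sessions is unusually large would then be flagged as an outlier.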
Statistical two-group comparisons are widely used to identify significant differentially expressed (DE) signatures against a therapy response in microarray data analysis. We applied a rank order statistic based on an Autoregressive Conditional Heteroskedasticity (ARCH) residual empirical process to DE analysis. This approach was evaluated on simulated data and publicly available datasets, and was compared with two-group comparison on the original data and on Autoregressive (AR) residuals. The significant DE genes found from the ARCH and AR residuals were reduced by about 20%-30% relative to those found from the original data. Almost 100% of the genes found by ARCH are covered by the genes found from the original data, unlike the genes found by AR residuals. GO enrichment and pathway analyses indicate consistent biological characteristics between the genes from ARCH residuals and those from the original data. ARCH-residual array data might thus help refine the number of significant DE genes while detecting the same biological features as the original microarray data.
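A hedged sketch of a rank-order two-group comparison per gene, using SciPy's rank-sum test on synthetic expression data; the ARCH-residual preprocessing that is the paper's contribution is omitted here:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)

# Synthetic expression matrix: 100 genes x 12 samples (6 per group).
# The first 10 genes are strongly shifted in group 2 (true DE genes).
expr = rng.normal(size=(100, 12))
expr[:10, 6:] += 5.0

# Rank-sum test per gene between the two sample groups.
pvals = np.array([ranksums(row[:6], row[6:]).pvalue for row in expr])
de_genes = np.flatnonzero(pvals < 0.01)
print(len(de_genes))
```

In the paper, each gene's series would first be replaced by its ARCH residuals before this kind of two-group comparison.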
Cyberspace has significantly influenced people's perceptions of social interaction and communication. As a result, the conventional theories of kin selection and reciprocal altruism fall short of completely elucidating online prosocial behavior. Based on the social information processing model, we propose an analytical framework to explain donation behaviors on online platforms. By collecting textual and visual data on disease-relief projects from the Tencent Gongyi platform, and employing text analysis, image analysis, and propensity score matching, we investigate the impact of both internal emotional cues and external contextual cues on donation behaviors. We find that positive emotions tend to attract a larger number of donations, while negative emotions tend to result in higher per-capita donation amounts. Furthermore, these effects manifest differently under distinct external contextual conditions.
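Propensity score matching, as used above, can be sketched as a logistic propensity model followed by greedy nearest-neighbor matching; the data and variable names below are synthetic stand-ins, not the authors' specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic projects: covariates X and a binary "positive-emotion" treatment
# whose assignment depends on the covariates (confounded, hence matching).
n = 200
X = rng.normal(size=(n, 3))
treat = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# 1) Estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2) Greedily match each treated unit to the control with the closest score.
treated = np.flatnonzero(treat == 1)
controls = list(np.flatnonzero(treat == 0))
pairs = []
for t in treated:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)  # match without replacement

print(len(pairs))
```

Outcome differences would then be compared within the matched pairs rather than across the raw groups.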
Automatic facial expression recognition (FER) from non-frontal views is a challenging research topic that has recently started to attract the attention of the research community. Pose variations are difficult to tackle, and many face analysis methods require sophisticated normalization and initialization procedures; thus head-pose-invariant facial expression recognition remains an issue for traditional methods. In this paper, we propose a novel approach to pose-invariant FER based on pose-robust features learned by deep learning methods: a principal component analysis network (PCANet) and convolutional neural networks (CNN) (PRP-CNN). In the first stage, unlabeled frontal face images are used to learn features with PCANet. In the second stage, these features are used as the target of a CNN that learns a feature mapping between frontal and non-frontal faces. We then describe non-frontal face images using the novel descriptions generated by the maps, obtaining unified descriptors for arbitrary face images. Finally, the pose-robust features are used to train a single classifier for FER, instead of training multiple models for each specific pose. On the whole, our method does not require pose/landmark annotation and can recognize facial expressions over a wide range of orientations. Extensive experiments on two public databases show that our framework yields dramatic improvements in facial expression analysis.
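The first PCANet stage learns its filter bank by PCA over local image patches; a hedged NumPy sketch (the patch size, filter count, and random images are arbitrary choices here, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_filters(images, k=7, num_filters=4):
    """Learn a first-stage PCANet-style filter bank: PCA on all k x k patches."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i : i + k, j : j + k].ravel()
                patches.append(p - p.mean())  # remove the patch mean
    P = np.array(patches)
    # Eigenvectors of the patch covariance, largest eigenvalues first.
    cov = P.T @ P / len(P)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, ::-1][:, :num_filters].T.reshape(num_filters, k, k)

images = rng.normal(size=(5, 16, 16))
filters = pca_filters(images)
print(filters.shape)  # (4, 7, 7)
```

Each learned filter is then convolved with the input, and the responses feed the next stage.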
Funding (fuzzy-LDA/CCA study): the National Natural Science Foundation of China (Nos. 60503023 and 60872160), the Natural Science Foundation for Universities of Jiangsu Province (No. 08KJD520009), and the Intramural Research Foundation of Nanjing University of Information Science and Technology (No. Y603).
Funding (online learning state analysis study): the Chongqing Normal University Graduate Scientific Research Innovation Project (Grants YZH21014 and YZH21010).
Funding (PSP study): the National Major Basic Research Development Program (Grant No. G20000161) and the Beijing Natural Science Foundation (Grant No. 5030001).
Funding (multi-modal gameplay study): the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3058103).