With the rapid development of artificial intelligence and natural language processing (NLP), research on music retrieval has gained importance. Music conveys emotional signals, and the emotional classification of music helps in conveniently organizing and retrieving it; it is also a prerequisite for using music in psychological intervention and physiological adjustment. A new chord-to-vector method was proposed, which converts the chord information of a piece of music into a chord vector and combines weighted Mel-frequency cepstral coefficient (MFCC) and residual phase (RP) features with the feature fusion of a cochleogram. Music emotion recognition and classification training was carried out using a fusion of a convolutional neural network and bidirectional long short-term memory (BiLSTM). In addition, the proposed model was compared with other model structures on a self-collected dataset. The results show that the proposed method achieved higher recognition accuracy than the other models.
The emotion cause extraction (ECE) task, which aims at extracting the potential trigger events of given emotions, has attracted extensive attention recently. However, current work neglects implicit emotion, i.e., emotion expressed without any explicit emotional keywords, which appears more frequently in application scenarios. The lack of explicit emotion information makes it extremely hard to extract emotion causes from the local context alone. Moreover, an entire event usually spans multiple clauses, while existing work merely extracts cause events at the clause level and cannot effectively capture complete cause-event information. To address these issues, events are first redefined at the tuple level, and a span-based tuple-level algorithm is proposed to extract events across different clauses. On this basis, a corpus for implicit emotion cause extraction is constructed. The authors propose a knowledge-enriched joint-learning model of implicit emotion recognition and implicit emotion cause extraction (KJ-IECE), which leverages commonsense knowledge from ConceptNet and NRC_VAD to better capture the connections between emotions and their corresponding cause events. Experiments on both implicit and explicit emotion cause extraction datasets demonstrate the effectiveness of the proposed model.
With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current work uses learning-platform comments or learners' facial expressions for emotion recognition, and research data on other modalities are scarce. Most studies also ignore the impact of the instructional videos themselves on learners, as well as the guidance that knowledge can provide to the data. To address the need for data in other modalities, we construct a synchronized multimodal dataset for analyzing learners' emotional states in online learning scenarios. The dataset records the eye-movement data and photoplethysmography (PPG) signals of 68 subjects, together with the instructional videos they watched. To address the neglect of instructional videos and of knowledge, a knowledge-enhanced multimodal emotion recognition method for video learning is proposed. This method uses knowledge-based features extracted from instructional videos, such as brightness, hue, saturation, the videos' click-through rate, and emotion generation time, to guide the emotion recognition process applied to the physiological signals. Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks extract deeper emotional representations and spatiotemporal information from the shallow features. A multi-head attention (MHA) mechanism then selects the critical information in the extracted deep features, and a Temporal Convolutional Network (TCN) learns from both the deep features and the knowledge-based features, with the knowledge-based features supplementing and enhancing the deep features of the physiological signals. Finally, a fully connected layer performs emotion recognition, and the recognition accuracy reaches 97.51%. Compared with two recent studies, the accuracy improves by 8.57% and 2.11%, respectively, and on four public datasets the proposed method also achieves better results than those two studies. The experimental results show that the proposed knowledge-enhanced multimodal emotion recognition method has good performance and robustness.
In recent years, research on facial expression recognition (FER) under masks has been trending. Wearing a mask for protection from COVID-19 has become compulsory, and because a mask hides facial expressions, FER under masks is a difficult task. The prevailing unimodal techniques for facial recognition do not produce good results for masked faces; however, a multimodal technique can be employed to generate better results. We propose a deep-learning-based multimodal methodology for facial emotion recognition under masks that uses both facial and vocal expressions. The multimodal network has been trained on facial and vocal datasets: M-LFW for masked faces, and the CREMA-D and TESS datasets for vocal expressions. Because the vocal expressions are audio while the face data are images, the data are heterogeneous. To make the data homogeneous, the voice data are converted into images by taking spectrograms; a spectrogram embeds important features of the voice and converts the audio format into an image. The datasets are then passed to the multimodal network for training, and the experimental results demonstrate that the proposed multimodal algorithm outperforms unimodal methods and other state-of-the-art deep neural network models.
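The audio-to-image conversion described above can be sketched with a plain short-time Fourier transform: slide a window over the signal and take the magnitude of each frame's DFT, giving a time-frequency grid that can be rendered as an image. The frame size, hop length, and Hann window below are illustrative choices, not the paper's exact settings.

```python
import cmath
import math

def spectrogram(signal, frame_size=64, hop=32):
    """Magnitude spectrogram: one DFT per windowed frame. Each frame's
    spectrum becomes one column of the eventual image."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # Hann window reduces spectral leakage at frame edges
        windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * i / (frame_size - 1)))
                    for i, x in enumerate(frame)]
        # Magnitude of the first half of the DFT (real input is symmetric)
        spectrum = [abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                            for n in range(frame_size)))
                    for k in range(frame_size // 2)]
        frames.append(spectrum)
    return frames  # list of frames, each a list of frequency-bin magnitudes

# A sine with 8 cycles per 64-sample frame: energy concentrates in bin 8
sig = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(sig)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

In practice a library routine (e.g. an STFT from an audio toolkit) would replace this loop; the sketch only shows why the resulting array can be treated as an image.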
Arabic is one of the most widely spoken languages across the globe; however, there are comparatively few studies concerning sentiment analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest, and the substantial role played by the Arab region in international politics and the global economy has created a need to examine sentiments and emotions in the Arabic language. Two common families of approaches are available for emotion classification: machine learning and lexicon-based methods. With this motivation, the current research article develops a Teaching and Learning Optimization with Machine Learning Based Emotion Recognition and Classification (TLBOML-ERC) model for sentiment analysis of tweets written in Arabic. The presented TLBOML-ERC model focuses on recognising the emotions and sentiments expressed in Arabic tweets. To attain this, the proposed model initially carries out data pre-processing and a Continuous Bag Of Words (CBOW)-based word embedding process. In addition, a Denoising Autoencoder (DAE) model is exploited to categorise the different emotions expressed in Arabic tweets. To improve the efficacy of the DAE model, the Teaching and Learning-based Optimization (TLBO) algorithm is used to optimize its parameters. The proposed TLBOML-ERC method was experimentally validated on an Arabic tweets dataset, and the obtained results show its promising performance on Arabic emotion classification.
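As a sketch of the CBOW embedding step mentioned above: CBOW learns word vectors by predicting each word from the words surrounding it within a context window. The pair-generation below is a minimal illustration; the window size and whitespace tokenization are assumptions, not the paper's settings.

```python
def cbow_pairs(tokens, window=2):
    """Build (context, target) training pairs for a CBOW word-embedding
    model: each word is the target, predicted from its neighbours
    within `window` positions on either side."""
    pairs = []
    for i, target in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        if context:
            pairs.append((context, target))
    return pairs

pairs = cbow_pairs("the cat sat on the mat".split())
```

A real embedding model (e.g. word2vec's CBOW variant) then trains a shallow network on these pairs; only the pair construction is shown here.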
At present, more than 16 channels are typically used for EEG acquisition, which makes EEG caps difficult to wear and prone to poor electrode contact. This complicates brainwave collection and hinders turning research into applications. It is therefore well worth studying how to find the key electrode points within existing EEG montages, since doing so would greatly reduce the number of acquisition points needed in practice and make it easier to translate research into real applications. This paper takes emotional EEG as an example and studies how to find the key electrode points of emotional EEG with a deep learning method. First, a least-squares regression algorithm is used to calculate a characteristic coefficient for each electrode point; second, according to the pattern of the characteristic coefficient values, the electrode points are grouped for experiments. In the grouping experiments, a Conv1d-GRU model is trained and validated on the EEG data of the corresponding electrode points. Finally, from the results of the various grouping experiments, it is concluded that the key electrode points should be selected as those with positive characteristic coefficients, and the validation accuracy is 97.6%. The experiments confirm that there are key electrode points in the detection of emotional EEG with the 16-channel OpenBCI: only six key electrode points are needed, that is, the EEG data collected from just six key electrode points can identify seven kinds of emotional EEG.
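A minimal sketch of the electrode-selection idea: fit a least-squares regression over per-channel features and keep the channels whose coefficients are positive. The normal-equations solver and the toy data below are illustrative; the paper's actual features and its Conv1d-GRU classifier are not reproduced here.

```python
def least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gaussian
    elimination, giving one coefficient per input channel."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# Toy data: channel 0 contributes positively, channel 1 negatively
X = [[1, 2], [2, 1], [3, 4], [4, 3]]
y = [x0 * 2 - x1 for x0, x1 in X]
w = least_squares(X, y)
key_channels = [i for i, wi in enumerate(w) if wi > 0]  # keep positive-coefficient channels
```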
To overcome the high computational complexity and low convergence speed of traditional neural networks, a novel bio-inspired machine learning algorithm named brain emotional learning (BEL) is introduced. BEL mimics the emotional learning mechanism of the brain, which has the superior features of fast learning and quick reaction. To further improve the performance of BEL in data analysis, a genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal-cortex components of the BEL neural network. The integrated algorithm, named GA-BEL, combines the fast learning of BEL with the global optimization capability of GA. GA-BEL has been tested on a real-world chaotic time series of a geomagnetic activity index for prediction, on eight benchmark datasets from the University of California at Irvine (UCI) repository, and on a functional magnetic resonance imaging (fMRI) dataset for classification. Comparisons of the experimental results show that the proposed GA-BEL algorithm is more accurate than the original BEL in prediction and more effective when dealing with large-scale classification problems. Furthermore, it outperforms most other traditional algorithms in terms of accuracy and execution speed in both prediction and classification applications.
Recently, people have been paying more and more attention to mental health, including depression, autism, and other common mental disorders. Intelligent methods have been actively studied to support mental disease diagnosis. However, existing models suffer from accuracy degradation caused by low clarity and occlusion of human faces in practical applications. This paper therefore proposes a multi-scale feature fusion network that obtains feature information at three scales by locating the sentiment region in the image, and integrates global and local feature information. In addition, a focal cross-entropy loss function is designed to increase the network's focus on difficult samples during training, enhancing the training effect and increasing the model's recognition accuracy. Experimental results on the challenging RAF_DB dataset show that the proposed model achieves better facial expression recognition accuracy than existing techniques.
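The focal idea behind such a loss can be sketched as follows: a (1 - p_t)^gamma factor shrinks the loss on well-classified (easy) samples so training gradient is dominated by hard ones. The gamma value and probability vectors here are illustrative, not the paper's exact formulation.

```python
import math

def focal_cross_entropy(p, target, gamma=2.0):
    """Per-sample focal cross-entropy. `p` is a predicted probability
    vector, `target` the true class index. gamma = 0 recovers plain
    cross-entropy; larger gamma down-weights easy samples more."""
    pt = p[target]
    return -((1 - pt) ** gamma) * math.log(pt)

easy = focal_cross_entropy([0.05, 0.95], target=1)  # confident prediction
hard = focal_cross_entropy([0.60, 0.40], target=1)  # uncertain prediction
```

With gamma = 2, the confident sample contributes a loss several orders of magnitude smaller than the uncertain one, which is the intended focusing effect.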
To improve the efficiency of speech emotion recognition across corpora, a speech emotion transfer learning method based on a deep sparse auto-encoder is proposed. The algorithm first trains the deep sparse auto-encoder to reconstruct a small amount of data from the target domain, so that the encoder learns a low-dimensional structural representation of the target-domain data. Then, both the source-domain data and the target-domain data are encoded by the trained deep sparse auto-encoder to obtain reconstructed data in a low-dimensional structural representation close to the target domain. Finally, part of the reconstructed labeled target-domain data is mixed with the reconstructed source-domain data to jointly train the classifier; this portion of target-domain data guides the source-domain data. Experiments on the CASIA and SoutheastLab corpora show that, after transferring only a small amount of data, the model's recognition rate reaches 89.2% and 72.4% on the DNN. Compared with training on the complete original corpora, accuracy decreases by only 2% on the CASIA corpus and only 3.4% on the SoutheastLab corpus. The experiments show that the algorithm can achieve the effect of labeling all data in the extreme case where the dataset has only a small amount of labeled data.
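The final mixing step described above can be sketched as follows. The encoding itself (the deep sparse auto-encoder) is assumed to have been applied already; this only shows combining the re-encoded labeled target samples with the re-encoded source samples into one shuffled training set. The data and function name are illustrative.

```python
import random

def mix_for_transfer(encoded_source, encoded_target_labeled, seed=0):
    """Combine re-encoded source-domain samples with a small set of
    re-encoded labeled target-domain samples so the target samples can
    guide classifier training; shuffle so batches interleave domains."""
    mixed = list(encoded_source) + list(encoded_target_labeled)
    random.Random(seed).shuffle(mixed)
    return mixed

src = [([0.1, 0.2], "angry"), ([0.3, 0.1], "happy")]  # (encoded features, label)
tgt = [([0.2, 0.2], "sad")]
train = mix_for_transfer(src, tgt)
```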
Overseas research has shown that achievement emotions have direct relationships with "achievement outcome" and "achievement activities". The purpose of the present study was to compare the relationships between achievement emotions, motivation, and language learning strategies of high, mid, and low achievers in English language learning at an international university in a southern province of China. Quantitative data were collected through a questionnaire survey of 74 (16 male, 58 female) TESL major students. Results indicated that students in general experienced more positive than negative achievement emotions, were more intrinsically than extrinsically motivated to learn English, and quite frequently used a variety of learning strategies to overcome their learning difficulties. However, Year Four low achievers experienced more negative achievement emotions; they seldom used metacognitive, affective, and social learning strategies, and they had lower degrees of intrinsic motivation. Implications for institutional support for at-risk students are discussed.
Due to the widespread use of social media in daily life, sentiment analysis has become an important field in pattern recognition and natural language processing (NLP). In this field, users' feedback data on a specific issue are evaluated and analyzed. Detecting emotions within text is therefore considered one of the important challenges of current NLP research. Emotions have been widely studied in psychology and behavioral science as an integral part of human nature: they describe a state of mind with distinct behaviors, feelings, thoughts, and experiences. The main objective of this paper is to propose a new model, named BERT-CNN, to detect emotions from text. The model combines Bidirectional Encoder Representations from Transformers (BERT) with Convolutional Neural Networks (CNN) for text classification: BERT trains the word semantic representation language model, the semantic vector is dynamically generated according to the word context, and the vector is then fed into the CNN to predict the output. Results of a comparative study show that the BERT-CNN model surpasses the state-of-the-art baselines produced by different models in the literature on the SemEval-2019 Task 3 and ISEAR datasets. The BERT-CNN model achieves an accuracy of 94.7% and an F1-score of 94% on the SemEval-2019 Task 3 dataset, and an accuracy of 75.8% and an F1-score of 76% on the ISEAR dataset.
Emotion detection from text is a challenging problem in text analytics. Opinion mining experts are focusing on the development of emotion detection applications, which have received considerable attention from the online community, including users and business organizations, for collecting and interpreting public emotions. However, most existing work on emotion detection uses less efficient machine learning classifiers with limited datasets, resulting in performance degradation. To overcome this issue, this work evaluates the performance of different machine learning classifiers on a benchmark emotion dataset. The experimental results report the classifiers' performance in terms of evaluation metrics such as precision, recall, and F-measure. Finally, the classifier with the best performance is recommended for emotion classification.
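The metrics named above can be computed per class as follows; the emotion labels in the example are made up for illustration.

```python
def prf(y_true, y_pred, positive):
    """Precision, recall, and F-measure for one class: precision is the
    fraction of predicted positives that are correct, recall the
    fraction of actual positives recovered, and F1 their harmonic mean."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = prf(["joy", "joy", "anger", "joy"],
              ["joy", "anger", "anger", "joy"], positive="joy")
```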
Due to the lack of large-scale emotion databases, it is hard to obtain, through deep learning, improvements in multimodal emotion recognition comparable to the great progress deep neural networks have made in other areas. We use transfer learning to improve performance with models pretrained on large-scale data: audio is encoded using deep speech recognition networks trained on 500 hours of speech, and video is encoded using convolutional neural networks trained on over 110,000 images. The extracted audio and visual features are fed into Long Short-Term Memory networks to train a model for each modality, and logistic regression and an ensemble method are used for decision-level fusion. The experimental results indicate that 1) audio features extracted from deep speech recognition networks achieve better performance than handcrafted audio features; 2) visual emotion recognition obtains better performance than audio emotion recognition; and 3) the ensemble method performs better than logistic regression, and prior knowledge from the micro-F1 value further improves performance and robustness, achieving accuracies of 67.00% for "happy", 54.90% for "angry", and 51.69% for "sad".
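Decision-level fusion as described above can be sketched simply: each modality's classifier outputs a per-class probability vector, and the fused decision is the argmax of a combination of them. A weighted average is shown here purely for illustration; the paper itself compares logistic regression and an ensemble, and the weights below are assumptions.

```python
def fuse_decisions(audio_probs, visual_probs, w_audio=0.4, w_visual=0.6):
    """Fuse two modalities' class-probability dicts by weighted
    averaging and return the winning class label."""
    fused = {c: w_audio * audio_probs[c] + w_visual * visual_probs[c]
             for c in audio_probs}
    return max(fused, key=fused.get)

label = fuse_decisions({"happy": 0.2, "angry": 0.5, "sad": 0.3},
                       {"happy": 0.6, "angry": 0.2, "sad": 0.2})
```

Here the stronger visual evidence for "happy" outweighs the audio model's preference for "angry", mirroring result 2) above (visual recognition outperforming audio).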
Textual Emotion Analysis (TEA) aims to extract and analyze users' emotional states in text. Various deep learning (DL) methods have developed rapidly and have proven successful in many fields such as audio, image, and natural language processing; this trend has drawn increasing numbers of researchers away from traditional machine learning to DL. In this paper, we provide an overview of TEA based on DL methods. After introducing the background of emotion analysis, including the definition of emotion, emotion classification methods, and the application domains of emotion analysis, we summarize DL technology and word/sentence representation learning methods. We then categorize existing TEA methods by text structure and linguistic type: text-oriented monolingual methods, text-conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing the challenges of emotion analysis and future research trends. We hope that this survey will help readers understand the relationship between TEA and DL methods and support the further development of TEA.
Nowadays, millions of users use social media systems every day. These services produce massive numbers of messages, which play a vital role in the social networking paradigm, and an intelligent emotion-learning system is sorely needed for detecting emotion in these messages. Such a system could help in understanding users' feelings towards a particular discussion. This paper proposes a text-based emotion recognition approach that uses personal text data to recognize a user's current emotion. The proposed approach applies the Dominant Meaning Technique to recognize the user's emotion, and the paper reports promising experimental results on the tested dataset using the proposed algorithm.
At the beginning of 2020, COVID-19 emerged. Affected by the outbreaks, universities had to carry out online teaching. Online learning provides students with considerable freedom and a personalized learning space, but it also brings problems such as weakened emotional bonds between teachers and students and a diminished learning experience. To address these problems, this paper adopts questionnaire surveys, controlled experiments, and behavioral modeling. It studies how teachers' emotional support behavior affects students' learning process and learning emotions in an online learning environment, and finds that such behavior is sought and desired by students: positive teacher emotional support can promote students' learning process and improve their learning emotions.
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research develops an FER system using a Faster Region-based Convolutional Neural Network (FRCNN), designing a specialized FRCNN architecture tailored for facial emotion recognition that leverages its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, a precision of 97.2%, and a recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies; the high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications.
STEAM (science, technology, engineering, arts, and mathematics) education aims to cultivate innovative talents with multidimensional literacy through interdisciplinary integration and innovative practice. However, a lack of student motivation has emerged as a key factor hindering its effectiveness. This study explores the integrated application of positive emotions and flow experience in STEAM education from the perspective of positive psychology. It systematically explains how these factors enhance learning motivation and promote knowledge internalization, and proposes feasible pathways for instructional design, resource provision, environment creation, and team building. The study provides theoretical insights and practical guidance for transforming STEAM education in the new era.
Saul Bellow's work mainly describes people who cannot find their footing in American society, who dangle between the ideal and the real while holding on to a belief in love. Seize the Day presents a few hours in the life of a middle-aged man who has lost his job, is estranged from his wife and children, and has been deserted by his father. Through the twin methods of narration and recollection, the author deftly reveals how the individual is threatened by social prejudice and by the dominant position of money. He pays particular attention to the conflict between emotion and rationality to show people's anxiety and contradiction in today's America.
Funding (music emotion classification study): National Natural Science Foundation of China (No. 61801106).
Funding (implicit emotion cause extraction study): National Natural Science Foundation of China (Grant Nos. 61671064 and 61732005); National Key Research & Development Program (Grant No. 2018YFC0831700).
Funding (multimodal emotion recognition in video learning study): supported by the National Science Foundation of China (Grant Nos. 62267001 and 61906051).
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code 22UQU4340237DSR36. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding program, grant code NU/RG/SERC/11/7.
Abstract: Arabic is one of the most spoken languages across the globe, yet there are comparatively few studies concerning Sentiment Analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest, and the substantial role played by the Arab region in international politics and the global economy has urged the need to examine sentiments and emotions in the Arabic language. Two common approaches are available for emotion classification problems: machine learning and lexicon-based methods. With this motivation, the current research article develops a Teaching and Learning Optimization with Machine Learning Based Emotion Recognition and Classification (TLBOML-ERC) model for sentiment analysis of tweets made in the Arabic language. The presented TLBOML-ERC model focuses on recognising the emotions and sentiments expressed in Arabic tweets. To attain this, the proposed TLBOML-ERC model initially carries out data pre-processing and a Continuous Bag Of Words (CBOW)-based word embedding process. In addition, a Denoising Autoencoder (DAE) model is exploited to categorise the different emotions expressed in Arabic tweets. To improve the efficacy of the DAE model, the Teaching and Learning-based Optimization (TLBO) algorithm is utilized to optimize its parameters. The proposed TLBOML-ERC method was experimentally validated with the help of an Arabic tweets dataset. The obtained results show the promising performance of the proposed TLBOML-ERC model on Arabic emotion classification.
Abstract: At present, more than 16 channels are typically used for EEG acquisition, which makes EEG caps difficult to wear and prone to poor contact. This complicates brainwave collection and is not conducive to turning research into applications. Finding the key electrodes among the existing channels is therefore well worth studying, as it would greatly reduce the number of EEG acquisition points needed in applications and make it easier to translate research into practice. This paper takes emotional EEG as an example to study how to find the key electrode points of emotional EEG with a deep learning method. First, a least-squares regression algorithm is used to calculate the characteristic coefficient of each electrode point; second, according to the pattern of the characteristic coefficient values, the electrodes are grouped for experiments. In the grouping experiments, a Conv1d-GRU model is used to train on and verify the EEG data of the corresponding electrode points. Finally, from the results of the various grouping experiments, it is concluded that the key electrode points should be those with positive characteristic coefficients, and the verification accuracy is 97.6%. The experiments confirm that there are key electrode points in the detection of emotional EEG with the 16-channel OpenBCI: only six key electrode points are needed, that is, the EEG data collected from just six key electrode points can identify seven kinds of emotional EEG.
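The coefficient-based channel selection can be illustrated with synthetic data. This is a sketch, not the paper's code: the "EEG features" and labels below are simulated so that two channels carry the signal, and the paper's positive-coefficient rule is approximated with a small threshold (an assumption) so that near-zero noise coefficients are ignored.

```python
import numpy as np

# Synthetic stand-in for per-channel EEG features: 200 trials, 16 channels;
# the labels are driven mainly by channels 0 and 3 by construction.
rng = np.random.default_rng(0)
n_trials, n_channels = 200, 16
X = rng.standard_normal((n_trials, n_channels))
y = 0.9 * X[:, 0] + 0.6 * X[:, 3] + 0.05 * rng.standard_normal(n_trials)

# Least-squares characteristic coefficients, one per electrode point
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Keep electrodes with clearly positive coefficients (threshold is assumed)
key = [c for c in range(n_channels) if coef[c] > 0.1]
print(key)  # [0, 3]
```

On real EEG the coefficients would come from regressing emotion labels on per-channel features, and the surviving channels would then be validated with the Conv1d-GRU grouping experiments.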
Funding: Project 61403422 supported by the National Natural Science Foundation of China; Project 17C1084 supported by the Hunan Education Department Science Foundation of China; Project 17ZD02 supported by Hunan University of Arts and Science, China.
Abstract: To overcome the deficiencies of high computational complexity and low convergence speed in traditional neural networks, a novel bio-inspired machine learning algorithm named brain emotional learning (BEL) is introduced. BEL mimics the emotional learning mechanism in the brain, which has the superior features of fast learning and quick reaction. To further improve the performance of BEL in data analysis, a genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The integrated algorithm, named GA-BEL, combines the fast learning of BEL with the global optimization of GA. GA-BEL has been tested on a real-world chaotic time series of a geomagnetic activity index for prediction, on eight benchmark datasets from the University of California at Irvine (UCI), and on a functional magnetic resonance imaging (fMRI) dataset for classification. Comparisons of the experimental results show that the proposed GA-BEL algorithm is more accurate than the original BEL in prediction and more effective when dealing with large-scale classification problems. Further, it outperforms most other traditional algorithms in terms of accuracy and execution speed in both prediction and classification applications.
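The GA tuning step can be sketched in miniature. This is not the BEL network itself: the fitness function below is a toy stand-in for the (negated) BEL prediction error over two hypothetical weights, and the selection/crossover/mutation operators are generic GA choices assumed for illustration.

```python
import random
random.seed(42)

def fitness(w):
    # Toy stand-in for negated BEL prediction error; optimum at w = (3, -1)
    return -((w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2)

def evolve(pop_size=40, gens=80, sigma=0.3):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            children.append([x + random.gauss(0, sigma) for x in child])
        pop = parents + children                         # elitism: parents kept
    return max(pop, key=fitness)

best = evolve()
print(best)
```

In GA-BEL the chromosome would instead encode the amygdala and orbitofrontal-cortex weights and biases, and fitness would be the model's error on training data.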
Abstract: Recently, people have been paying more and more attention to mental health issues such as depression, autism, and other common mental diseases, and intelligent methods for diagnosing them have been actively studied. However, existing models suffer from accuracy degradation caused by the clarity and occlusion of human faces in practical applications. This paper therefore proposes a multi-scale feature fusion network that obtains feature information at three scales by locating the sentiment region in the image and integrates global and local feature information. In addition, a focal cross-entropy loss function is designed to increase the network's focus on difficult samples during training, enhancing the training effect and increasing recognition accuracy. Experimental results on the challenging RAF_DB dataset show that the proposed model achieves better facial expression recognition accuracy than existing techniques.
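The focal weighting idea can be shown concretely. The paper does not give its exact formulation, so the sketch below uses the standard focal-loss form (cross-entropy scaled by (1 - p_t)^gamma), which matches the stated goal of down-weighting easy samples:

```python
import math

def focal_cross_entropy(probs, target, gamma=2.0):
    """Focal loss for one sample: scales cross-entropy by (1 - p_t)^gamma so
    confidently classified (easy) samples contribute little to training."""
    pt = probs[target]
    return -((1.0 - pt) ** gamma) * math.log(pt)

# A confident correct prediction vs. an uncertain one for the same true class
easy = focal_cross_entropy([0.05, 0.90, 0.05], target=1)
hard = focal_cross_entropy([0.60, 0.30, 0.10], target=1)
print(easy < hard)  # True
```

With gamma = 0 this reduces to ordinary cross-entropy; raising gamma shifts the gradient budget toward hard samples.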
Funding: The National Natural Science Foundation of China (Nos. 61871213, 61673108, 61571106); the Six Talent Peaks Project in Jiangsu Province (No. 2016-DZXX-023).
Abstract: To improve the efficiency of speech emotion recognition across corpora, a speech emotion transfer learning method based on a deep sparse autoencoder is proposed. The algorithm first trains the deep sparse autoencoder to reconstruct a small amount of data in the target domain, so that the encoder learns a low-dimensional structural representation of the target-domain data. Then, the source-domain and target-domain data are encoded by the trained deep sparse autoencoder to obtain reconstructed data whose low-dimensional structural representation is close to the target domain. Finally, part of the reconstructed labeled target-domain data is mixed with the reconstructed source-domain data to jointly train the classifier; this part of the target-domain data guides the source-domain data. Experiments on the CASIA and SoutheastLab corpora show that, after a small amount of data is transferred, the model recognition rate reaches 89.2% and 72.4%, respectively, on the DNN. Compared with training on the complete original corpora, the rate decreased by only 2% on the CASIA corpus and 3.4% on the SoutheastLab corpus. The experiments show that the algorithm can achieve the effect of labeling all data in the extreme case where only a small amount of the dataset is labeled.
Abstract: Overseas research has shown that achievement emotions have direct relationships with achievement outcomes and achievement activities. The purpose of the present study was to compare the relationships between achievement emotions, motivation, and language learning strategies of high, mid, and low achievers in English language learning at an international university in a southern province of China. Quantitative data were collected through a questionnaire survey of 74 (16 males, 58 females) TESL major students. Results indicated that students in general experienced more positive than negative achievement emotions, were more intrinsically than extrinsically motivated to learn English, and quite frequently used a variety of learning strategies to overcome their learning difficulties. However, Year Four low achievers experienced more negative achievement emotions; they seldom used metacognitive, affective, and social learning strategies, and they had lower degrees of intrinsic motivation. Implications for institutional support for at-risk students are discussed.
Abstract: Due to the widespread usage of social media in our daily lives, sentiment analysis has become an important field in pattern recognition and Natural Language Processing (NLP). In this field, users' feedback data on a specific issue are evaluated and analyzed. Detecting emotions within text is therefore considered one of the important challenges of current NLP research. Emotions have been widely studied in psychology and behavioral science as an integral part of human nature; they describe states of mind comprising distinct behaviors, feelings, thoughts, and experiences. The main objective of this paper is to propose a new model named BERT-CNN to detect emotions from text. The model combines Bidirectional Encoder Representations from Transformers (BERT) with Convolutional Neural Networks (CNN) for text classification: BERT trains the word semantic representation language model, a semantic vector is dynamically generated according to the word context, and this vector is then fed into the CNN to predict the output. A comparative study shows that the BERT-CNN model surpasses the state-of-the-art baselines produced by different models in the literature on the SemEval-2019 Task 3 and ISEAR datasets. The BERT-CNN model achieves an accuracy of 94.7% and an F1-score of 94% on the SemEval-2019 Task 3 dataset, and an accuracy of 75.8% and an F1-score of 76% on the ISEAR dataset.
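The CNN half of this pipeline can be sketched with NumPy. This is illustrative only: the random array below stands in for the contextual embeddings BERT would output, and a single convolutional filter with max-over-time pooling represents the textual-CNN stage (the paper's filter sizes and counts are not given, so these are assumptions).

```python
import numpy as np

# Hypothetical stand-in for BERT output: contextual embeddings for 12 tokens
rng = np.random.default_rng(1)
seq_len, hidden = 12, 16
embeddings = rng.standard_normal((seq_len, hidden))  # would come from BERT

# One filter of width 3 slides over the token axis ("valid" 1D convolution),
# then max-over-time pooling yields one scalar feature for the classifier.
kernel = rng.standard_normal((3, hidden))
conv = np.array([np.sum(embeddings[i:i + 3] * kernel)
                 for i in range(seq_len - 2)])
feature = conv.max()  # max-over-time pooling
print(conv.shape)  # (10,)
```

A real model would use many filters of several widths, yielding a feature vector that a softmax layer maps to emotion classes.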
Funding: This work has been partially sponsored by the Hungarian National Scientific Fund under contract OTKA 129374, and by the Research & Development Operational Program for the project "Modernization and Improvement of Technical Infrastructure for Research and Development of J. Selye University in the Fields of Nanotechnology and Intelligent Space", ITMS 26210120042, co-funded by the European Regional Development Fund.
Abstract: Emotion detection from text is a challenging problem in text analytics. Opinion mining experts are focusing on developing emotion detection applications, which have received considerable attention from the online community, including users and business organizations, for collecting and interpreting public emotions. However, most existing work on emotion detection has used less efficient machine learning classifiers with limited datasets, resulting in performance degradation. To overcome this issue, this work evaluates the performance of different machine learning classifiers on a benchmark emotion dataset. The experimental results report the performance of the classifiers in terms of evaluation metrics such as precision, recall, and F-measure. Finally, the classifier with the best performance is recommended for emotion classification.
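The evaluation metrics named above are standard and easy to compute directly. A minimal per-class implementation on hypothetical labels (the emotion names and predictions below are made up for illustration):

```python
def prf(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold labels and classifier outputs
y_true = ["joy", "anger", "joy", "sad", "joy", "anger"]
y_pred = ["joy", "joy",   "joy", "sad", "anger", "anger"]
p, r, f = prf(y_true, y_pred, positive="joy")
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Averaging these per-class scores (macro or weighted) gives the single numbers usually used to rank classifiers.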
Abstract: Due to the lack of large-scale emotion databases, it is hard to obtain, through deep learning, improvements in multimodal emotion recognition comparable to those deep neural networks have achieved in other areas. We use transfer learning to improve performance with models pretrained on large-scale data: audio is encoded using deep speech recognition networks trained on 500 hours of speech, and video is encoded using convolutional neural networks trained on over 110,000 images. The extracted audio and visual features are fed into Long Short-Term Memory networks to train models for each modality. Logistic regression and an ensemble method are used for decision-level fusion. The experimental results indicate that 1) audio features extracted from deep speech recognition networks achieve better performance than handcrafted audio features; 2) visual emotion recognition obtains better performance than audio emotion recognition; and 3) the ensemble method outperforms logistic regression, and prior knowledge from the micro-F1 value further improves performance and robustness, achieving accuracies of 67.00% for "happy", 54.90% for "angry", and 51.69% for "sad".
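Decision-level fusion as described here combines per-modality predictions rather than raw features. The sketch below uses a simple weighted average of class probabilities as a stand-in for the paper's logistic-regression/ensemble fusion; the weights and probability values are hypothetical.

```python
def decision_fusion(audio_probs, visual_probs, w_audio=0.4, w_visual=0.6):
    """Late fusion: weighted average of per-modality class probabilities,
    then pick the class with the highest fused score."""
    fused = {c: w_audio * audio_probs[c] + w_visual * visual_probs[c]
             for c in audio_probs}
    return max(fused, key=fused.get)

# Hypothetical per-modality outputs for one clip
audio = {"happy": 0.2, "angry": 0.5, "sad": 0.3}
visual = {"happy": 0.6, "angry": 0.2, "sad": 0.2}
print(decision_fusion(audio, visual))  # happy
```

Weighting the visual modality more heavily mirrors the paper's finding that visual emotion recognition outperforms audio.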
Funding: This work is partially supported by the National Natural Science Foundation of China under Grant Nos. 61876205 and 61877013; the Ministry of Education Humanities and Social Science project under Grant Nos. 19YJAZH128 and 20YJAZH118; the Science and Technology Plan Project of Guangzhou under Grant No. 201804010433; and the Bidding Project of the Laboratory of Language Engineering and Computing under Grant No. LEC2017ZBKT001.
Abstract: Textual Emotion Analysis (TEA) aims to extract and analyze users' emotional states in texts. Various Deep Learning (DL) methods have developed rapidly and have proven successful in many fields such as audio, image, and natural language processing. This trend has drawn increasing numbers of researchers away from traditional machine learning to DL for their scientific research. In this paper, we provide an overview of TEA based on DL methods. After introducing a background for emotion analysis that includes the definition of emotion, emotion classification methods, and application domains of emotion analysis, we summarize DL technology and word/sentence representation learning methods. We then categorize existing TEA methods based on text structures and linguistic types: text-oriented monolingual methods, text-conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing emotion analysis challenges and future research trends. We hope that our survey will assist readers in understanding the relationship between TEA and DL methods while also improving TEA development.
Abstract: Nowadays, millions of users use social media systems every day. These services produce massive numbers of messages, which play a vital role in the social networking paradigm. An intelligent emotion learning system is therefore needed for detecting emotion within these messages; such a system could help in understanding users' feelings towards a particular discussion. This paper proposes a text-based emotion recognition approach that uses personal text data to recognize a user's current emotion. The proposed approach applies the Dominant Meaning Technique to recognize the user's emotion. The paper reports promising experimental results on the tested dataset based on the proposed algorithm.
Funding: Higher Education Society of Shaanxi Province, 2019 Higher Education Science Research Project (XGH19120: research on key technologies of a cloud-model evaluation system for smart teaching scenes); 2019 school-level Higher Education Science Research Project (GJY-2019-YB-20).
Abstract: At the beginning of 2020, COVID-19 broke out. Affected by the outbreak, universities had to carry out online teaching. Online learning provides students with full freedom and a personalized learning space, but it also brings problems such as weak teacher-student rapport and a lack of learning experience. To address these problems, this paper adopts a questionnaire survey, a controlled experiment, and behavioral modeling to study how teachers' emotional support behavior affects students' learning processes and learning emotions in an online learning environment. It finds that teachers' emotional support behavior is appealed for and desired by students, and that positive emotional support behavior can promote students' learning processes and improve their learning emotions.
Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region-based Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, a precision of 97.2%, and a recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies; the high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications.
Funding: Key Scientific Research Project of Henan Provincial Colleges and Universities, "Construction of an Innovation and Entrepreneurship Education Ecosystem Model in Colleges and Universities Based on Ecological Theory" (24B880048); Research and Practice Project on Education and Teaching Reform in Henan Provincial Colleges and Universities (Employment and Innovation and Entrepreneurship Education), "Construction and Practice of a '3+N' Practical Education System Based on Employment and Education Orientation" (2024SJGLX1083); Research and Practice Project on Teaching Reform in Higher Education in Henan Province, "Practical Exploration of the '3+3+X' Collaborative Education Model for Mental Health Education in Medical Schools" (2024SJGLX0142); Research and Practice Project on Education and Teaching Reform at Xinxiang Medical University, "Practical Exploration of Conflicts and Countermeasures in Medical Students' Internships, Postgraduate Entrance Exams, and Employment from the Perspective of the Conflict Between Work and Study" (2021-XYJG-98).
Abstract: STEAM (science, technology, engineering, arts, and mathematics) education aims to cultivate innovative talents with multidimensional literacy through interdisciplinary integration and innovative practice. However, a lack of student motivation has emerged as a key factor hindering its effectiveness. This study explores the integrated application of positive emotions and flow experience in STEAM education from the perspective of positive psychology. It systematically explains how these factors enhance learning motivation and promote knowledge internalization, proposing feasible pathways for instructional design, resource provision, environment creation, and team building. The study provides theoretical insights and practical guidance for transforming STEAM education in the new era.
Abstract: Saul Bellow's work mainly describes people who cannot find a standpoint in American society, who dangle between the ideal and the real while holding onto a belief in love. Seize the Day presents the several-hour experience of a middle-aged man who has lost his job, is estranged from his wife and children, and has been deserted by his father. Through the double methods of narration and recollection, the author deftly reveals how the individual is threatened by social prejudice and by the dominant position of money. He pays close attention to the conflict between emotion and rationality to show people's anxiety and contradiction in today's America.