Journal Articles
6 articles found
1. Evaluation of Scholar's Contribution to Team Based on Weighted Co-author Network
Authors: Xinmeng Zhang, Xinguang Li, Shengyi Jiang, Xia Li, Bolin Xie. 《国际计算机前沿大会会议论文集》, 2019, Issue 1, pp. 59-61 (3 pages).
The contributions of scientific researchers include both personal influence and talent-training achievements. In this paper, using 9,964 high-quality co-authored papers in English-teaching research drawn from the China citation database (1997-2016), we construct a weighted co-author network that incorporates multiple factors. A model is then proposed to measure each author's contribution to the research team by combining personal and network characteristics. The results reveal a variety of characteristics of the co-author networks in the English-teaching research field, including statistical properties, community features, and authors' contributions to their teams.
Keywords: social network analysis; co-author network; research team; academic contribution
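The contribution model itself is not spelled out in the abstract, but the general recipe (a weighted co-author graph combined with personal and network signals) can be sketched. Below is a minimal illustration using networkx; the productivity and weighted-degree signals and the 0.5/0.5 combination weights are assumptions for illustration, not the paper's actual model.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Toy records: each entry is the author list of one co-authored paper.
papers = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author B"],
    ["Author B", "Author D"],
    ["Author A", "Author C", "Author D"],
]

# Weighted co-author network: edge weight = number of jointly written papers.
G = nx.Graph()
for authors in papers:
    for u, v in combinations(authors, 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Personal signal: productivity (papers per author).
productivity = Counter(a for authors in papers for a in authors)
# Network signal: weighted degree within the co-author graph.
weighted_degree = dict(G.degree(weight="weight"))

# Illustrative contribution score: equal-weight combination of the two
# signals after normalization (the 0.5/0.5 weights are an assumption).
max_prod = max(productivity.values())
max_deg = max(weighted_degree.values())
contribution = {
    a: 0.5 * productivity[a] / max_prod
       + 0.5 * weighted_degree.get(a, 0) / max_deg
    for a in productivity
}

for author, score in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.3f}")
```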
2. A survey on deep learning for textual emotion analysis in social networks (cited 1 time)
Authors: Sancheng Peng, Lihong Cao, Yongmei Zhou, Zhouhao Ouyang, Aimin Yang, Xinguang Li, Weijia Ji, Shui Yu. 《Digital Communications and Networks》 (SCIE, CSCD), 2022, Issue 5, pp. 745-762 (18 pages).
Textual Emotion Analysis (TEA) aims to extract and analyze users' emotional states from text. Deep Learning (DL) methods have developed rapidly and have proven successful in many fields such as audio, image, and natural language processing; this trend has drawn a growing number of researchers from traditional machine learning to DL. In this paper, we provide an overview of TEA based on DL methods. After introducing the background of emotion analysis, including the definition of emotion, emotion classification methods, and application domains, we summarize DL techniques and word/sentence representation learning methods. We then categorize existing TEA methods by text structure and linguistic type: text-oriented monolingual methods, conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing the challenges of emotion analysis and future research trends. We hope this survey helps readers understand the relationship between TEA and DL methods and furthers the development of TEA.
Keywords: text; emotion analysis; deep learning; sentiment analysis; pre-training
3. Deep Broad Learning for Emotion Classification in Textual Conversations
Authors: Sancheng Peng, Rong Zeng, Hongzhan Liu, Lihong Cao, Guojun Wang, Jianguo Xie. 《Tsinghua Science and Technology》 (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 481-491 (11 pages).
Emotion classification in textual conversations focuses on classifying the emotion of each utterance in a conversation, and has become one of the most important tasks in natural language processing in recent years. It is challenging for machines because emotions rely heavily on textual context. To address this challenge, we propose a method for emotion classification in textual conversations, named DBL, that integrates the advantages of deep learning and broad learning. It aims to capture both local contextual information (i.e., utterance-level) within an utterance and global contextual information (i.e., speaker-level) across a conversation, based on a Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and broad learning. Extensive experiments on three public textual-conversation datasets show that context at both the utterance level and the speaker level consistently benefits emotion classification. In addition, the results show that the proposed method outperforms the baseline methods in weighted-average F1 on most of the test datasets.
Keywords: emotion classification; textual conversation; Convolutional Neural Network (CNN); Bidirectional Long Short-Term Memory (Bi-LSTM); broad learning
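As a rough illustration of the DBL architecture described above (utterance-level CNN features fed into a conversation-level Bi-LSTM), here is a PyTorch sketch. The layer sizes are arbitrary, and the broad-learning classifier is approximated by a single linear read-out, so this is a structural sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DBLSketch(nn.Module):
    """Utterance-level CNN + conversation-level Bi-LSTM + linear read-out."""

    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64,
                 hidden=128, n_emotions=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Local (utterance-level) context: 1-D convolution over word embeddings.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # Global (speaker/conversation-level) context: Bi-LSTM over utterances.
        self.bilstm = nn.LSTM(n_filters, hidden, batch_first=True,
                              bidirectional=True)
        # Stand-in for the broad-learning classifier: a single wide linear layer.
        self.out = nn.Linear(2 * hidden, n_emotions)

    def forward(self, conv_tokens):
        # conv_tokens: (batch, n_utterances, n_tokens) word-id tensor
        b, u, t = conv_tokens.shape
        x = self.emb(conv_tokens.view(b * u, t))       # (b*u, t, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (b*u, n_filters, t)
        x = x.max(dim=2).values                        # max-pool over tokens
        x = x.view(b, u, -1)                           # regroup into conversations
        ctx, _ = self.bilstm(x)                        # (b, u, 2 * hidden)
        return self.out(ctx)                           # per-utterance emotion logits

# Two toy conversations, five utterances each, twenty tokens per utterance.
logits = DBLSketch()(torch.randint(1, 10000, (2, 5, 20)))
print(logits.shape)  # torch.Size([2, 5, 7])
```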
4. CNN-Based Broad Learning for Cross-Domain Emotion Classification (cited 1 time)
Authors: Rong Zeng, Hongzhan Liu, Sancheng Peng, Lihong Cao, Aimin Yang, Chengqing Zong, Guodong Zhou. 《Tsinghua Science and Technology》 (SCIE, EI, CAS, CSCD), 2023, Issue 2, pp. 360-369 (10 pages).
Cross-domain emotion classification aims to leverage useful information from a source domain to help predict emotion polarity in a target domain in an unsupervised or semi-supervised manner. Due to domain discrepancy, an emotion classifier trained on the source domain may not work well on the target domain. Many researchers have focused on traditional cross-domain sentiment classification, which is coarse-grained, while fine-grained cross-domain emotion classification has rarely been addressed. In this paper, we propose a method, called Convolutional Neural Network (CNN)-based broad learning, for cross-domain emotion classification by combining the strengths of CNN and broad learning. We first use a CNN to extract domain-invariant and domain-specific features simultaneously, so as to train two more efficient classifiers with broad learning. Then, to take advantage of the two classifiers, we design a co-training model in which they boost each other. Finally, we conduct comparative experiments on four datasets to verify the effectiveness of the proposed method. The experimental results show that it improves the performance of emotion classification more effectively than the baseline methods.
Keywords: cross-domain; emotion classification; CNN; broad learning; classifier; co-training
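The co-training step in the abstract (two classifiers, one per feature view, pseudo-labeling target-domain examples for each other) can be sketched as follows. Random matrices stand in for the CNN-extracted domain-invariant and domain-specific features, and scikit-learn logistic regressions stand in for the broad-learning classifiers; the per-round sample budget and the number of rounds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for CNN-extracted features: one "domain-invariant" view and one
# "domain-specific" view per sample (random toy matrices here).
Xs_inv, Xs_spec = rng.normal(size=(200, 32)), rng.normal(size=(200, 32))
ys = rng.integers(0, 2, size=200)                       # labeled source domain
Xt_inv, Xt_spec = rng.normal(size=(300, 32)), rng.normal(size=(300, 32))  # unlabeled target

clf_inv = LogisticRegression(max_iter=1000)             # stand-in for BL classifier 1
clf_spec = LogisticRegression(max_iter=1000)            # stand-in for BL classifier 2

X1, X2, y = Xs_inv.copy(), Xs_spec.copy(), ys.copy()
remaining = np.arange(len(Xt_inv))                      # unlabeled target indices

for _ in range(3):                                      # a few co-training rounds
    clf_inv.fit(X1, y)
    clf_spec.fit(X2, y)
    if len(remaining) == 0:
        break
    # Each classifier nominates the target samples it is most confident about;
    # their pseudo-labels are added to the shared training pool.
    p1 = clf_inv.predict_proba(Xt_inv[remaining]).max(axis=1)
    p2 = clf_spec.predict_proba(Xt_spec[remaining]).max(axis=1)
    pick = np.union1d(np.argsort(-p1)[:20], np.argsort(-p2)[:20])
    idx = remaining[pick]
    pseudo = clf_inv.predict(Xt_inv[idx])
    X1 = np.vstack([X1, Xt_inv[idx]])
    X2 = np.vstack([X2, Xt_spec[idx]])
    y = np.concatenate([y, pseudo])
    remaining = np.setdiff1d(remaining, idx)

print("training pool after co-training:", len(y), "samples")
```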
5. A Classifier Using Online Bagging Ensemble Method for Big Data Stream Learning (cited 5 times)
Authors: Yanxia Lv, Sancheng Peng, Ying Yuan, Cong Wang, Pengfei Yin, Jiemin Liu, Cuirong Wang. 《Tsinghua Science and Technology》 (SCIE, EI, CAS, CSCD), 2019, Issue 4, pp. 379-388 (10 pages).
By combining multiple weak learners, ensemble learning can achieve better generalization than a single learner when classifying big data streams with concept drift. In this paper, we present an efficient classifier using the online bagging ensemble method for big data stream learning. The classifier introduces an efficient online resampling mechanism over the training instances and uses a robust coding method based on error-correcting output codes, in order to reduce correlations between the base classifiers and increase the diversity of the ensemble. A dynamic updating model based on classification performance is adopted to avoid unnecessary updates and improve learning efficiency. We also implement a parallel version of EoBag, which runs faster than the serial version with almost the same classification performance. Finally, we compare classification performance and resource usage with other state-of-the-art algorithms on artificial and real-world datasets. The results show that the proposed algorithm achieves better accuracy and more feasible resource usage for big data stream classification.
Keywords: big data stream; classification; online bagging; ensemble learning; concept drift
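The core of online bagging (in the Oza-Russell sense that the abstract builds on) is that each arriving instance is shown to each base learner k ~ Poisson(1) times. A minimal prequential sketch follows, using scikit-learn SGD learners as the weak learners; the error-correcting output codes and the dynamic updating model from the paper are omitted, and the synthetic stream is an assumption.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
ensemble = [SGDClassifier() for _ in range(5)]          # weak base learners

def learn_one(x, y):
    """Online bagging: each base learner sees the instance k ~ Poisson(1) times."""
    for clf in ensemble:
        for _ in range(rng.poisson(1.0)):
            clf.partial_fit(x.reshape(1, -1), [y], classes=classes)

def predict_one(x):
    """Majority vote over the base learners that have been trained so far."""
    votes = [int(clf.predict(x.reshape(1, -1))[0])
             for clf in ensemble if hasattr(clf, "classes_")]
    return int(round(np.mean(votes))) if votes else 0

# Simulated stream with prequential (test-then-train) evaluation.
correct = 0
for t in range(1000):
    x = rng.normal(size=4)
    y = int(x[0] + x[1] > 0)        # a simple synthetic concept
    correct += int(predict_one(x) == y)
    learn_one(x, y)

print("prequential accuracy:", correct / 1000)
```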
6. p-Norm Broad Learning for Negative Emotion Classification in Social Networks (cited 1 time)
Authors: Guanghao Chen, Sancheng Peng, Rong Zeng, Zhongwang Hu, Lihong Cao, Yongmei Zhou, Zhouhao Ouyang, Xiangyu Nie. 《Big Data Mining and Analytics》 (EI), 2022, Issue 3, pp. 245-256 (12 pages).
Negative emotion classification refers to the automatic classification of the negative emotions expressed in texts on social networks. Most existing methods are based on deep learning models, which face challenges such as complex structures and large numbers of hyperparameters. To meet these challenges, we propose a method for negative emotion classification that combines a Robustly Optimized BERT Pretraining Approach (RoBERTa) with p-norm Broad Learning (p-BL). This paper makes three main contributions. First, we fine-tune RoBERTa for the task of negative emotion classification and use the fine-tuned model to extract features from the original texts and generate sentence vectors. Second, we adopt p-BL to construct a classifier and use it to predict the negative emotions of the texts. Compared with deep learning models, p-BL has a simpler structure (only three layers) and fewer parameters to train; moreover, it can suppress the adverse effects of outliers and noise in the data by flexibly adjusting the value of p. Third, we conduct extensive experiments on public datasets, and the results show that the proposed method outperforms the baseline methods on the tested datasets.
Keywords: social networks; negative emotion; RoBERTa; broad learning; p-norm
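A condensed sketch of the pipeline described above: RoBERTa sentence vectors feed a broad-learning-style classifier built from random feature nodes, enhancement nodes, and a least-squares read-out. The ridge-regularized read-out is a stand-in for the paper's p-norm variant, and the texts, labels, and node counts are toy assumptions.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base").eval()

texts = ["I hate waiting in line", "What a wonderful day", "This is so unfair"]
labels = np.array([1, 0, 1])          # 1 = negative emotion, 0 = not negative

with torch.no_grad():
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    # Use the <s> token's hidden state as the sentence vector.
    X = enc(**batch).last_hidden_state[:, 0].numpy()

rng = np.random.default_rng(0)
n_feat, n_enh, lam = 64, 128, 1e-2
Wf = rng.normal(size=(X.shape[1], n_feat))
Wh = rng.normal(size=(n_feat, n_enh))
Z = np.tanh(X @ Wf)                   # feature nodes
H = np.tanh(Z @ Wh)                   # enhancement nodes
A = np.hstack([Z, H])
Y = np.eye(2)[labels]                 # one-hot targets
# Ridge-regularized least-squares read-out (stand-in for the p-norm variant).
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
print("train predictions:", (A @ W).argmax(axis=1))
```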