Journal Articles
2 articles found.
1. Towards Understanding Creative Language in Tweets
Authors: Linrui Zhang, Yisheng Zhou, Yang Yu, Dan Moldovan. Journal of Software Engineering and Applications, 2019, No. 11, pp. 447-459 (13 pages).
Abstract: Extracting fine-grained information from social media is traditionally a challenging task, since the language used in social media messages is usually informal, with creative genre-specific terminology and expressions. Handling this challenge so as to automatically understand the opinions people are communicating has become an active subject of research. In this paper, we aim to show that leveraging pre-learned knowledge can help neural network models understand the creative language in Tweets. To this end, we present a transfer learning model based on BERT. We fine-tuned the pre-trained BERT model and applied the customized model to two downstream tasks from SemEval-2018: the Irony Detection task and the Emoji Prediction task for Tweets. Our model achieves an F-score of 38.52 (ranked 1/49) on the Emoji Prediction task, and F-scores of 67.52 (ranked 2/43) and 51.35 (ranked 1/31) on Irony Detection subtasks A and B, respectively. The experimental results validate the effectiveness of our idea.
Keywords: Natural Language Processing, Deep Learning, Transfer Learning
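The entry above describes fine-tuning a pre-trained BERT model for Tweet classification. Below is a minimal sketch of that kind of pipeline using the HuggingFace transformers library; the checkpoint name, the binary irony labels, and the example tweets are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: fine-tuning a pre-trained BERT encoder with a new
# classification head on a tweet-level task (here, binary irony
# detection in the spirit of SemEval-2018 subtask A).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 2 labels: ironic / not ironic
)

# Hypothetical training examples; real data would come from the task corpus.
tweets = ["Oh great, another Monday...", "Congrats to the team on the win!"]
labels = torch.tensor([1, 0])  # 1 = ironic, 0 = not ironic

# One fine-tuning step: the pre-trained encoder and the freshly
# initialized classification head are updated jointly.
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
optimizer.zero_grad()
outputs = model(**batch, labels=labels)  # loss computed internally
outputs.loss.backward()
optimizer.step()
```

The same customized model can then be re-headed for multi-class tasks such as emoji prediction by changing `num_labels`, which is the usual way a single pre-trained encoder is transferred across downstream tasks.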
2. Multi-Task Learning for Semantic Relatedness and Textual Entailment
Authors: Linrui Zhang, Dan Moldovan. Journal of Software Engineering and Applications, 2019, No. 6, pp. 199-214 (16 pages).
Abstract: Recently, several deep learning models have been successfully proposed and applied to solve different Natural Language Processing (NLP) tasks. However, these models solve each problem with single-task supervised learning and do not consider the correlations between tasks. Based on this observation, we implemented a multi-task learning model that jointly learns two related NLP tasks simultaneously, and conducted experiments to evaluate whether learning these tasks jointly improves system performance compared with learning them individually. In addition, we compare our model with state-of-the-art learning models, including multi-task learning, transfer learning, unsupervised learning, and feature-based traditional machine learning models. This paper aims to 1) show the advantage of multi-task learning over single-task learning in training related NLP tasks, 2) illustrate the influence of various encoding structures on the proposed single- and multi-task learning models, and 3) compare the performance of multi-task learning with that of other learning models in the literature on the textual entailment and semantic relatedness tasks.
Keywords: Deep Learning, Multi-Task Learning, Text Understanding
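This abstract describes jointly learning textual entailment and semantic relatedness. A hedged PyTorch sketch of one common way to realize this follows: a shared sentence encoder with two task-specific heads whose losses are summed so that gradients from both tasks update the shared parameters. The LSTM encoder, layer sizes, and equal loss weighting are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch: shared-encoder multi-task learning over sentence pairs.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Shared sentence encoder reused by both tasks.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Task-specific heads over the concatenated pair representation.
        self.entailment_head = nn.Linear(2 * hidden_dim, 3)   # entail/neutral/contradict
        self.relatedness_head = nn.Linear(2 * hidden_dim, 1)  # relatedness score

    def encode(self, ids):
        _, (h, _) = self.encoder(self.embedding(ids))
        return h[-1]  # final hidden state as the sentence vector

    def forward(self, premise_ids, hypothesis_ids):
        pair = torch.cat([self.encode(premise_ids),
                          self.encode(hypothesis_ids)], dim=-1)
        return self.entailment_head(pair), self.relatedness_head(pair).squeeze(-1)

model = MultiTaskModel()
# Joint training step on dummy token ids: a classification loss and a
# regression loss are summed, so both tasks shape the shared encoder.
logits, score = model(torch.randint(0, 10000, (4, 12)),
                      torch.randint(0, 10000, (4, 12)))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (4,))) \
     + nn.MSELoss()(score, torch.rand(4) * 5)
loss.backward()
```

The single-task baselines the abstract compares against would correspond to training each head with its own separate encoder, which is what the joint loss above is meant to improve upon.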