Abstract
Text classification is a classic task in NLP. Most current text classification networks rely on RNNs, which suffer from short-term memory and therefore cannot classify long texts accurately. To address this, the language model and the classification network are decoupled, an NLP pre-trained model is applied to the text classification task, and the TextCGA network is proposed. The network uses the pre-trained model as its language model, exploiting the pre-trained model's strong semantic representation ability to encode the text. To mitigate the short-term memory problem of RNNs on long sequences, the CGA block is built from a convolution layer, an RNN layer, and a Self-Attention layer, which effectively handles long-sequence modeling. Multiple CGA blocks are placed in the network so that the model can capture text features from multiple receptive fields. Experimental results show that the TextCGA network using a pre-trained model achieves good text classification performance, generally improving accuracy by 1-2 percentage points over the compared methods in our experiments.
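As a rough illustration of the CGA block described in the abstract (convolution layer → RNN layer → Self-Attention layer, with several blocks of different receptive fields feeding a classifier), the following is a minimal numpy sketch. All dimensions, weight initializations, and function names here are hypothetical simplifications, not the paper's actual implementation; the RNN is reduced to a plain Elman cell and the attention to a single head.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Same-padding 1D convolution over the sequence axis.
    x: (seq_len, d_in), w: (kernel, d_in, d_out)."""
    k, d_in, d_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], d_out))
    for t in range(x.shape[0]):
        window = xp[t:t + k]                          # (k, d_in)
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1]))
    return np.tanh(out)

def rnn(x, w_xh, w_hh):
    """Minimal Elman RNN; returns the hidden state at every step."""
    h = np.zeros(w_hh.shape[0])
    hs = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_xh + h @ w_hh)
        hs.append(h)
    return np.stack(hs)

def self_attention(x):
    """Single-head scaled dot-product self-attention (Q = K = V = x)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def cga_block(x, kernel_size, d_hidden):
    """Conv -> RNN -> Self-Attention, as the abstract describes."""
    d_in = x.shape[1]
    w_conv = rng.normal(scale=0.1, size=(kernel_size, d_in, d_hidden))
    w_xh = rng.normal(scale=0.1, size=(d_hidden, d_hidden))
    w_hh = rng.normal(scale=0.1, size=(d_hidden, d_hidden))
    return self_attention(rnn(conv1d(x, w_conv), w_xh, w_hh))

# Toy input: 10 tokens with 16-dim vectors (standing in for the
# pre-trained language model's contextual representations).
tokens = rng.normal(size=(10, 16))
# Two CGA blocks with different kernel sizes = different receptive fields;
# mean-pool each and concatenate into one document vector for a classifier.
features = [cga_block(tokens, k, d_hidden=8).mean(axis=0) for k in (3, 5)]
doc_vector = np.concatenate(features)
print(doc_vector.shape)  # (16,)
```

The point of running several blocks with different kernel sizes is that each convolution width exposes the downstream RNN and attention layers to a different n-gram granularity of the text.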
Authors
YANG Wei-qi; DU Ye (School of Computer Science and Information Technology, Beijing Jiaotong University, Beijing 100044)
Source
Modern Computer (《现代计算机》), 2020, No. 12, pp. 52-57 (6 pages)