Abstract
Extreme multi-label text classification is a challenging task in natural language processing. The label data in this task follows a long-tailed distribution, under which models learn tail-label classification poorly, degrading overall classification performance. To address this problem, an extreme multi-label text classification method based on a balance function is proposed. First, the BERT pre-trained model is used for word embedding; the concatenated output of the pre-trained model's multiple encoder layers is then used as the text vector representation, capturing richer semantic information and accelerating model convergence. Finally, a balance function assigns different attenuation weights to the training losses of the predicted labels, improving the method's ability to learn tail-label classification. Experimental results on the Eurlex-4K and Wiki10-31K datasets show that the method reaches 86.95%, 74.12%, and 61.43% on the evaluation metrics P@1, P@3, and P@5 on Eurlex-4K, and 88.57%, 77.46%, and 67.90% on Wiki10-31K.
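The abstract's balance function reweights per-label training losses so that tail labels are not drowned out by frequent head labels. The paper's exact formula is not reproduced here; the sketch below uses a hypothetical focal-style attenuation on a binary cross-entropy loss, where `gamma_pos` and `gamma_neg` are assumed hyperparameters, not the authors' actual parameterization:

```python
import numpy as np

def balanced_bce_loss(probs, targets, gamma_pos=1.0, gamma_neg=2.0, eps=1e-12):
    """Illustrative focal-style balance function (an assumption, not the
    paper's formula): confidently classified labels receive a small
    attenuation weight, so the loss of hard, rarely seen tail labels
    dominates the gradient.

    probs   -- predicted per-label probabilities, shape (num_labels,)
    targets -- binary ground-truth label vector, same shape
    """
    probs = np.clip(probs, eps, 1 - eps)
    # Positive labels: weight (1 - p)^gamma_pos shrinks as p -> 1.
    pos_loss = targets * (1 - probs) ** gamma_pos * -np.log(probs)
    # Negative labels: weight p^gamma_neg shrinks as p -> 0.
    neg_loss = (1 - targets) * probs ** gamma_neg * -np.log(1 - probs)
    return float((pos_loss + neg_loss).mean())

# A well-predicted positive label contributes far less loss than a
# mispredicted one, so training focuses on labels the model still gets wrong.
easy = balanced_bce_loss(np.array([0.9]), np.array([1.0]))
hard = balanced_bce_loss(np.array([0.1]), np.array([1.0]))
```

In this sketch the attenuation weight decays with the model's confidence rather than with label frequency directly; head labels, being easier to fit, end up down-weighted as a side effect, which is the qualitative behavior the abstract describes.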
Authors
CHEN Zhaohong; HONG Zhiyong; YU Wenhua; ZHANG Xin (Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, Guangdong 529020, China)
Source
Computer Engineering and Applications (《计算机工程与应用》)
CSCD; Peking University Core Journal
2024, No. 4, pp. 163-172 (10 pages)
Funding
Wuyi University Hong Kong-Macao Joint Research and Development Fund (2019WGALH21)
Basic and Applied Basic Research Foundation of Guangdong Province (2020A1515011468)
Characteristic Innovation Project of Guangdong Ordinary Universities (2019KTSCX189)
Keywords
natural language processing (NLP)
extreme multi-label text classification
BERT
balance function
deep learning