
Unbalanced Text Classification Based on Smooth Sampling and Improved Loss
Abstract: When data are imbalanced, text classification models tend to misclassify minority classes as majority classes. This paper proposes a smooth sampling method at the sampling level and, at the loss-function level, improves cross-entropy loss and label smoothing according to the imbalanced distribution. Experiments on the Fudan text corpus show that the improvement at each level outperforms the baseline model. When the sampling and loss-function improvements are combined, the TextCNN, BiLSTM+Attention, TextRCNN, and HAN models gain 4.17%, 5.13%, 5.06%, and 6.21% in macro-averaged F1, and 6.56%, 3.03%, 3.92%, and 5.32% in G-mean, respectively, effectively addressing text classification under imbalanced data.
Authors: LIANG Jianli (梁健力); SHANG Hao (商豪) (School of Science, Hubei Univ. of Tech., Wuhan 430068, China)
Source: Journal of Hubei University of Technology, 2023, No. 2, pp. 33-39, 73 (8 pages)
Keywords: text classification; imbalance ratio; smooth sampling; loss function
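The abstract names the two ingredients (smoothed sampling and an imbalance-aware, label-smoothed cross entropy) but does not spell out their formulas. A minimal sketch under assumed conventions — power-law flattening of class sampling probabilities (p_c ∝ n_c^α) and inverse-frequency weighting of a label-smoothed cross entropy — could look like the following; the function names, the α exponent, and the weighting scheme are illustrative assumptions, not the paper's exact method:

```python
import math

def smoothed_sampling_probs(class_counts, alpha=0.5):
    """Flatten a skewed class distribution: p_c proportional to n_c**alpha.
    alpha=1 keeps the original imbalance; alpha=0 gives uniform sampling."""
    powered = [n ** alpha for n in class_counts]
    total = sum(powered)
    return [p / total for p in powered]

def weighted_label_smoothing_ce(logits, target, class_counts, eps=0.1):
    """Label-smoothed cross entropy re-weighted by inverse class frequency,
    so that errors on minority classes cost more."""
    k = len(logits)
    # numerically stable log-softmax
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    log_probs = [math.log(e / s) for e in exps]
    # smoothed target distribution: (1-eps) on the true class, eps spread uniformly
    q = [eps / k + (1.0 - eps) * (1.0 if c == target else 0.0) for c in range(k)]
    # inverse-frequency weight for the true class, normalized to mean 1
    inv = [1.0 / n for n in class_counts]
    w = inv[target] * k / sum(inv)
    return -w * sum(qc * lp for qc, lp in zip(q, log_probs))
```

With class counts [100, 10] and alpha=0.5, the minority class's sampling probability rises above its raw 10/110 frequency, and for identical logits the loss on a minority-class example exceeds that on a majority-class one, which matches the abstract's goal of pushing models away from always predicting majority classes.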