Funding: Supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024), the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2020040, ZDYF2021GXJS003), the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012), the Hainan Provincial Natural Science Foundation of China (Grant Nos. 620MS021, 621QN211), and the Science and Technology Development Center of the Ministry of Education Industry-University-Research Innovation Fund (2021JQR017).
Abstract: In Multi-Label Text Classification (MLTC), the two central challenges are extracting rich semantic features from text and modeling the relationships between labels. Much work on semantic feature extraction turns to external knowledge to strengthen the model's understanding of the text, while overlooking intrinsic textual cues such as label statistical features, even though these endogenous signals naturally align with the classification task. In this paper, to exploit this intrinsic knowledge, we introduce a novel Gate-Attention mechanism that integrates statistical features derived from the text itself into the semantic representation, improving the model's capacity to understand and represent the data. To mine label correlations, we further propose a Dual-end enhancement mechanism that mitigates the information loss and erroneous transmission inherent in conventional long short-term memory propagation. Extensive experiments on the AAPD and RCV1-2 datasets confirm the effectiveness of both the Gate-Attention mechanism and the Dual-end enhancement mechanism, and the final model consistently outperforms the baseline models. These findings underscore the importance of considering not only external knowledge but also the intrinsic properties of the text when building MLTC models.
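As a rough illustration of the gate-style fusion described above (not the authors' implementation), the sketch below gates statistical label features into a semantic text representation; the module name, dimensions, and the tanh/sigmoid choices are assumptions.

```python
# Hypothetical sketch of gating statistical label features into semantic text
# features; names and dimensions are illustrative, not the paper's design.
import torch
import torch.nn as nn

class GateAttentionFusion(nn.Module):
    def __init__(self, sem_dim: int, stat_dim: int):
        super().__init__()
        # Project statistical features (e.g. label frequency / co-occurrence
        # counts) into the semantic space before fusion.
        self.stat_proj = nn.Linear(stat_dim, sem_dim)
        # The gate decides, per dimension, how much statistical signal to mix in.
        self.gate = nn.Linear(2 * sem_dim, sem_dim)

    def forward(self, sem_feat: torch.Tensor, stat_feat: torch.Tensor) -> torch.Tensor:
        stat = torch.tanh(self.stat_proj(stat_feat))
        g = torch.sigmoid(self.gate(torch.cat([sem_feat, stat], dim=-1)))
        # Gated mixture: keep semantic features where the gate is low,
        # inject statistical features where it is high.
        return g * stat + (1.0 - g) * sem_feat

if __name__ == "__main__":
    fusion = GateAttentionFusion(sem_dim=768, stat_dim=54)  # 54 labels, as in AAPD
    fused = fusion(torch.randn(8, 768), torch.randn(8, 54))
    print(fused.shape)  # torch.Size([8, 768])
```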
Abstract: Deep-level document classification based on the Chinese Library Classification (《中国图书馆分类法》, hereafter CLC) involves two classic natural language processing problems: Extreme Multi-label Text Classification (XMC) and Hierarchical Text Classification (HTC). However, existing CLC-based classification studies generally treat it as an ordinary text classification problem; because they do not fully exploit the core characteristics of the task, their results on deep-level classification are generally unsatisfactory or even infeasible. In contrast with comparable work, this paper builds on an in-depth analysis of the characteristics and difficulties of CLC-based document classification, examines deep-level CLC classification and candidate solutions from both the XMC and HTC perspectives, and adapts and extends them for this scenario, improving not only classification accuracy but also the depth and breadth of the classes covered. The proposed model first uses a lightweight deep learning model suited to XMC to extract semantic features of the text as the primary basis for classification; then, for the HTC aspect of CLC classification, it uses a Learning to Rank (LTR) framework to incorporate multiple auxiliary features, including hierarchical structure information, thereby mining as much as possible of the information and knowledge embedded in the text semantics and the classification scheme. The model combines the strong semantic understanding of deep learning with the interpretability of classical machine learning, is readily extensible so that expert-designed features can later be added, and is lightweight enough to handle tens of thousands of class labels with limited computing resources, laying a solid foundation for full-depth classification based on the CLC.
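A minimal sketch of the two-stage idea described above, assuming a pointwise learning-to-rank formulation: a first-stage text model scores candidate classes, and a second-stage model re-scores each candidate with hierarchy-aware features. The feature set, the candidate_features helper, and the use of scikit-learn's GradientBoostingClassifier are illustrative choices, not the paper's implementation.

```python
# Two-stage sketch: stage one proposes candidate classes with a semantic score,
# stage two re-ranks them with hierarchy-aware features (pointwise LTR).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def candidate_features(text_score: float, label_depth: int, parent_predicted: bool) -> list:
    # One (document, candidate class) pair: semantic score from stage one,
    # depth of the class in the hierarchy, and whether its parent class
    # was also proposed by stage one.
    return [text_score, label_depth, float(parent_predicted)]

# Toy training pairs: (features, 1 if the candidate is a true class else 0).
X = np.array([
    candidate_features(0.92, 3, True),
    candidate_features(0.40, 5, False),
    candidate_features(0.75, 4, True),
    candidate_features(0.30, 2, False),
])
y = np.array([1, 0, 1, 0])

ranker = GradientBoostingClassifier().fit(X, y)
# At inference time, stage-one candidates are re-ranked by this score.
print(ranker.predict_proba([candidate_features(0.80, 4, True)])[0, 1])
```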
Abstract: Text classification assigns a document to one or more classes or categories according to its content, making it easier for users to find relevant data. Because text is often polysemous, multi-label classification can handle text data more comprehensively, and multi-label text classification has become a key problem in data mining. To improve its performance, semantic analysis is embedded into the classification model to carry out label correlation analysis, and the structure, objective function and optimization strategy of the model are designed accordingly. A convolutional neural network (CNN) model based on semantic embedding is then introduced. Finally, the Zhihu dataset is used for evaluation. The results show that the model outperforms related work in terms of recall and area under the curve (AUC).
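A minimal sketch of a multi-label TextCNN of the kind described above, assuming sigmoid outputs with a binary cross-entropy loss per label; the vocabulary size, label count, and filter settings are illustrative rather than the paper's configuration.

```python
# Multi-label TextCNN sketch: word embeddings, parallel convolutions over
# n-gram windows, and one independent sigmoid decision per label.
import torch
import torch.nn as nn

class MultiLabelTextCNN(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, num_labels: int):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Convolutions over 2/3/4-gram windows capture local semantics.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 100, kernel_size=k) for k in (2, 3, 4)]
        )
        self.fc = nn.Linear(300, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.emb(token_ids).transpose(1, 2)                 # (batch, emb, seq)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))                # raw logits per label

model = MultiLabelTextCNN(vocab_size=50000, emb_dim=300, num_labels=2000)
loss_fn = nn.BCEWithLogitsLoss()                                # one binary decision per label
logits = model(torch.randint(0, 50000, (4, 64)))
loss = loss_fn(logits, torch.zeros(4, 2000))
```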
Funding: Supported by the NSFC (Grant Nos. 61772281, 61703212, 61602254), the Jiangsu Province Natural Science Foundation (Grant No. BK2160968), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET).
Abstract: Multi-label text categorization is the problem of categorizing text with a multi-label learning algorithm. Text classification for Asian languages such as Chinese differs from work on languages such as English, which use spaces to separate words. Before classifying text, a word segmentation step is needed to convert the continuous text into a list of separate words, which is then converted into a vector of a certain dimension. Multi-label learning algorithms can generally be divided into two categories: problem transformation methods and adapted algorithms. This work uses customers' comments about hotels as the training data set, with labels covering all aspects of the hotel evaluation, and aims to analyze and compare the performance of various multi-label learning algorithms on Chinese text classification. The experiments cover three basic problem transformation methods, Support Vector Machine, Random Forest and k-Nearest-Neighbor, and one adapted algorithm, the Convolutional Neural Network. The experimental results show that the Support Vector Machine performs best.
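As a hedged sketch of the problem-transformation route on Chinese text (toy data, not the paper's dataset), the example below segments reviews with jieba, builds TF-IDF features, and trains one linear SVM per label via binary relevance.

```python
# Binary-relevance sketch for Chinese multi-label classification:
# jieba segmentation -> TF-IDF -> one linear SVM per label.
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

texts = ["房间很干净，服务也很好", "位置方便，但是隔音太差"]
labels = [["cleanliness", "service"], ["location", "noise"]]

# Word segmentation turns continuous Chinese text into separate words.
segmented = [" ".join(jieba.cut(t)) for t in texts]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one binary column per label

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(segmented)

# One SVM per label: the problem-transformation view of multi-label learning.
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)

query = " ".join(jieba.cut("服务态度好，房间也干净"))
pred = clf.predict(vectorizer.transform([query]))
print(mlb.inverse_transform(pred))
```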
Funding: Supported by the National Natural Science Foundation of China (Nos. 41971349, 41930107, 42090010 and 41501434) and the National Key Research and Development Program of China (Nos. 2017YFB0503704 and 2018YFC0809806).
Abstract: Without an explicit description of map application themes, it is difficult for users to discover the map resources they need among the massive number of online Web Map Services (WMS). However, metadata-based map application theme extraction is a challenging multi-label text classification task because of limited training samples, mixed vocabularies, and the variable length and arbitrary content of text fields. In this paper, we propose a novel multi-label text classification method, Text GCN-SW-KNN, based on geographic semantics and collaborative training to improve classification accuracy. The semi-supervised collaborative training adopts two base models: a modified Text Graph Convolutional Network (Text GCN) that utilizes the Semantic Web, named Text GCN-SW, and the widely used Multi-Label K-Nearest Neighbor (ML-KNN). Text GCN-SW improves on Text GCN by adjusting the adjacency matrix of the heterogeneous word-document graph with the shortest semantic distances between themes and words in the metadata text; the distances are calculated with the Semantic Web of Earth and Environmental Terminology (SWEET) and WordNet dictionaries. Experiments on both WMS and layer metadata show that the proposed methods achieve higher F1-score and accuracy than state-of-the-art baselines, and demonstrate better stability across repeated experiments and greater robustness to smaller training sets. Text GCN-SW-KNN can be extended to other multi-label text classification scenarios, better supporting metadata enhancement and geospatial resource discovery in the Earth Science domain.
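The adjacency-adjustment idea can be illustrated roughly as follows. This sketch uses WordNet path similarity as a stand-in for the SWEET+WordNet shortest semantic distance, and the combination rule (element-wise maximum) is an assumption, not the paper's formula.

```python
# Illustrative re-weighting of theme-word edges in a heterogeneous graph:
# edges are boosted when the two terms are semantically close.
import numpy as np
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def semantic_weight(theme: str, word: str) -> float:
    """Path-similarity score in [0, 1]; 0 if either term is unknown to WordNet."""
    s1, s2 = wordnet.synsets(theme), wordnet.synsets(word)
    if not s1 or not s2:
        return 0.0
    return max((a.path_similarity(b) or 0.0) for a in s1 for b in s2)

themes = ["ocean", "transportation"]
words = ["sea", "road", "bathymetry"]

# Start from a plain co-occurrence / TF-IDF adjacency block (toy values) and
# mix in semantic similarity so related theme-word edges are strengthened.
adjacency = np.array([[0.2, 0.0, 0.4],
                      [0.0, 0.3, 0.0]])
semantic = np.array([[semantic_weight(t, w) for w in words] for t in themes])
adjusted = np.maximum(adjacency, semantic)   # one simple way to combine them
print(adjusted.round(2))
```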
Abstract: Extreme multi-label text classification (XMTC) is the challenging task of finding, within an extremely large and complex label set, the labels most relevant to a text sample. Deep learning methods based on Transformer models have achieved great success on XMTC. However, existing methods do not fully exploit the strengths of the Transformer: they overlook the fine-grained local semantic information present at different granularities of the text, and the latent association between labels and text has not been robustly established and used. To address this, we propose SemFA (An Extreme Multi-Label Text Classification Model Based on Semantic Features and Association-Attention). In SemFA, the top-layer outputs of multiple encoders are first concatenated as global features. A convolutional neural network then extracts local features from the shallow-layer vectors of the encoders. Combining the rich global information with fine-grained local information at different granularities yields richer and more accurate semantic features. Finally, an association-attention mechanism establishes the latent association between label features and text features, and an association loss is introduced to continually optimize the model on that association. Experimental results on the two public datasets Eurlex-4K and Wiki10-31K show that SemFA outperforms most existing XMTC models, effectively fusing semantic features and association attention and improving overall classification performance.
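A rough sketch of the structure described above (not the authors' released code): a CNN extracts local features from a shallow encoder layer, the top-layer representation supplies global features, and label embeddings are scored against the fused text feature as a simple stand-in for the association-attention. Pooling choices, dimensions, and the fusion layer are assumptions.

```python
# SemFA-style sketch: fuse global (top-layer) and local (CNN over shallow-layer)
# features, then score every label embedding against the fused text feature.
import torch
import torch.nn as nn

class SemFASketch(nn.Module):
    def __init__(self, hidden: int = 768, num_labels: int = 4000):
        super().__init__()
        self.local_cnn = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.label_emb = nn.Embedding(num_labels, hidden)     # label feature vectors

    def forward(self, shallow: torch.Tensor, top: torch.Tensor) -> torch.Tensor:
        # shallow / top: (batch, seq, hidden) hidden states from a multi-layer encoder.
        local = torch.relu(self.local_cnn(shallow.transpose(1, 2))).mean(dim=2)
        global_ = top.mean(dim=1)
        text_feat = self.fuse(torch.cat([global_, local], dim=-1))   # (batch, hidden)
        # Association scoring: every label feature against the fused text feature.
        return text_feat @ self.label_emb.weight.t()                 # (batch, num_labels)

model = SemFASketch()
scores = model(torch.randn(2, 128, 768), torch.randn(2, 128, 768))
print(scores.shape)  # torch.Size([2, 4000])
```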