One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that address the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP in the last two decades has been devoted to finding and optimizing solutions to this problem, effectively to feature selection in NLP. This paper looks at the development of these various techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey we trace the development of some of the most popular of these techniques from a mathematical as well as a data-structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
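As a concrete illustration of the semantic-space idea in this survey, the following sketch uses hypothetical, hand-picked vectors (rather than vectors learned by Word2Vec, GloVe, or BERT) to show how cosine similarity in a dense vector space captures the intuition of the distributional hypothesis: words that occur in similar contexts end up close together.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings; real models learn hundreds of
# dimensions from corpus co-occurrence statistics.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.04]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: similar contexts
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: dissimilar contexts
```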
Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward, NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) a pilot stage, to define the scheme (described in prior work); and 2) an adjudication stage, to normalize the graphing model (the focus of this paper).
Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme.
Findings: Applying NLPCONTRIBUTIONGRAPH to the 50 articles resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1 score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that the finer the granularity of the information, the greater the variance in annotation decisions.
Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by intra-annotator consensus only: a single annotator first annotated the data to propose the initial scheme, and the same annotator then re-annotated the data to normalize the annotations in an adjudication stage. The expected goal of this work, however, is a standardized retrospective model for capturing NLP contributions from scholarly articles; this would entail a larger initiative enlisting multiple annotators to accommodate different worldviews into a “single” set of structures and relationships as the final scheme. Given that the initial scheme is only now being proposed, and given the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited. Nevertheless, the model proposed here remains limited because it does not incorporate multiple annotator worldviews; addressing this is planned as future work to produce a robust model.
Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks.
Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme for annotating research contributions from NLP articles and integrating them into a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations of the two-stage annotation tasks offer insights into task difficulty.
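The intra-annotation agreement figures reported above are F1 scores between the pilot-stage and adjudication-stage annotations. Below is a minimal sketch of how such an agreement score can be computed over two sets of annotated items (sentences, phrases, or triples), assuming exact-match comparison; the paper's actual matching criteria may differ.

```python
def agreement_f1(stage1, stage2):
    """F1 agreement between two annotation passes, treating stage1 as the reference."""
    stage1, stage2 = set(stage1), set(stage2)
    overlap = len(stage1 & stage2)
    if overlap == 0:
        return 0.0
    precision = overlap / len(stage2)
    recall = overlap / len(stage1)
    return 2 * precision * recall / (precision + recall)

# Toy example with contribution triples (subject, predicate, object).
pilot = {("model", "achieves", "state of the art"), ("approach", "uses", "BiLSTM")}
adjudicated = {("model", "achieves", "state of the art"), ("approach", "uses", "BiLSTM-CRF")}
print(agreement_f1(pilot, adjudicated))  # 0.5
```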
Multimodal data processing is an important research area: combining text, images, and other sources of information can improve model performance. However, owing to the heterogeneity of different modalities and the challenge of fusing their information, designing an effective multimodal classification model remains difficult. This paper proposes a new multimodal classification model, MCM-ICE, which addresses the challenges of feature representation and feature fusion through a joint strategy of independent encoding and collaborative encoding. MCM-ICE is evaluated on the Fashion-Gen and Hateful Memes Challenge datasets, and the results show that it outperforms existing state-of-the-art methods on both tasks. The paper also examines how different choices of output vectors from the Transformer in the collaborative-encoding module affect the results, finding that using the [CLS] vector together with the mean-pooled vector of the remaining (non-[CLS]) tokens yields the best results. Ablation studies and exploratory analyses support the effectiveness of MCM-ICE on multimodal classification tasks.
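A minimal sketch (using the Hugging Face transformers API, not the authors' released code) of the two pooling choices compared above: the [CLS] vector versus mean pooling over the remaining token vectors of the Transformer's last layer. The checkpoint name, example sentence, and concatenation step are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Any BERT-style encoder works here; the checkpoint name is illustrative.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("a shirt with a floral print", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (1, seq_len, hidden)

cls_vec = hidden[:, 0, :]                             # the [CLS] vector
mean_vec = hidden[:, 1:, :].mean(dim=1)               # mean pooling without [CLS]
                                                      # (for brevity, [SEP]/padding are not masked out)
fused = torch.cat([cls_vec, mean_vec], dim=-1)        # one way to combine both representations
print(fused.shape)                                    # (1, 2 * hidden)
```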
Abstract Meaning Representation (AMR) is a deep, sentence-level semantic representation that abstracts the semantic information of a sentence into a directed acyclic graph of concept nodes and relations. Compared with shallower semantic representations such as semantic role labeling and semantic dependency parsing, AMR captures deep semantic information well and is therefore widely used in downstream tasks such as information extraction, question answering, and dialogue systems. AMR parsing converts natural language into an AMR graph. Although most concept nodes and relations in an AMR graph align fairly clearly with words in the sentence, the original English AMR corpora do not provide explicit alignment information. To overcome the obstacle that missing alignments pose for AMR parsing and for applying AMR in downstream tasks, Li et al. [14] proposed and annotated a Chinese AMR corpus with concept and relation alignments. However, existing AMR parsing methods cannot make good use of alignment information during parsing, nor generate it. This paper therefore proposes, for the first time, an AMR parsing method that both exploits and generates alignment information, comprising two stages: concept prediction and relation prediction. The proposed method is highly flexible and extensible. Experimental results show that it achieves Align Smatch scores of 77.6 (+10.6) on the public CAMR 2.0 dataset and 70.7 (+8.5) on the CAMRP 2022 blind test set, surpassing previous sequence-to-sequence approaches. The paper also analyzes parsing performance and fine-grained metrics in detail and discusses directions for improvement. The code and model parameters have been open-sourced at https://github.com/pkunlp-icler/Two-Stage-CAMRP for reproduction and reference.
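A schematic sketch of the two-stage decomposition described above, i.e. concept prediction followed by relation prediction; the lexicon lookup and hard-coded relations below are hypothetical placeholders standing in for the trained Two-Stage-CAMRP models, shown only to make the pipeline shape concrete.

```python
from typing import List, Tuple

def predict_concepts(tokens: List[str]) -> List[Tuple[int, str]]:
    """Stage 1 (placeholder): tag token spans with aligned AMR concepts.
    Returning (token_index, concept) pairs makes the alignment explicit."""
    lexicon = {"want": "want-01", "boy": "boy", "go": "go-02"}
    return [(i, lexicon[t]) for i, t in enumerate(tokens) if t in lexicon]

def predict_relations(concepts: List[Tuple[int, str]]) -> List[Tuple[str, str, str]]:
    """Stage 2 (placeholder): label the relation between concept pairs.
    A real model scores all concept pairs with a neural classifier."""
    return [("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"), ("go-02", ":ARG0", "boy")]

tokens = "the boy want to go".split()
concepts = predict_concepts(tokens)     # aligned concept nodes
graph = predict_relations(concepts)     # labeled edges forming the AMR graph
print(concepts)
print(graph)
```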
In information retrieval, quantum interference theory has been applied to core problems such as document relevance and order effects, with the aim of modeling the quantum-like interference phenomena arising from user cognition. Starting from the needs of language understanding, this paper uses the mathematical tools of quantum theory to analyze the semantic evolution that occurs during semantic composition, and proposes a Quantum Interference Based Duet-Feature Text Representation Model (QDTM). The model takes the reduced density matrix as the core component of its language representation, effectively modeling dimension-level semantic interference information. On this basis, it builds a model structure that captures both global and local feature information, meeting the need for semantic features at different granularities in language understanding. Experiments on text classification and question answering datasets show that QDTM outperforms quantum-inspired language models and neural-network text matching models.
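A minimal numpy sketch of the density-matrix idea underlying quantum-inspired representations like QDTM: each word vector is treated as a normalized state, and the text is represented as a weighted mixture of the outer products of those states, with interference-style information carried in the off-diagonal entries. The vectors and weights below are hypothetical, and QDTM's actual construction of the reduced density matrix may differ.

```python
import numpy as np

def density_matrix(word_vectors, weights):
    """rho = sum_i p_i |v_i><v_i| over L2-normalized word vectors."""
    dim = word_vectors.shape[1]
    rho = np.zeros((dim, dim))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # probabilities must sum to 1
    for p, v in zip(weights, word_vectors):
        v = v / np.linalg.norm(v)                  # unit-norm state vector
        rho += p * np.outer(v, v)                  # projector weighted by p
    return rho

# Hypothetical 4-d embeddings for a 3-word text, weighted uniformly.
vectors = np.random.default_rng(0).normal(size=(3, 4))
rho = density_matrix(vectors, [1, 1, 1])
print(np.trace(rho))    # ~1.0: a valid density matrix has unit trace
print(rho[0, 1])        # off-diagonal entries carry interference-like information
```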
This work first uses the strong contextual understanding of the bidirectional encoder representations from transformers (BERT) model to extract deep semantic features from data-law texts; it then introduces a fine-grained feature extraction layer that, via an attention mechanism, focuses on the parts of the text most relevant to data-law question answering; finally, the model is trained and evaluated on the collected legal question-answering dataset. The results show that, compared with several traditional single models, the proposed model improves on key metrics such as accuracy, precision, recall, and F1 score, indicating that the system can understand and respond to complex data-law questions more effectively and provide higher-quality question-answering services for data-law professionals and the general public.
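A hedged PyTorch sketch of the architecture outlined above: a BERT encoder followed by an additional attention layer that re-weights token features before classification. The checkpoint name, hidden sizes, and number of answer categories are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertAttentionClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-chinese", num_labels=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.attn_score = nn.Linear(hidden, 1)     # fine-grained attention over tokens
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        scores = self.attn_score(states).squeeze(-1)         # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1)              # focus on question-relevant spans
        pooled = torch.einsum("bs,bsh->bh", weights, states) # attention-weighted pooling
        return self.classifier(pooled)                       # logits over answer categories
```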
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than tasks such as dependency parsing, which require more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the conversation's progress, or comparing impacts. An ensemble of pre-trained language models is used here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation's progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, with hyperparameter tuning carried out for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
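A minimal sketch of the ensembling step: per-model class probabilities for each conversation sentence are averaged and the macro F1 is computed with scikit-learn. The probability arrays are stand-ins for the outputs of the fine-tuned BERT, RoBERTa, GPT, DistilBERT and XLNet models; the paper's exact combination rule may differ.

```python
import numpy as np
from sklearn.metrics import f1_score

LABELS = ["information", "question", "directive", "commission"]

# Hypothetical softmax outputs of three fine-tuned models over 4 sentences x 4 classes.
model_probs = [
    np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.2, 0.6, 0.1], [0.3, 0.3, 0.2, 0.2]]),
    np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1],
              [0.2, 0.1, 0.5, 0.2], [0.1, 0.2, 0.2, 0.5]]),
    np.array([[0.8, 0.1, 0.05, 0.05], [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.1, 0.7, 0.1], [0.2, 0.1, 0.2, 0.5]]),
]

ensemble = np.mean(model_probs, axis=0)     # average the class probabilities
predictions = ensemble.argmax(axis=1)
gold = np.array([0, 1, 2, 3])               # hypothetical gold labels
print([LABELS[i] for i in predictions])
print(f1_score(gold, predictions, average="macro"))
```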
Funding: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.